
Showing papers by "University of California, Santa Cruz published in 2008"


Journal ArticleDOI
17 Jan 2008-Nature
TL;DR: Past episodes of greenhouse warming provide insight into the coupling of climate and the carbon cycle and thus may help to predict the consequences of unabated carbon emissions in the future.
Abstract: Past episodes of greenhouse warming provide insight into the coupling of climate and the carbon cycle and thus may help to predict the consequences of unabated carbon emissions in the future.

2,771 citations


Journal ArticleDOI
TL;DR: A nanopore-based device provides single-molecule detection and analytical capabilities, achieved by electrophoretically driving molecules in solution through a nano-scale pore; this unique analytical capability makes inexpensive, rapid DNA sequencing a possibility.
Abstract: A nanopore-based device provides single-molecule detection and analytical capabilities that are achieved by electrophoretically driving molecules in solution through a nano-scale pore. The nanopore provides a highly confined space within which single nucleic acid polymers can be analyzed at high throughput by one of a variety of means, and the perfect processivity that can be enforced in a narrow pore ensures that the native order of the nucleobases in a polynucleotide is reflected in the sequence of signals that is detected. Kilobase length polymers (single-stranded genomic DNA or RNA) or small molecules (e.g., nucleosides) can be identified and characterized without amplification or labeling, a unique analytical capability that makes inexpensive, rapid DNA sequencing a possibility. Further research and development to overcome current challenges to nanopore identification of each successive nucleotide in a DNA strand offers the prospect of 'third generation' instruments that will sequence a diploid mammalian genome for ∼$1,000 in ∼24 h.

2,512 citations


Journal ArticleDOI
TL;DR: In this article, the authors test meta-analytically the three most studied mediators: contact reduces prejudice by enhancing knowledge about the outgroup, reducing anxiety about intergroup contact, and increasing empathy and perspective taking.
Abstract: Recent years have witnessed a renewal of interest in intergroup contact theory. A meta-analysis of more than 500 studies established the theory's basic contention that intergroup contact typically reduces prejudices of many types. This paper addresses the issue of process: just how does contact diminish prejudice? We test meta-analytically the three most studied mediators: contact reduces prejudice by (1) enhancing knowledge about the outgroup, (2) reducing anxiety about intergroup contact, and (3) increasing empathy and perspective taking. Our tests reveal mediational effects for all three of these mediators. However, the mediational value of increased knowledge appears less strong than anxiety reduction and empathy. Limitations of the study and implications of the results are discussed. Copyright © 2008 John Wiley & Sons, Ltd.

1,886 citations
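
The mediation tests described above are meta-analytic, but the underlying logic can be illustrated with a single-sample mediation (Sobel) test on simulated data; the variables, effect sizes, and sample below are hypothetical placeholders, not the paper's data or procedure.

```python
import numpy as np

def ols(X, y):
    """OLS fit returning coefficients and their standard errors."""
    X = np.column_stack([np.ones(len(y)), X])          # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

rng = np.random.default_rng(0)
n = 500
contact = rng.normal(size=n)                   # predictor: amount of intergroup contact
anxiety = -0.5 * contact + rng.normal(size=n)  # hypothetical mediator
prejudice = -0.2 * contact + 0.6 * anxiety + rng.normal(size=n)

# Path a: contact -> mediator;  path b: mediator -> outcome, controlling for contact
(_, a), (_, se_a) = ols(contact, anxiety)
beta, se = ols(np.column_stack([contact, anxiety]), prejudice)
b, se_b = beta[2], se[2]

indirect = a * b                                        # mediated (indirect) effect
sobel_z = indirect / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect effect = {indirect:.3f}, Sobel z = {sobel_z:.2f}")
```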


Journal ArticleDOI
TL;DR: These three-stressor results suggest that synergies may be quite common in nature where more than two stressors almost always coexist and suggest an immediate need to account for stressor interactions in ecological studies and conservation planning.
Abstract: Humans impact natural systems in a multitude of ways, yet the cumulative effect of multiple stressors on ecological communities remains largely unknown. Here we synthesized 171 studies that manipulated two or more stressors in marine and coastal systems and found that cumulative effects in individual studies were additive (26%), synergistic (36%), and antagonistic (38%). The overall interaction effect across all studies was synergistic, but interaction type varied by response level (community: antagonistic, population: synergistic), trophic level (autotrophs: antagonistic, heterotrophs: synergistic), and specific stressor pair (seven pairs additive, three pairs each synergistic and antagonistic). Addition of a third stressor changed interaction effects significantly in two-thirds of all cases and doubled the number of synergistic interactions. Given that most studies were performed in laboratories where stressor effects can be carefully isolated, these three-stressor results suggest that synergies may be quite common in nature where more than two stressors almost always coexist. While significant gaps exist in multiple stressor research, our results suggest an immediate need to account for stressor interactions in ecological studies and conservation planning.

1,685 citations
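
The additive/synergistic/antagonistic classification used above can be illustrated with a minimal sketch: compare the observed combined effect from a 2x2 factorial experiment with the additive expectation from the single-stressor effects. This is only the basic logic; the paper's meta-analysis works with standardized effect sizes and confidence intervals, and the numbers below are hypothetical.

```python
def interaction_type(ctrl, a_only, b_only, both, tol=0.0):
    """
    Classify a two-stressor interaction from mean responses in a 2x2
    factorial design: control, stressor A alone, stressor B alone, both.
    The additive expectation for the combined treatment is
    ctrl + (a_only - ctrl) + (b_only - ctrl); departures beyond `tol`
    are called synergistic (stronger combined impact) or antagonistic.
    """
    expected = a_only + b_only - ctrl          # additive null model
    deviation = both - expected                # signed departure from additivity
    # Assume the response declines under stress (e.g. survival, biomass),
    # so a combined value *below* the additive expectation is a synergy.
    if deviation < -tol:
        return "synergistic", deviation
    if deviation > tol:
        return "antagonistic", deviation
    return "additive", deviation

# Hypothetical mean survival fractions from one experiment
print(interaction_type(ctrl=0.90, a_only=0.70, b_only=0.65, both=0.30, tol=0.05))
# -> ('synergistic', -0.15): combined impact exceeds the additive expectation
```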


Journal ArticleDOI
TL;DR: In this paper, the authors reported new precision measurements of the properties of our Galaxy's supermassive black hole, based on astrometric and radial velocity (RV; 2000-2007) measurements from the W. M. Keck 10m telescopes.
Abstract: We report new precision measurements of the properties of our Galaxy's supermassive black hole. Based on astrometric (1995-2007) and radial velocity (RV; 2000-2007) measurements from the W. M. Keck 10m telescopes, a fully unconstrained Keplerian orbit for the short-period star S0-2 provides values for the distance (R_0) of 8.0±0.6 kpc, the enclosed mass (M_(bh)) of 4.1±0.6x10^6 M☉ and the black hole's RV, which is consistent with zero with 30 km/s uncertainty. If the black hole is assumed to be at rest with respect to the Galaxy (e.g., has no massive companion to induce motion), we can further constrain the fit, obtaining R_0 = 8.4±0.4 kpc and M_(bh) = 4.5±0.4x10^6 M☉. More complex models constrain the extended dark mass distribution to be less than 3-4x10^5 M☉ within 0.01 pc, ~100 times higher than predictions from stellar and stellar remnant models. For all models, we identify transient astrometric shifts from source confusion (up to 5 times the astrometric error) and the assumptions regarding the black hole's radial motion as previously unrecognized limitations on orbital accuracy and the usefulness of fainter stars. Future astrometric and RV observations will remedy these effects. Our estimates of R_0 and the Galaxy's local rotation speed, which is derived by combining R_0 with the apparent proper motion of Sgr A* (θ_0 = 229±18 km/s), are compatible with measurements made using other methods. The increased black hole mass found in this study, compared to that determined using projected mass estimators, implies a longer period for the innermost stable orbit, longer resonant relaxation timescales for stars in the vicinity of the black hole and a better agreement with the M_(bh)-σ relation.

1,677 citations
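
As a rough check on how an orbit yields an enclosed mass, Kepler's third law can be applied to approximate S0-2 orbital parameters. The period, angular semimajor axis, and distance below are round illustrative values, not the paper's fitted parameters; the point is only that M scales as a^3/P^2 and, for an astrometric orbit, as R_0^3.

```python
import math

# Kepler's third law: M = 4 pi^2 a^3 / (G P^2), with a the physical
# semimajor axis.  For an astrometric orbit, a = theta * R0, so the
# inferred mass scales as R0^3 -- which is why distance and mass are
# fitted jointly.  The numbers below are rough illustrative values for
# the star S0-2, not the paper's fitted parameters.
G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
pc = 3.086e16            # m
yr = 3.156e7             # s

R0 = 8.0e3 * pc                       # assumed distance to the Galactic centre
theta = 0.125 / 3600 * math.pi / 180  # angular semimajor axis (~0.125 arcsec) in rad
a = theta * R0                        # physical semimajor axis, metres
P = 15.9 * yr                         # orbital period, seconds

M_bh = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"enclosed mass ~ {M_bh / M_sun:.2e} M_sun")   # of order 4e6 M_sun
```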


Journal ArticleDOI
Jennifer K. Adelman-McCarthy1, Marcel A. Agüeros2, S. Allam1, S. Allam3 +170 more · Institutions (65)
TL;DR: The Sixth Data Release of the Sloan Digital Sky Survey (SDSS) as discussed by the authors contains images and parameters of roughly 287 million objects over 9583 deg^2, including scans over a large range of Galactic latitudes and longitudes.
Abstract: This paper describes the Sixth Data Release of the Sloan Digital Sky Survey. With this data release, the imaging of the northern Galactic cap is now complete. The survey contains images and parameters of roughly 287 million objects over 9583 deg^2, including scans over a large range of Galactic latitudes and longitudes. The survey also includes 1.27 million spectra of stars, galaxies, quasars, and blank sky (for sky subtraction) selected over 7425 deg^2. This release includes much more stellar spectroscopy than was available in previous data releases and also includes detailed estimates of stellar temperatures, gravities, and metallicities. The results of improved photometric calibration are now available, with uncertainties of roughly 1% in g, r, i, and z, and 2% in u, substantially better than the uncertainties in previous data releases. The spectra in this data release have improved wavelength and flux calibration, especially in the extreme blue and extreme red, leading to the qualitatively better determination of stellar types and radial velocities. The spectrophotometric fluxes are now tied to point-spread function magnitudes of stars rather than fiber magnitudes. This gives more robust results in the presence of seeing variations, but also implies a change in the spectrophotometric scale, which is now brighter by roughly 0.35 mag. Systematic errors in the velocity dispersions of galaxies have been fixed, and the results of two independent codes for determining spectral classifications and redshifts are made available. Additional spectral outputs are made available, including calibrated spectra from individual 15 minute exposures and the sky spectrum subtracted from each exposure. We also quantify a recently recognized underestimation of the brightnesses of galaxies of large angular extent due to poor sky subtraction; the bias can exceed 0.2 mag for galaxies brighter than r = 14 mag.

1,602 citations


Journal ArticleDOI
TL;DR: It is concluded that management limiting gene flow among introduced populations may reduce adaptive potential but is unlikely to prevent expansion or the evolution of novel invasive behaviour.
Abstract: Invasive species are predicted to suffer from reductions in genetic diversity during founding events, reducing adaptive potential. Integrating evidence from two literature reviews and two case studies, we address the following questions: How much genetic diversity is lost in invasions? Do multiple introductions ameliorate this loss? Is there evidence for loss of diversity in quantitative traits? Do invaders that have experienced strong bottlenecks show adaptive evolution? How do multiple introductions influence adaptation on a landscape scale? We reviewed studies of 80 species of animals, plants, and fungi that quantified nuclear molecular diversity within introduced and source populations. Overall, there were significant losses of both allelic richness and heterozygosity in introduced populations, and large gains in diversity were rare. Evidence for multiple introductions was associated with increased diversity, and allelic variation appeared to increase over long timescales (~100 years), suggesting a role for gene flow in augmenting diversity over the long-term. We then reviewed the literature on quantitative trait diversity and found that broad-sense variation rarely declines in introductions, but direct comparisons of additive variance were lacking. Our studies of Hypericum canariense invasions illustrate how populations with diminished diversity may still evolve rapidly. Given the prevalence of genetic bottlenecks in successful invading populations and the potential for adaptive evolution in quantitative traits, we suggest that the disadvantages associated with founding events may have been overstated. However, our work on the successful invader Verbascum thapsus illustrates how multiple introductions may take time to commingle, instead persisting as a 'mosaic of maladaptation' where traits are not distributed in a pattern consistent with adaptation. We conclude that management limiting gene flow among introduced populations may reduce adaptive potential but is unlikely to prevent expansion or the evolution of novel invasive behaviour.

1,588 citations


Book ChapterDOI
20 Oct 2008
TL;DR: It is shown how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm, which allows many different kinds of simple features to be combined into a single similarity function.
Abstract: Viewpoint invariant pedestrian recognition is an important yet under-addressed problem in computer vision. This is likely due to the difficulty in matching two objects with unknown viewpoint and pose. This paper presents a method of performing viewpoint invariant pedestrian recognition using an efficiently and intelligently designed object representation, the ensemble of localized features (ELF). Instead of designing a specific feature by hand to solve the problem, we define a feature space using our intuition about the problem and let a machine learning algorithm find the best representation. We show how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm. This approach allows many different kinds of simple features to be combined into a single similarity function. The method is evaluated using a viewpoint invariant pedestrian recognition dataset and the results are shown to be superior to all previous benchmarks for both recognition and reacquisition of pedestrians.

1,554 citations
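
A minimal sketch of the general recipe described above: learn a similarity function over many simple localized features with AdaBoost. The synthetic "descriptors" below stand in for the paper's ensemble of localized features (e.g. local colour/texture statistics), and scikit-learn's default depth-1 decision stumps serve as the weak learners; this is an illustration of the approach, not the authors' feature set or training protocol.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for localized appearance features (e.g. colour /
# texture histograms computed over local regions of each pedestrian image).
n_people, views_per_person, n_features = 40, 2, 64
identities = np.repeat(np.arange(n_people), views_per_person)
prototypes = rng.normal(size=(n_people, n_features))
features = prototypes[identities] + 0.3 * rng.normal(size=(len(identities), n_features))

# Build pairwise training data: the "feature" of a pair is the absolute
# difference of the two descriptors; the label is same-person / different-person.
pairs, labels = [], []
for i in range(len(identities)):
    for j in range(i + 1, len(identities)):
        pairs.append(np.abs(features[i] - features[j]))
        labels.append(int(identities[i] == identities[j]))
pairs, labels = np.array(pairs), np.array(labels)

# AdaBoost over decision stumps (the default weak learner): each stump looks
# at one localized feature, and boosting combines many such weak cues into a
# single similarity function (higher "same" probability = more similar).
model = AdaBoostClassifier(n_estimators=200, random_state=0)
model.fit(pairs, labels)
similarity = model.predict_proba(pairs[:5])[:, 1]
print(similarity)
```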


Journal ArticleDOI
21 Aug 2008-Nature
TL;DR: The functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells is analysed using a model of multi-neuron spike responses, and a model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlation activity in populations of neurons.
Abstract: Statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. Although a variety of measurements indicate the existence of such dependencies, their origin and importance for neural coding are poorly understood. Here we analyse the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses. The model, with parameters fit directly to physiological data, simultaneously captures both the stimulus dependence and detailed spatio-temporal correlations in population responses, and provides two insights into the structure of the neural code. First, neural encoding at the population level is less noisy than one would expect from the variability of individual neurons: spike times are more precise, and can be predicted more accurately when the spiking of neighbouring neurons is taken into account. Second, correlations provide additional sensory information: optimal, model-based decoding that exploits the response correlation structure extracts 20% more information about the visual scene than decoding under the assumption of independence, and preserves 40% more visual information than optimal linear decoding. This model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlated activity in populations of neurons.

1,465 citations
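
The population model above is a coupled point-process model fit to physiological data. A generic toy version of that model class, with random placeholder filters, no fitting, and none of the paper's decoding analysis, looks roughly like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal coupled point-process sketch: each cell's spiking rate per time bin
# depends on a filtered stimulus plus the recent spikes of the other cells
# (cross-coupling).  All filters and weights here are random placeholders.
n_cells, n_bins, dt = 4, 2000, 0.001
stim = rng.normal(size=n_bins)

k = rng.normal(scale=0.5, size=(n_cells, 20))       # stimulus filters (20-bin history)
w = 0.2 * rng.normal(size=(n_cells, n_cells))       # pairwise coupling weights
np.fill_diagonal(w, -1.0)                           # self term acts as refractoriness
bias = -4.0                                          # sets a low baseline firing rate

spikes = np.zeros((n_cells, n_bins), dtype=int)
for t in range(20, n_bins):
    drive = k @ stim[t - 20:t]                       # stimulus drive for each cell
    coupling = w @ spikes[:, t - 1]                  # effect of the previous bin's spikes
    log_rate = bias + drive + coupling
    rate = np.exp(np.clip(log_rate, None, 2.0))      # clip to keep the toy simulation stable
    spikes[:, t] = rng.poisson(rate)                 # conditional intensity -> spike counts

print("mean rate (Hz):", spikes.mean(axis=1) / dt)
```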


Journal ArticleDOI
TL;DR: A new compilation of Type Ia supernovae (SNe Ia), a new data set of low-redshift nearby-Hubble-flow SNe, and new analysis procedures to work with these heterogeneous compilations are presented in this article.
Abstract: We present a new compilation of Type Ia supernovae (SNe Ia), a new data set of low-redshift nearby-Hubble-flow SNe, and new analysis procedures to work with these heterogeneous compilations. This "Union" compilation of 414 SNe Ia, which reduces to 307 SNe after selection cuts, includes the recent large samples of SNe Ia from the Supernova Legacy Survey and ESSENCE Survey, the older data sets, as well as the recently extended data set of distant supernovae observed with the Hubble Space Telescope (HST). A single, consistent, and blind analysis procedure is used for all the various SN Ia subsamples, and a new procedure is implemented that consistently weights the heterogeneous data sets and rejects outliers. We present the latest results from this Union compilation and discuss the cosmological constraints from this new compilation and its combination with other cosmological measurements (CMB and BAO). The constraint we obtain from supernovae on the dark energy density is ΩΛ = 0.713 +0.027/−0.029 (stat) +0.036/−0.039 (sys), for a flat, ΛCDM universe. Assuming a constant equation of state parameter, w, the combined constraints from SNe, BAO, and CMB give w = −0.969 +0.059/−0.063 (stat) +0.063/−0.066 (sys). While our results are consistent with a cosmological constant, we obtain only relatively weak constraints on a w that varies with redshift. In particular, the current SN data do not yet significantly constrain w at z > 1. With the addition of our new nearby Hubble-flow SNe Ia, these resulting cosmological constraints are currently the tightest available.

1,420 citations
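
A toy version of the kind of cosmological fit summarized above: generate a mock Hubble diagram, then find the flat-ΛCDM Ω_Λ that minimizes χ² after profiling out a constant magnitude offset (degenerate with H0 and the SN absolute magnitude). This only illustrates the method; it is not the Union analysis pipeline, which also handles light-curve fitting, systematic errors, and outlier rejection.

```python
import numpy as np
from scipy import integrate, optimize

C_KM_S, H0 = 299792.458, 70.0   # km/s ; km/s/Mpc (H0 is absorbed by the magnitude offset)

def mu_model(z, omega_lambda):
    """Distance modulus in a flat LCDM universe (Omega_m = 1 - Omega_Lambda)."""
    omega_m = 1.0 - omega_lambda
    def inv_E(zp):
        return 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 + omega_lambda)
    z = np.atleast_1d(z)
    dc = np.array([integrate.quad(inv_E, 0, zi)[0] for zi in z])
    dl = (1 + z) * C_KM_S / H0 * dc                 # luminosity distance, Mpc
    return 5 * np.log10(dl) + 25

# Mock Hubble diagram generated with Omega_Lambda = 0.72 (illustration only)
rng = np.random.default_rng(2)
z_obs = np.sort(rng.uniform(0.02, 1.2, 150))
mu_obs = mu_model(z_obs, 0.72) + rng.normal(scale=0.15, size=z_obs.size)
sigma = np.full_like(z_obs, 0.15)

def chi2(omega_lambda):
    r = mu_obs - mu_model(z_obs, omega_lambda)
    r -= np.sum(r / sigma**2) / np.sum(1 / sigma**2)   # profile out a constant offset
    return np.sum((r / sigma) ** 2)

best = optimize.minimize_scalar(chi2, bounds=(0.0, 1.0), method="bounded")
print(f"best-fit Omega_Lambda ~ {best.x:.2f}")
```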


Journal ArticleDOI
TL;DR: In this paper, the authors used the photometric parallax method to estimate the distances to ~48 million stars detected by the Sloan Digital Sky Survey (SDSS) and map their three-dimensional number density distribution in the Galaxy.
Abstract: Using the photometric parallax method we estimate the distances to ~48 million stars detected by the Sloan Digital Sky Survey (SDSS) and map their three-dimensional number density distribution in the Galaxy. The currently available data sample the distance range from 100 pc to 20 kpc and cover 6500 deg^2 of sky, mostly at high Galactic latitudes (|b| > 25°). These stellar number density maps allow an investigation of the Galactic structure with no a priori assumptions about the functional form of its components. The data show strong evidence for a Galaxy consisting of an oblate halo, a disk component, and a number of localized overdensities. The number density distribution of stars as traced by M dwarfs in the solar neighborhood (D < 2 kpc) is well fit by two exponential disks (the thin and thick disk) with scale heights and lengths, bias corrected for an assumed 35% binary fraction, of H1 = 300 pc and L1 = 2600 pc, and H2 = 900 pc and L2 = 3600 pc, and local thick-to-thin disk density normalization ρ_thick(R☉)/ρ_thin(R☉) = 12%. We use the stars near main-sequence turnoff to measure the shape of the Galactic halo. We find a strong preference for oblate halo models, with best-fit axis ratio c/a = 0.64, a ρ_H ∝ r^−2.8 power-law profile, and the local halo-to-thin disk normalization of 0.5%. Based on a series of Monte Carlo simulations, we estimate the errors of derived model parameters not to be larger than ~20% for the disk scales and ~10% for the density normalization, with largest contributions to error coming from the uncertainty in calibration of the photometric parallax relation and poorly constrained binary fraction. While generally consistent with the above model, the measured density distribution shows a number of statistically significant localized deviations. In addition to known features, such as the Monoceros stream, we detect two overdensities in the thick disk region at cylindrical galactocentric radii and heights (R,Z) ~ (6.5,1.5) kpc and (R,Z) ~ (9.5,0.8) kpc and a remarkable density enhancement in the halo covering over 1000 deg^2 of sky toward the constellation of Virgo, at distances of ~6-20 kpc. Compared to counts in a region symmetric with respect to the l = 0° line and with the same Galactic latitude, the Virgo overdensity is responsible for a factor of 2 number density excess and may be a nearby tidal stream or a low-surface brightness dwarf galaxy merging with the Milky Way. The u − g color distribution of stars associated with it implies metallicity lower than that of thick disk stars and consistent with the halo metallicity distribution. After removal of the resolved overdensities, the remaining data are consistent with a smooth density distribution; we detect no evidence of further unresolved clumpy substructure at scales ranging from ~50 pc in the disk to ~1-2 kpc in the halo.
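
For reference, the quoted best-fit density model (two exponential disks plus an oblate power-law halo) can be written down directly. The sketch below uses the scale parameters and normalizations given in the abstract, with an arbitrary overall normalization and an assumed solar position; it is only an illustration of the functional form, not the authors' fitting code.

```python
import numpy as np

# Galactic number-density model quoted in the abstract: thin + thick
# exponential disks and an oblate power-law halo.  rho0 is an arbitrary
# local normalisation; R, Z are cylindrical galactocentric coordinates in pc.
R_SUN, Z_SUN = 8000.0, 25.0       # assumed solar position (pc); Z_SUN is illustrative

def density(R, Z, rho0=1.0,
            H1=300.0, L1=2600.0,                 # thin disk scale height / length (pc)
            H2=900.0, L2=3600.0, f_thick=0.12,   # thick disk scales + local normalisation
            f_halo=0.005, q=0.64, n=2.8):        # halo normalisation, flattening, slope
    thin = np.exp((R_SUN - R) / L1 - (np.abs(Z) - Z_SUN) / H1)
    thick = f_thick * np.exp((R_SUN - R) / L2 - (np.abs(Z) - Z_SUN) / H2)
    r_eff = np.sqrt(R**2 + (Z / q) ** 2)                 # oblate halo radius
    r_sun = np.sqrt(R_SUN**2 + (Z_SUN / q) ** 2)
    halo = f_halo * (r_eff / r_sun) ** (-n)
    return rho0 * (thin + thick + halo)

# Density (relative to the local thin-disk value) at 2 kpc above the Sun
print(density(R=8000.0, Z=2000.0))
```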

Journal ArticleDOI
TL;DR: This work incorporates several different evidence sources into the gene finder AUGUSTUS, a widely used and essential tool for analyzing newly sequenced genomes, and predicts at least one splice form exactly correctly in 57% of human genes using ESTs alone.
Abstract: Motivation: Computational annotation of protein coding genes in genomic DNA is a widely used and essential tool for analyzing newly sequenced genomes. However, current methods suffer from inaccuracy and do poorly with certain types of genes. Including additional sources of evidence of the existence and structure of genes can improve the quality of gene predictions. For many eukaryotic genomes, expressed sequence tags (ESTs) are available as evidence for genes. Related genomes that have been sequenced, annotated, and aligned to the target genome provide evidence of the existence and structure of genes. Results: We incorporate several different evidence sources into the gene finder AUGUSTUS. The sources of evidence are gene and transcript annotations from related species syntenically mapped to the target genome using TransMap, evolutionary conservation of DNA, mRNA and ESTs of the target species, and retroposed genes. The predictions include alternative splice variants where evidence supports it. Using only ESTs, we were able to predict at least one splice form exactly correctly in 57% of human genes. When evidence from other species and human mRNAs is also used, this number rises to 77%. Syntenic mapping is well-suited to annotating genomes closely related to genomes that are already annotated or for which extensive transcript evidence is available. Native cDNA evidence is most helpful when the alignments are used as compound information rather than independent positionwise information. Availability: AUGUSTUS is open source and available at http://augustus.gobics.de. The gene predictions for human can be browsed and downloaded at the UCSC Genome Browser (http://genome.ucsc.edu). Contact: mstanke@gwdg.de Supplementary information: Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that CO2 should be reduced from its current 385 ppm to at most 350 ppm, a target that may be achievable by phasing out coal use except where CO2 is captured and by adopting agricultural and forestry practices that sequester carbon.
Abstract: Paleoclimate data show that climate sensitivity is ~3 deg-C for doubled CO2, including only fast feedback processes. Equilibrium sensitivity, including slower surface albedo feedbacks, is ~6 deg-C for doubled CO2 for the range of climate states between glacial conditions and ice-free Antarctica. Decreasing CO2 was the main cause of a cooling trend that began 50 million years ago, large scale glaciation occurring when CO2 fell to 450 +/- 100 ppm, a level that will be exceeded within decades, barring prompt policy changes. If humanity wishes to preserve a planet similar to that on which civilization developed and to which life on Earth is adapted, paleoclimate evidence and ongoing climate change suggest that CO2 will need to be reduced from its current 385 ppm to at most 350 ppm. The largest uncertainty in the target arises from possible changes of non-CO2 forcings. An initial 350 ppm CO2 target may be achievable by phasing out coal use except where CO2 is captured and adopting agricultural and forestry practices that sequester carbon. If the present overshoot of this target CO2 is not brief, there is a possibility of seeding irreversible catastrophic effects.


Journal ArticleDOI
07 Aug 2008-Nature
TL;DR: A simulation that resolves dark matter substructure even in the very inner regions of the Galactic halo is reported, finding hundreds of very concentrated dark matter clumps surviving near the solar circle, as well as numerous cold streams.
Abstract: The standard cosmological model includes the presumption that cold dark matter plays a major part in large-scale mass distribution in the Universe from the Big Bang to the present. Though successful in explaining large-scale events, until now simulations of galaxy formation using the cold dark matter model have failed to resolve certain smaller-scale structures. Now Diemand et al. have simulated the assembly of the dark matter 'halo' of the Milky Way at much better resolution than has been possible previously. The model then produces thousands of clumps surviving within the inner halo, some of them in the vicinity of the Solar System. In cold dark matter cosmological models, structures form and grow by merging of smaller units; previous simulations have shown that such merging is incomplete, as the inner cores of halos survive and orbit as 'subhalos' within their hosts. This paper reports a simulation that resolves such substructure in the very inner regions of the Galactic halo. Hundreds of very concentrated dark matter clumps survive near the solar circle, as well as numerous cold streams. In cold dark matter cosmological models1,2, structures form and grow through the merging of smaller units3. Numerical simulations have shown that such merging is incomplete; the inner cores of haloes survive and orbit as ‘subhaloes’ within their hosts4,5. Here we report a simulation that resolves such substructure even in the very inner regions of the Galactic halo. We find hundreds of very concentrated dark matter clumps surviving near the solar circle, as well as numerous cold streams. The simulation also reveals the fractal nature of dark matter clustering: isolated haloes and subhaloes contain the same relative amount of substructure and both have cusped inner density profiles. The inner mass and phase-space densities of subhaloes match those of recently discovered faint, dark-matter-dominated dwarf satellite galaxies6,7,8, and the overall amount of substructure can explain the anomalous flux ratios seen in strong gravitational lenses9,10. Subhaloes boost γ-ray production from dark matter annihilation by factors of 4 to 15 relative to smooth galactic models. Local cosmic ray production is also enhanced, typically by a factor of 1.4 but by a factor of more than 10 in one per cent of locations lying sufficiently close to a large subhalo. (These estimates assume that the gravitational effects of baryons on dark matter substructure are small.)

Journal ArticleDOI
TL;DR: In this article, the authors analyzed 8 years of precise radial velocity measurements from the Keck Planet Search, characterizing the detection threshold, selection effects, and completeness of the survey.
Abstract: We analyze 8 years of precise radial velocity measurements from the Keck Planet Search, characterizing the detection threshold, selection effects, and completeness of the survey. We first carry out a systematic search for planets, by assessing the false-alarm probability associated with Keplerian orbit fits to the data. This allows us to understand the detection threshold for each star in terms of the number and time baseline of the observations, and the underlying "noise" from measurement errors, intrinsic stellar jitter, or additional low-mass planets. We show that all planets with orbital periods P < 2000 days, velocity amplitudes K > 20 m s^-1, and eccentricities e ≲ 0.6 have been announced, and we summarize the candidates at lower amplitudes and longer orbital periods. For the remaining stars, we calculate upper limits on the velocity amplitude of a companion. For orbital periods less than the duration of the observations, these are typically ...
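
A schematic of the false-alarm-probability logic described above, using a circular (sinusoidal) fit as a cheap stand-in for full Keplerian orbit fits and epoch scrambling to build the null distribution. The data, period grid, and fit model below are simplified placeholders, not the survey's actual procedure.

```python
import numpy as np

def best_power(t, rv, periods):
    """Best chi^2 improvement of a circular (sinusoidal) orbit over a constant,
    used here as a cheap stand-in for a full Keplerian fit."""
    best = 0.0
    rv = rv - rv.mean()
    for P in periods:
        phase = 2 * np.pi * t / P
        A = np.column_stack([np.cos(phase), np.sin(phase)])
        coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
        best = max(best, np.sum(rv**2) - np.sum((rv - A @ coef) ** 2))
    return best

def false_alarm_probability(t, rv, periods, n_scramble=1000, seed=0):
    """FAP = fraction of epoch-scrambled datasets (pure noise under the null
    hypothesis) whose best fit beats that of the real data."""
    rng = np.random.default_rng(seed)
    observed = best_power(t, rv, periods)
    hits = sum(best_power(t, rng.permutation(rv), periods) >= observed
               for _ in range(n_scramble))
    return hits / n_scramble

# Hypothetical RV time series: 8 years of sparse epochs, one injected planet
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 8 * 365.25, 60))
rv = 25.0 * np.sin(2 * np.pi * t / 420.0) + rng.normal(scale=5.0, size=t.size)
periods = np.logspace(0.5, 3.5, 400)        # trial periods, ~3 to ~3000 days
print("FAP ~", false_alarm_probability(t, rv, periods, n_scramble=200))
```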

Journal ArticleDOI
TL;DR: In this paper, the authors determined the sizes of these quiescent galaxies using deep, high-resolution images obtained with HST/NIC2 and laser guide star (LGS) assisted Keck/adaptive optics (AO).
Abstract: Using deep near-infrared spectroscopy, Kriek et al. found that ∼45% of massive galaxies at z ∼ 2.3 have evolved stellar populations and little or no ongoing star formation. Here we determine the sizes of these quiescent galaxies using deep, high-resolution images obtained with HST/NIC2 and laser guide star (LGS)-assisted Keck/adaptive optics (AO). Considering that their median stellar mass is 1.7 × 10^11 M☉, the galaxies are remarkably small, with a median effective radius r_e = 0.9 kpc. Galaxies of similar mass in the nearby universe have sizes of ≈5 kpc and average stellar densities that are 2 orders of magnitude lower than the z ∼ 2.3 galaxies. These results extend earlier work at z ∼ 1.5 and confirm previous studies at z > 2 that lacked spectroscopic redshifts and imaging of sufficient resolution to resolve the galaxies. Our findings demonstrate that fully assembled early-type galaxies make up at most ∼10% of the population of K-selected quiescent galaxies at z ∼ 2.3, effectively ruling out simple monolithic models for their formation. The galaxies must evolve significantly after z ∼ 2.3, through dry mergers or other processes, consistent with predictions from hierarchical models. Subject headings: cosmology: observations — galaxies: evolution — galaxies: formation

Journal ArticleDOI
TL;DR: In this article, the authors derived new constraints on the mass of the Milky Way's dark matter halo, based on 2401 rigorously selected blue horizontal-branch halo stars from SDSS DR6.
Abstract: We derive new constraints on the mass of the Milky Way's dark matter halo, based on 2401 rigorously selected blue horizontal-branch halo stars from SDSS DR6. This sample enables construction of the full line-of-sight velocity distribution at different galactocentric radii. To interpret these distributions, we compare them to matched mock observations drawn from two different cosmological galaxy formation simulations designed to resemble the Milky Way. This procedure results in an estimate of the Milky Way's circular velocity curve to ~60 kpc, which is found to be slightly falling from the adopted value of 220 km s^-1 at the Sun's location, and yields an estimate of the mass enclosed within that radius. Vcir(r), derived in statistically independent bins, is found to be consistent with the expectations from an NFW dark matter halo with the established stellar mass components at its center. If we assume that an NFW halo profile of characteristic concentration holds, we can use the observations to estimate the virial mass of the Milky Way's dark matter halo, Mvir = 1.0 +0.3/−0.2 × 10^12 M☉, which is lower than many previous estimates. We have checked that the particulars of the cosmological simulations are unlikely to introduce systematics larger than the statistical uncertainties. This estimate implies that nearly 40% of the baryons within the virial radius of the Milky Way's dark matter halo reside in the stellar components of our Galaxy. A value for Mvir of only ~1 × 10^12 M☉ also (re)opens the question of whether all of the Milky Way's satellite galaxies are on bound orbits.
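
For intuition about the quoted halo mass, here is a minimal NFW sketch relating virial mass and concentration to the circular velocity at a given radius. The virial radius and concentration are assumed illustrative values, not the paper's fitted model.

```python
import numpy as np

G = 4.300917270e-6      # gravitational constant in kpc (km/s)^2 / M_sun

def nfw_vcirc(r_kpc, m_vir=1.0e12, r_vir=250.0, c=12.0):
    """Circular velocity of an NFW halo: V^2(r) = G M(<r) / r with
    M(<r) = M_vir * f(c r / R_vir) / f(c), where f(x) = ln(1+x) - x/(1+x).
    M_vir, R_vir and the concentration c are illustrative values only."""
    f = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    m_enc = m_vir * f(c * r_kpc / r_vir) / f(c)
    return np.sqrt(G * m_enc / r_kpc)

for r in (20.0, 60.0, 250.0):
    print(f"V_circ({r:.0f} kpc) ~ {nfw_vcirc(r):.0f} km/s")
```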

Journal ArticleDOI
TL;DR: In this paper, the effects of changes in the cosmological parameters between the Wilkinson Microwave Anisotropy Probe (WMAP) 1st, 3rd and 5th year results on the structure of dark matter haloes were investigated.
Abstract: We investigate the effects of changes in the cosmological parameters between the Wilkinson Microwave Anisotropy Probe (WMAP) 1st, 3rd and 5th year results on the structure of dark matter haloes. We use a set of simulations that cover five decades in halo mass ranging from the scales of dwarf galaxies (V_c ≈ 30 km s^-1) to clusters of galaxies (V_c ≈ 1000 km s^-1). We find that the concentration-mass relation is a power law in all three cosmologies. However, the slope is shallower and the zero-point is lower moving from WMAP1 to WMAP5 to WMAP3. For haloes of mass log M_200/[h^-1 M☉] = 10, 12 and 14 the differences in the concentration parameter between WMAP1 and WMAP3 are a factor of 1.55, 1.41 and 1.29, respectively. As we show, this brings the central densities of dark matter haloes in good agreement with the central densities of dwarf and low surface brightness galaxies inferred from their rotation curves, for both the WMAP3 and WMAP5 cosmologies. We also show that none of the existing toy models for the concentration-mass relation can reproduce our simulation results over the entire range of masses probed. In particular, the model of Bullock et al. fails at the higher mass end (M ≥ 10^13 h^-1 M☉), while the NFW model of Navarro, Frenk and White fails dramatically at the low-mass end (M ≤ 10^12 h^-1 M☉). We present a new model, based on a simple modification of that of Bullock et al., which reproduces the concentration-mass relations in our simulations over the entire range of masses probed (10^10 ≤ M ≤ 10^15 h^-1 M☉). Haloes in the WMAP3 cosmology (at a fixed mass) are more flattened compared to the WMAP1 cosmology, with a medium-to-long axis ratio reduced by ≈10 per cent. Finally, we show that the distribution of halo spin parameters is the same for all three cosmologies.
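
A minimal sketch of fitting the kind of power-law concentration-mass relation described above to a halo catalogue. The mock data, normalization, slope, and scatter below are placeholders, not values from the paper.

```python
import numpy as np

# Fit a power law c = c0 * (M200 / 1e12)^alpha to (mass, concentration)
# pairs, as one would do for a simulated halo catalogue.
rng = np.random.default_rng(4)
log_m = rng.uniform(10, 15, 500)                       # log10(M200 / [h^-1 Msun])
true_c = 9.0 * (10 ** (log_m - 12)) ** -0.094          # assumed underlying relation
log_c = np.log10(true_c) + rng.normal(scale=0.11, size=log_m.size)  # ~0.11 dex scatter

# Linear fit in log-log space: log10 c = log10 c0 + alpha * (log10 M - 12)
slope, intercept = np.polyfit(log_m - 12, log_c, 1)
print(f"c0 ~ {10**intercept:.2f}, slope alpha ~ {slope:.3f}")
```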

Journal ArticleDOI
TL;DR: In this article, the authors present the MARINE MAMMAL NOISE-EXPOSURE CRITERIA: INITIAL SCIENTIFIC RECOMMENDATIONS.
Abstract: (2008). MARINE MAMMAL NOISE-EXPOSURE CRITERIA: INITIAL SCIENTIFIC RECOMMENDATIONS. Bioacoustics: Vol. 17, No. 1-3, pp. 273-275.

Journal ArticleDOI
TL;DR: This work reports the first multi-quantum-well (MQW) core/shell nanowire heterostructures based on well-defined III-nitride materials that enable lasing over a broad range of wavelengths at room temperature and demonstrates a new level of complexity in nanowires, which potentially can yield free-standing injection nanolasers.
Abstract: Rational design and synthesis of nanowires with increasingly complex structures can yield enhanced and/or novel electronic and photonic functions. For example, Ge/Si core/shell nanowires have exhibited substantially higher performance as field-effect transistors and low-temperature quantum devices compared with homogeneous materials, and nano-roughened Si nanowires were recently shown to have an unusually high thermoelectric figure of merit. Here, we report the first multi-quantum-well (MQW) core/shell nanowire heterostructures based on well-defined III-nitride materials that enable lasing over a broad range of wavelengths at room temperature. Transmission electron microscopy studies show that the triangular GaN nanowire cores enable epitaxial and dislocation-free growth of highly uniform (InGaN/GaN)n quantum wells with n=3, 13 and 26 and InGaN well thicknesses of 1-3 nm. Optical excitation of individual MQW nanowire structures yielded lasing with InGaN quantum-well composition-dependent emission from 365 to 494 nm, and threshold dependent on quantum well number, n. Our work demonstrates a new level of complexity in nanowire structures, which potentially can yield free-standing injection nanolasers.

Journal ArticleDOI
TL;DR: Geologic and geophysical data from north-central Tibet are presented, including magnetostratigraphy, sedimentology, paleocurrent measurements, and 40Ar/39Ar and fission-track studies, to show that the central plateau was elevated by 40 Ma ago.
Abstract: The surface uplift history of the Tibetan Plateau and Himalaya is among the most interesting topics in geosciences because of its effect on regional and global climate during Cenozoic time, its influence on monsoon intensity, and its reflection of the dynamics of continental plateaus. Models of plateau growth vary in time, from pre-India-Asia collision (e.g., ≈100 Ma ago) to gradual uplift after the India-Asia collision (e.g., ≈55 Ma ago) and to more recent abrupt uplift (<7 Ma ago), and vary in space, from northward stepwise growth of topography to simultaneous surface uplift across the plateau. Here, we improve that understanding by presenting geologic and geophysical data from north-central Tibet, including magnetostratigraphy, sedimentology, paleocurrent measurements, and 40Ar/39Ar and fission-track studies, to show that the central plateau was elevated by 40 Ma ago. Regions south and north of the central plateau gained elevation significantly later. During Eocene time, the northern boundary of the protoplateau was in the region of the Tanggula Shan. Elevation gain started in pre-Eocene time in the Lhasa and Qiangtang terranes and expanded throughout the Neogene toward its present southern and northern margins in the Himalaya and Qilian Shan.

Journal ArticleDOI
04 Jul 2008-Science
TL;DR: In this paper, a new data set of fossil occurrences representing 3.5 million specimens was presented, and it was shown that global and local diversity was less than twice as high in the Neogene as in the mid-Paleozoic.
Abstract: It has previously been thought that there was a steep Cretaceous and Cenozoic radiation of marine invertebrates. This pattern can be replicated with a new data set of fossil occurrences representing 3.5 million specimens, but only when older analytical protocols are used. Moreover, analyses that employ sampling standardization and more robust counting methods show a modest rise in diversity with no clear trend after the mid-Cretaceous. Globally, locally, and at both high and low latitudes, diversity was less than twice as high in the Neogene as in the mid-Paleozoic. The ratio of global to local richness has changed little, and a latitudinal diversity gradient was present in the early Paleozoic.
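
One common form of the sampling standardization mentioned above is classical rarefaction, sketched below on hypothetical occurrence lists. The paper's protocols and counting methods are more elaborate; this only shows the basic idea of comparing intervals at a fixed sampling quota.

```python
import numpy as np

def rarefied_richness(occurrences, quota, n_trials=500, seed=0):
    """Classical rarefaction: repeatedly draw `quota` fossil occurrences at
    random (without replacement) and count distinct taxa, so that intervals
    with very different sampling intensities can be compared fairly."""
    rng = np.random.default_rng(seed)
    occurrences = np.asarray(occurrences)
    if quota > occurrences.size:
        raise ValueError("quota exceeds the number of occurrences")
    counts = [np.unique(rng.choice(occurrences, size=quota, replace=False)).size
              for _ in range(n_trials)]
    return float(np.mean(counts))

# Hypothetical occurrence lists (taxon identifiers) for two time intervals
rng = np.random.default_rng(5)
interval_a = np.repeat(np.arange(120), rng.integers(1, 40, 120))   # heavily sampled, 120 taxa
interval_b = np.repeat(np.arange(80), rng.integers(1, 15, 80))     # lightly sampled, 80 taxa

quota = 300
print("interval A:", rarefied_richness(interval_a, quota))
print("interval B:", rarefied_richness(interval_b, quota))
```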

Journal ArticleDOI
TL;DR: In this article, polynomial models for estimating effective temperature and metallicity from SDSS u−g and g−r colors are developed; they reproduce the spectroscopic parameters with a scatter similar to the random and systematic uncertainties in spectroscopic determinations.
Abstract: In addition to optical photometry of unprecedented quality, the Sloan Digital Sky Survey (SDSS) is producing a massive spectroscopic database which already contains over 280,000 stellar spectra. Using effective temperature and metallicity derived from SDSS spectra for 60,000 F and G type main sequence stars (0.2 < g−r < 0.6), we develop polynomial models, reminiscent of traditional methods based on the UBV photometry, for estimating these parameters from the SDSS u−g and g−r colors. These estimators reproduce SDSS spectroscopic parameters with a root-mean-square scatter of 100 K for effective temperature, and 0.2 dex for metallicity (limited by photometric errors), which are similar to random and systematic uncertainties in spectroscopic determinations. We apply this method to a photometric catalog of coadded SDSS observations and study the photometric metallicity distribution of 200,000 F and G type stars observed in 300 deg^2 of high Galactic latitude sky. These deeper (g < 20.5) and photometrically precise (~0.01 mag) coadded data enable an accurate measurement of the unbiased metallicity distribution for a complete volume-limited sample of stars at distances between 500 pc and 8 kpc. The metallicity distribution can be exquisitely modeled using two components with a spatially varying number ratio that correspond to disk and halo. The best-fit number ratio of the two components is consistent with that implied by the decomposition of stellar counts profiles into exponential disk and power-law halo components by Jurić et al. (2008). The two components also possess the kinematics expected for disk and halo stars. The metallicity of the halo component can be modeled as a spatially invariant Gaussian distribution with a mean of [Fe/H] = −1.46 and a standard deviation of 0.3 dex. The disk metallicity distribution is non-Gaussian, with a remarkably small scatter (rms ~0.16 dex) and the median smoothly decreasing with distance from the plane from −0.6 at 500 pc to −0.8 beyond several kpc. Similarly, we find using proper motion measurements that a non-Gaussian rotational velocity distribution of disk stars shifts by ~50 km/s as the distance from the plane increases from 500 pc to several kpc. Despite this similarity, the metallicity and rotational velocity distributions of disk stars are not correlated (Kendall's τ = 0.017 ± 0.018). This absence of a correlation between metallicity and kinematics for disk stars is in conflict with the traditional decomposition in terms of thin and thick disks, which predicts a strong correlation (τ = 0.30 ± 0.04) at ~1 kpc from the mid-plane. Instead, the variation of the metallicity and rotational velocity distributions can be modeled using non-Gaussian functions that retain their shapes and only shift as the distance from the mid-plane increases. We also study the metallicity distribution using a shallower (g < 19.5) but much larger sample of close to three million stars in 8500 sq. deg. of sky included in SDSS Data Release 6. The large sky coverage enables the detection of coherent substructures in the kinematics-metallicity space, such as the Monoceros stream, which rotates faster than the LSR, and has a median metallicity of [Fe/H] = −0.95, with an rms scatter of only 0.15 dex.
We extrapolate our results to the performance expected from the Large Synoptic Survey Telescope (LSST) and estimate that LSST will obtain metallicity measurements accurate to 0.2 dex or better, with proper motion measurements accurate to 0.2-0.5 mas/yr, for about 200 million F/G dwarf stars within a distance limit of 100 kpc (g < 23.5). Subject headings: methods: data analysis | stars: statistics | Galaxy: halo, kinematics and dynamics, stellar content, structure
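
The core technique, fitting a low-order polynomial in the u−g and g−r colors to spectroscopic metallicities and then applying it photometrically, can be sketched as follows. The training data and resulting coefficients are mock placeholders, not the published calibration, which has its own specific functional form.

```python
import numpy as np

# Fit [Fe/H] as a low-order polynomial in the u-g and g-r colours using
# stars that have both SDSS-like photometry and spectroscopic metallicities.
rng = np.random.default_rng(7)
n = 5000
ug = rng.uniform(0.7, 1.4, n)                 # u-g colour (metallicity sensitive)
gr = rng.uniform(0.2, 0.6, n)                 # g-r colour (temperature sensitive)
feh_spec = -3.0 + 2.0 * ug - 0.5 * gr + 0.4 * ug * gr + rng.normal(scale=0.2, size=n)

def design_matrix(ug, gr):
    """Second-order polynomial terms in the two colours."""
    return np.column_stack([np.ones_like(ug), ug, gr, ug**2, gr**2, ug * gr])

coeffs, *_ = np.linalg.lstsq(design_matrix(ug, gr), feh_spec, rcond=None)
feh_photo = design_matrix(ug, gr) @ coeffs
rms = np.sqrt(np.mean((feh_photo - feh_spec) ** 2))
print(f"photometric-metallicity rms scatter ~ {rms:.2f} dex")
```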

Journal ArticleDOI
TL;DR: This paper provides a description of the change classification approach, techniques for extracting features from the source code and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features.
Abstract: This paper introduces a new technique for predicting latent software bugs, called change classification. Change classification uses a machine learning classifier to determine whether a new software change is more similar to prior buggy changes or clean changes. In this manner, change classification predicts the existence of bugs in software changes. The classifier is trained using features (in the machine learning sense) extracted from the revision history of a software project stored in its software configuration management repository. The trained classifier can classify changes as buggy or clean, with a 78 percent accuracy and a 60 percent buggy change recall on average. Change classification has several desirable qualities: 1) The prediction granularity is small (a change to a single file), 2) predictions do not require semantic information about the source code, 3) the technique works for a broad array of project types and programming languages, and 4) predictions can be made immediately upon the completion of a change. Contributions of this paper include a description of the change classification approach, techniques for extracting features from the source code and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features.
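
A minimal sketch of the change-classification idea: represent each change as simple textual features and train a classifier on buggy-vs-clean labels mined from the revision history. The toy data, feature extraction, and classifier below are stand-ins; the paper uses a much richer feature set and a different classifier.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set: the text of each change (added/removed lines, log message)
# and a label saying whether a bug was later traced back to it.  A real
# system would mine these from the software configuration management history.
changes = [
    "fix null pointer check in parser init",
    "add bounds check before array access",
    "refactor logging remove unused import",
    "update copyright header in all files",
    "quick hack to work around race condition TODO revisit",
    "cache results without invalidation on write",
]
buggy = [0, 0, 0, 0, 1, 1]

# Bag-of-words features + a simple classifier; the buggy-vs-clean framing is
# the same as in the paper, even though the features and model differ.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(changes, buggy)

print(model.predict(["temporary workaround for race condition in cache write"]))
```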

Journal ArticleDOI
TL;DR: This work examines how concepts pertaining to the assembly of plant communities can be used to strengthen resistance to invasion in restored communities.
Abstract: One of the greatest challenges for ecological restoration is to create or reassemble plant communities that are resistant to invasion by exotic species. We examine how concepts pertaining to the assembly of plant communities can be used to strengthen resistance to invasion in restored communities. Community ecology theory predicts that an invasive species will be unlikely to establish if there is a species with similar traits present in the resident community or if available niches are filled. Therefore, successful restoration efforts should select native species with traits similar to likely invaders and include a diversity of functional traits. The success of trait-based approaches to restoration will depend largely on the diversity of invaders, on the strength of environmental factors and on dispersal dynamics of invasive and native species.

Posted ContentDOI
TL;DR: The authors investigated the role of human capital, especially through prior work experience, and financial capital, in contributing to why female-owned businesses have lower survival rates, profits, employment and sales.
Abstract: Using confidential microdata from the U.S. Census Bureau, we investigate the performance of female-owned businesses, making comparisons to male-owned businesses. Using regression estimates and a decomposition technique, we explore the role that human capital, especially through prior work experience, and financial capital play in contributing to why female-owned businesses have lower survival rates, profits, employment and sales. We find that female-owned businesses are less successful than male-owned businesses because they have less startup capital and less business human capital acquired through prior work experience in a similar business and prior work experience in family businesses. We also find some evidence that owners of female-owned businesses work fewer hours and may have different preferences for the goals of their business.

Journal ArticleDOI
TL;DR: This approach to the study of identity challenges personality and social psychologists to consider a cultural psychology framework that focuses on the relationship between master narratives and personal narratives of identity, recognizes the value of a developmental perspective, and uses ethnographic and idiographic methods.
Abstract: This article presents a tripartite model of identity that integrates cognitive, social, and cultural levels of analysis in a multimethod framework. With a focus on content, structure, and process, identity is defined as ideology cognized through the individual engagement with discourse, made manifest in a personal narrative constructed and reconstructed across the life course, and scripted in and through social interaction and social practice. This approach to the study of identity challenges personality and social psychologists to consider a cultural psychology framework that focuses on the relationship between master narratives and personal narratives of identity, recognizes the value of a developmental perspective, and uses ethnographic and idiographic methods. Research in personality and social psychology that either explicitly or implicitly relies on the model is reviewed.

Journal ArticleDOI
TL;DR: The Sloan Extension for Galactic Exploration and Understanding (SEGUE) Stellar Parameter Pipeline (SSPP), as discussed by the authors, determines radial velocities and stellar atmospheric parameters for AFGK-type stars.
Abstract: We describe the development and implementation of the Sloan Extension for Galactic Exploration and Understanding (SEGUE) Stellar Parameter Pipeline (SSPP). The SSPP derives, using multiple techniques, radial velocities and the fundamental stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) for AFGK-type stars, based on medium-resolution spectroscopy and ugriz photometry obtained during the course of the original Sloan Digital Sky Survey (SDSS-I) and its Galactic extension (SDSS-II/SEGUE). The SSPP also provides spectral classification for a much wider range of stars, including stars with temperatures outside the window where atmospheric parameters can be estimated with the current approaches. This is Paper I in a series of papers on the SSPP; it provides an overview of the SSPP, and tests of its performance using several external data sets. Random and systematic errors are critically examined for the current version of the SSPP, which has been used for the sixth public data release of the SDSS (DR-6).

Book ChapterDOI
29 Mar 2008
TL;DR: Turn-based stochastic games on infinite graphs induced by game probabilistic lossy channel systems (GPLCS) are decidable, which generalizes the decidability result for PLCS-induced Markov decision processes in [10].
Abstract: We consider turn-based stochastic games on infinite graphs induced by game probabilistic lossy channel systems (GPLCS), the game version of probabilistic lossy channel systems (PLCS). We study games with Büchi (repeated reachability) objectives and almost-sure winning conditions. These games are pure memoryless determined and, under the assumption that the target set is regular, a symbolic representation of the set of winning states for each player can be effectively constructed. Thus, turn-based stochastic games on GPLCS are decidable. This generalizes the decidability result for PLCS-induced Markov decision processes in [10].