Institution

University of California, Santa Cruz

Education · Santa Cruz, California, United States
About: University of California, Santa Cruz is an education organization based in Santa Cruz, California, United States. It is known for its research contributions in the topics of Galaxy and Population. The organization has 15541 authors who have published 44120 publications receiving 2759983 citations. The organization is also known as UCSC and UC Santa Cruz.
Topics: Galaxy, Population, Stars, Redshift, Star formation


Papers
Journal ArticleDOI
Jennifer K. Adelman-McCarthy, Marcel A. Agüeros, S. Allam, +163 more authors (54 institutions)
TL;DR: The Fifth Data Release (DR5) of the Sloan Digital Sky Survey (SDSS) includes all survey-quality data taken through 2005 June and represents the completion of the SDSS-I project. It comprises five-band photometric data for 217 million objects selected over 8000 deg^2 and 1,048,960 spectra of galaxies, quasars, and stars selected from 5713 deg^2 of imaging data.
Abstract: This paper describes the Fifth Data Release (DR5) of the Sloan Digital Sky Survey (SDSS). DR5 includes all survey quality data taken through 2005 June and represents the completion of the SDSS-I project (whose successor, SDSS-II, will continue through mid-2008). It includes five-band photometric data for 217 million objects selected over 8000 deg^2 and 1,048,960 spectra of galaxies, quasars, and stars selected from 5713 deg^2 of that imaging data. These numbers represent a roughly 20% increment over those of the Fourth Data Release; all the data from previous data releases are included in the present release. In addition to "standard" SDSS observations, DR5 includes repeat scans of the southern equatorial stripe, imaging scans across M31 and the core of the Perseus Cluster of galaxies, and the first spectroscopic data from SEGUE, a survey to explore the kinematics and chemical evolution of the Galaxy. The catalog database incorporates several new features, including photometric redshifts of galaxies, tables of matched objects in overlap regions of the imaging survey, and tools that allow precise computations of survey geometry for statistical investigations.

811 citations

Journal ArticleDOI
TL;DR: Hubble Space Telescope surface-brightness profiles of 61 elliptical galaxies and spiral bulges (hot galaxies) were analyzed, showing that the centers of power-law galaxies are up to 1000 times denser in mass and luminosity than the cores of large galaxies at a limiting radius of 10 pc.
Abstract: We analyze Hubble Space Telescope surface-brightness profiles of 61 elliptical galaxies and spiral bulges (hot galaxies). Luminous hot galaxies have cuspy cores with steep outer power-law profiles that break at r ~ r_b to shallow inner profiles with logslope less than 0.3. Faint hot galaxies show steep, largely featureless power-law profiles at all radii and lack cores. The centers of power-law galaxies are up to 1000 times denser in mass and luminosity than the cores of large galaxies at a limiting radius of 10 pc. At intermediate magnitudes (-22.0 < M_V < -20.5), core and power-law galaxies coexist, and there is a range in r_b at a given luminosity of at least two orders of magnitude. Central properties correlate with global rotation and shape: core galaxies tend to be boxy and slowly rotating, whereas power-law galaxies tend to be disky and rapidly rotating. The dense power-law centers of disky, rotating galaxies are consistent with their formation in gas-rich mergers. The parallel proposition that cores are simply the by-products of gas-free stellar mergers is less compelling. For example, core galaxies accrete small, dense, gas-free galaxies at a rate sufficient to fill in low-density cores if the satellites survived and sank to the center. An alternative model for core formation involves the orbital decay of massive black holes (BHs): the BH may heat and eject stars from the center, eroding a power law if any exists and scouring out a core. An average BH mass per spheroid of 0.002 times the stellar mass yields reasonably good agreement with the masses and radii of observed cores and in addition is consistent with the energetics of AGNs and kinematic detections of BHs in nearby galaxies.

810 citations

Journal ArticleDOI
TL;DR: Recordings from many uniform seismometers in a well-defined, closely spaced configuration produce high-quality and homogeneous data sets, which can be used to study the Earth's structure in great detail.
Abstract: [1] Since their development in the 1960s, seismic arrays have given a new impulse to seismology. Recordings from many uniform seismometers in a well-defined, closely spaced configuration produce high-quality and homogeneous data sets, which can be used to study the Earth's structure in great detail. Apart from an improvement of the signal-to-noise ratio due to the simple summation of the individual array recordings, seismological arrays can be used in many different ways to study the fine-scale structure of the Earth's interior. They have helped to study such different structures as the interior of volcanos, continental crust and lithosphere, global variations of seismic velocities in the mantle, the core-mantle boundary and the structure of the inner core. For this purpose many different, specialized array techniques have been developed and applied to an increasing number of high-quality array data sets. Most array methods use the ability of seismic arrays to measure the vector velocity of an incident wave front, i.e., slowness and back azimuth. This information can be used to distinguish between different seismic phases, separate waves from different seismic events and improve the signal-to-noise ratio by stacking with respect to the varying slowness of different phases. The vector velocity information of scattered or reflected phases can be used to determine the region of the Earth from whence the seismic energy comes and with what structures it interacted. Therefore seismic arrays are perfectly suited to study the small-scale structure and variations of the material properties of the Earth. In this review we will give an introduction to various array techniques which have been developed since the 1960s. For each of these array techniques we give the basic mathematical equations and show examples of applications. The advantages and disadvantages and the appropriate applications and restrictions of the techniques will also be discussed. 
The main methods discussed are the beam-forming method, which forms the basis for several other methods, different slant stacking techniques, and frequency–wave number analysis. Finally, some methods used in exploration geophysics that have been adopted for global seismology are introduced. This is followed by a description of temporary and permanent arrays installed in the past, as well as existing arrays and seismic networks. We highlight their purposes and discuss briefly the advantages and disadvantages of different array configurations.
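The beam-forming method that the review names as the basis for several other techniques can be sketched as a delay-and-sum stack: each trace is time-shifted by the plane-wave delay implied by a trial slowness vector, and the shifted traces are averaged so that a coherent arrival with that slowness stacks constructively. This is a minimal illustration, not code from the paper; the function name, the 2-D station geometry, and the whole-sample shift are simplifying assumptions.

```python
import numpy as np

def delay_and_sum(traces, positions, slowness_vec, dt):
    """Delay-and-sum beam for a small seismic array (illustrative sketch).

    traces:       (n_stations, n_samples) array of recordings
    positions:    (n_stations, 2) station coordinates in km
    slowness_vec: horizontal slowness (s_x, s_y) in s/km
    dt:           sample interval in s
    """
    n_sta, n_samp = traces.shape
    beam = np.zeros(n_samp)
    for i in range(n_sta):
        # Plane-wave delay at station i: dot product of position and slowness.
        delay = positions[i] @ np.asarray(slowness_vec)  # seconds
        # Shift the trace back by that delay (rounded to whole samples).
        shift = int(round(delay / dt))
        beam += np.roll(traces[i], -shift)
    # Averaging suppresses incoherent noise by roughly sqrt(n_sta).
    return beam / n_sta
```

Scanning a grid of trial slowness vectors and taking the beam power at each one yields the frequency-wave number style slowness/back-azimuth estimate the review describes.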

809 citations

Journal ArticleDOI
TL;DR: The Modules for Experiments in Stellar Astrophysics (MESA) software instrument has been updated with new capabilities, including the handling of floating point exceptions and stellar model optimization, along with four new software tools.
Abstract: We update the capabilities of the software instrument Modules for Experiments in Stellar Astrophysics (MESA) and enhance its ease of use and availability. Our new approach to locating convective boundaries is consistent with the physics of convection, and yields reliable values of the convective-core mass during both hydrogen- and helium-burning phases. Stars that become white dwarfs cool to the point where the electrons are degenerate and the ions are strongly coupled, a realm now available to study with MESA due to improved treatments of element diffusion, latent heat release, and blending of equations of state. Studies of the final fates of massive stars are extended in MESA by our addition of an approximate Riemann solver that captures shocks and conserves energy to high accuracy during dynamic epochs. We also introduce a 1D capability for modeling the effects of Rayleigh–Taylor instabilities that, in combination with the coupling to a public version of the radiation transfer instrument, creates new avenues for exploring Type II supernova properties. These capabilities are exhibited with exploratory models of pair-instability supernovae, pulsational pair-instability supernovae, and the formation of stellar-mass black holes. The applicability of MESA is now widened by the capability to import multidimensional hydrodynamic models into MESA. We close by introducing software modules for handling floating point exceptions and stellar model optimization, as well as four new software tools, among them MESA-Docker and mesastar.org, to enhance MESA's education and research impact.

808 citations

Proceedings ArticleDOI
23 Oct 1995
TL;DR: A solution is given to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs.
Abstract: In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T^(-1/3)), and we give an improved rate of convergence when the best arm has fairly low payoff. We also consider a setting in which the player has a team of "experts" advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary.
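The exponential-weighting strategy behind this line of work (the Exp3 family) can be sketched as follows. This is an illustrative simplification under assumed parameter names, not the exact algorithm or notation of the paper: each arm keeps a weight, play is drawn from the weights mixed with uniform exploration, and an importance-weighted reward estimate updates the played arm's weight without any statistical assumptions on the payoff sequence.

```python
import math
import random

def exp3(num_arms, num_rounds, reward_fn, gamma=0.1):
    """Exp3-style adversarial bandit sketch.

    reward_fn(t, arm) -> reward in [0, 1]; the payoff sequence may be
    fixed by an adversary, since no stochastic assumptions are made.
    """
    weights = [1.0] * num_arms
    total_reward = 0.0
    for t in range(num_rounds):
        w_sum = sum(weights)
        # Mix the weight-proportional distribution with uniform exploration.
        probs = [(1 - gamma) * w / w_sum + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        reward = reward_fn(t, arm)
        total_reward += reward
        # Importance-weighted estimate keeps unplayed arms' estimates unbiased.
        est = reward / probs[arm]
        weights[arm] *= math.exp(gamma * est / num_arms)
    return total_reward
```

Against a fixed best arm, the mixture's cumulative reward approaches that arm's total, which is the per-round guarantee the abstract describes.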

807 citations


Authors

Showing all 15733 results

Name                     H-index   Papers   Citations
David J. Schlegel        193       600      193,972
David R. Williams        178       2,034    138,789
John R. Yates            177       1,036    129,029
David Haussler           172       488      224,960
Evan E. Eichler          170       567      150,409
Anton M. Koekemoer       168       1,127    106,796
Mark Gerstein            168       751      149,578
Alexander S. Szalay      166       936      145,745
Charles M. Lieber        165       521      132,811
Jorge E. Cortes          163       2,784    124,154
M. Razzano               155       515      106,357
Lars Hernquist           148       598      88,554
Aaron Dominguez          147       1,968    113,224
Taeghwan Hyeon           139       563      75,814
Garth D. Illingworth     137       505      61,793
Network Information
Related Institutions (5)
University of California, Berkeley
265.6K papers, 16.8M citations

94% related

Massachusetts Institute of Technology
268K papers, 18.2M citations

93% related

University of Illinois at Urbana–Champaign
225.1K papers, 10.1M citations

92% related

Max Planck Society
406.2K papers, 19.5M citations

92% related

Stanford University
320.3K papers, 21.8M citations

91% related

Performance
Metrics
No. of papers from the Institution in previous years
Year    Papers
2023    51
2022    328
2021    2,157
2020    2,353
2019    2,209
2018    2,157