LSST: from Science Drivers to Reference Design and Anticipated Data Products
Summary
1. Introduction
- Major advances in our understanding of the universe have historically arisen from dramatic improvements in our ability to "see."
- Even with all existing telescope facilities, only a small fraction of the observable universe has been surveyed (except when considering the most luminous quasars).
- The 2010 report "New Worlds, New Horizons in Astronomy and Astrophysics" by the NRC Committee for a Decadal Survey of Astronomy and Astrophysics (National Research Council 2010) ranked LSST as its top priority for large ground-based projects, and in 2014 May the National Science Board approved the project for construction.
- The community involvement is discussed in Section 5, and broad educational and societal impacts of the project are discussed in Section 6.
2. From Science Drivers to Reference Design
- The most important characteristic that determines the speed at which a system can survey a given sky area to a given flux limit (i.e., its depth) is its étendue (or grasp), the product of its primary mirror area and the angular area of its field of view (for a given set of observing conditions, such as seeing and sky brightness).
- Guided by the community-wide input assembled in the report of the Science Working Group of the LSST in 2004 (Science Working Group of the LSST & Strauss 2004), the LSST is designed to achieve goals set by four main science themes: probing dark energy and dark matter, taking an inventory of the solar system, exploring the transient optical sky, and mapping the Milky Way.
- Each of these four themes itself encompasses a variety of analyses, with varying sensitivity to instrumental and system parameters.
- These themes fully exercise the technical capabilities of the system, such as photometric and astrometric accuracy and image quality.
- About 90% of the observing time will be devoted to a deep-wide-fast (main) survey mode.
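The étendue figure of merit can be computed directly from the system parameters quoted above. A minimal sketch (the 6.4 m effective diameter and 9.6 deg² field of view are from the text; the arithmetic itself is illustrative):

```python
import math

# Étendue (grasp) = primary mirror area × field-of-view solid angle.
# Values from the text: 6.4 m effective primary diameter, 9.6 deg^2 FOV.
D_eff = 6.4   # m, effective primary mirror diameter
A_fov = 9.6   # deg^2, field-of-view area

mirror_area = math.pi * (D_eff / 2) ** 2  # m^2
etendue = mirror_area * A_fov             # m^2 deg^2

print(f"mirror area = {mirror_area:.1f} m^2")    # ~32.2 m^2
print(f"etendue     = {etendue:.0f} m^2 deg^2")  # ~309 m^2 deg^2
```

At fixed observing conditions, survey speed to a given depth scales linearly with this product, which is why it is the single most important system parameter.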
2.1. The Main Science Drivers
- The main science drivers are used to optimize various system parameters.
- Ultimately, in this high-dimensional parameter space, there is a manifold defined by the total project cost.
- Here the authors summarize the dozen or so most important interlocking constraints on data and system properties placed by the four main science themes, including:
- Parameters characterizing data processing and data access (such as the maximum time allowed after each exposure to report transient sources, and the maximum allowed software contribution to measurement errors).
2.1.1. Probing Dark Energy and Dark Matter
- Current models of cosmology require the existence of both dark matter and dark energy to match observational constraints (e.g., Riess et al. 1998).
- Distinguishing competing models for the physical nature of dark energy, or alternative explanations involving modifications of the general theory of relativity, will require percent-level measurements of both the cosmic expansion and the growth of dark matter structure as a function of redshift.
- These investigations require deep wide-area multicolor imaging with stringent requirements on shear systematics in at least two bands, and excellent photometry in at least five bands to measure photometric redshifts (a requirement shared with LSS, and indeed all extragalactic science drivers).
- Rather than simply co-adding all images in a given region of sky, the individual exposures, each with their own PSF and noise characteristics, should be analyzed simultaneously to optimally measure the shapes of galaxies (Tyson et al. 2008; Jee & Tyson 2011) .
2.1.2. Taking an Inventory of the Solar System
- The small-body populations in the solar system, such as asteroids, trans-Neptunian objects (TNOs), and comets, are remnants of its early assembly.
- The history of accretion, collisional grinding, and perturbation by existing and vanished giant planets is preserved in the orbital elements and size distributions of those objects.
- Individual exposures should be shorter than about 30 s to minimize the effects of trailing for the majority of moving objects.
- The images must be well sampled to enable accurate astrometry, with absolute accuracy of at least 0.1 arcsec, in order to measure orbital parameters of TNOs with enough precision to constrain theoretical models and enable prediction of occultations.
2.1.3. Exploring the Transient Optical Sky
- Recent surveys have shown the power of measuring variability of celestial sources for studying gravitational lensing, searching for SNe, determining the physical properties of gamma-ray burst sources, discovering gravitational wave counterparts, probing the structure of active galactic nuclei (AGNs), studying variable star populations, discovering exoplanets, and many other subjects at the forefront of astrophysics (see the LSST Science Book).
- Such a survey would likely detect microlensing by stars and compact objects in the Milky Way, but also in the Local Group and perhaps beyond (de Jong et al. 2008) .
- Time series with cadences ranging from 1 minute to 10 yr should be probed over a significant fraction of the sky.
- Observations over a decade will enable the study of long-period variables, intermediate-mass black holes, and quasars (Kaspi et al. 2007).
- The next frontier in this field will require measuring the colors of fast transients and probing variability at faint magnitudes.
2.1.4. Mapping the Milky Way
- A major challenge in extragalactic cosmology today concerns the formation of structure on subgalactic scales, where baryon physics becomes important, and the nature of dark matter may manifest itself in observable ways (e.g., Weinberg et al. 2015) .
- The Milky Way and its environment provide a unique data set for understanding the detailed processes that shape galaxy formation and for testing the small-scale predictions of the standard cosmological model.
- In order to probe the halo out to its presumed edge at ∼100 kpc (Ivezić et al. 2004 ) with main-sequence stars, the total co-added depth must reach r>27, with a similar depth in the g band.
- This value was also chosen to approximately match the accuracy anticipated for the Gaia mission (Perryman et al. 2001; de Bruijne 2012) at its faint limit (r∼20).
- To achieve the required proper-motion and parallax accuracy with an assumed astrometric accuracy of 10 mas per observation per coordinate, approximately 1000 separate observations are required.
2.1.5. A Summary and Synthesis of Science-driven Constraints on Data Properties
- The goals of all the science programs discussed above (and many more, of course) can be accomplished by satisfying the minimal constraints listed below.
- Astrometric precision should maintain the systematic limit set by the atmosphere, of about 10 mas per visit at the bright end (on scales below 20 arcmin).
- As described above, the authors are planning to split each visit into two exposures.
- The distribution of visits per filter should enable accurate photometric redshifts, separation of stellar populations, and sufficient depth to enable detection of faint extremely red sources (e.g., brown dwarfs and high-redshift quasars).
- As a result, the LSST system can adopt a highly efficient survey strategy in which a single data set serves most science programs (instead of science-specific surveys executed in series).
2.2.1. The Aperture Size
- The product of the system's étendue and the survey lifetime, for given observing conditions, determines the sky area that can be surveyed to a given depth.
- A larger field of view would lead to unacceptable deterioration of the image quality.
- For this reason, the sky area for the main survey is maximized to its practical limit, 18,000 deg², determined by the requirement to avoid air masses greater than 1.5, which would substantially deteriorate the image quality and the survey depth (see Equation (6)).
- The two requirements are compatible if the number of visits is several hundred per band, which is in good agreement with independent science-driven requirements on the latter.
- The required co-added survey depth provides a direct constraint, independent of the details of survey execution such as the exposure time per visit, on the minimum effective primary mirror diameter of 6.4 m, as illustrated in Figure 2 .
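The co-added depth of a stack of visits follows from the standard background-limited scaling, m5_coadd ≈ m5_single + 1.25 log10(N). A minimal sketch, assuming a single-visit r-band depth of 24.7 mag (an illustrative value of the right order, not the official per-band constant) and the median of ~200 r-band visits per field quoted in Section 3.1.2:

```python
import math

def coadded_depth(m5_single, n_visits):
    """5-sigma depth of a stack of n_visits equal-depth visits.
    For background-limited photometry with uncorrelated noise, depth
    improves as 2.5*log10(sqrt(n)) = 1.25*log10(n)."""
    return m5_single + 1.25 * math.log10(n_visits)

# Assumed single-visit r-band depth of 24.7 mag, ~200 r-band visits:
print(round(coadded_depth(24.7, 200), 2))  # 27.58
```

The result is consistent with the r∼27.5 co-added depth quoted for the main survey in Section 4.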
2.2.2. The Optimal Exposure Time
- The single-visit depth depends on both the primary mirror diameter and the chosen exposure time, t_vis.
- Science drivers such as SN light curves and moving objects in the solar system require that the mean revisit time n<4 days, or equivalently t_vis<40 s for the nominal values of A_sky and A_FOV.
- With the adopted optical design, described below, this effective diameter corresponds to a geometrical diameter of ∼8 m.
- Apart from z<2 quasars, practically all source populations have a differential number-count slope k (where log N(m) ∝ km) of at most 0.6 (the Euclidean value), and faint stars and galaxies have k<0.5.
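The trade-off between exposure time and revisit time can be sketched with a back-of-the-envelope model. The 18,000 deg² survey area and 9.6 deg² field of view are from the text; the usable hours per night and the per-visit overhead are our assumptions:

```python
A_sky = 18_000      # deg^2, main-survey area (from the text)
A_fov = 9.6         # deg^2, field of view (from the text)
t_night = 8 * 3600  # s of usable observing per night (assumption)
t_overhead = 9      # s per visit for slew/settle + readout (assumption)

def mean_revisit_time(t_vis):
    """Nights needed to visit every main-survey field once, ignoring
    filter changes and weather (simplifying assumptions)."""
    n_fields = A_sky / A_fov
    visits_per_night = t_night / (t_vis + t_overhead)
    return n_fields / visits_per_night

print(f"{mean_revisit_time(30):.1f} nights")  # ~2.5 for t_vis = 30 s
print(f"{mean_revisit_time(40):.1f} nights")  # ~3.2 for t_vis = 40 s
```

Even this crude model reproduces the stated behavior: keeping t_vis below ~40 s keeps the mean revisit time under the required 4 days.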
2.3. System Design Trade-offs
- The authors note that the Pan-STARRS project (Kaiser et al. 2002, 2010), with similar science goals to LSST, envisions a distributed-aperture design, where the total system étendue is a sum of the étendue values for an array of small 1.8 m telescopes.
- Each of these clones would have to have its own 3-gigapixel camera (see below), and given the added risk and complexity (e.g., maintenance, data processing), the monolithic design seems advantageous for a system with such a large étendue as LSST.
- With an étendue about 6 times smaller than that of LSST (effective diameters of 6.4 vs. 3.0 m, and field-of-view areas of 9.6 vs. 7.2 deg²), and all observing conditions being equal, the PS4 system could in principle use a cadence identical to that of LSST.
- The distance limits for nearby sources, such as Milky Way stars, would drop to 60% of their corresponding LSST values, and the NEO completeness level mandated by the US Congress would not be reached.
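The "about 6 times" étendue ratio and the ~60% distance-limit figure quoted above can be reproduced from the numbers in the text; the depth-vs-étendue scaling used here assumes background-limited imaging over the same sky area in the same total time:

```python
import math

# Étendue ∝ D_eff^2 × A_fov; effective diameters and FOV areas from the text.
etendue_lsst = 6.4 ** 2 * 9.6
etendue_ps4 = 3.0 ** 2 * 7.2

ratio = etendue_lsst / etendue_ps4
print(f"etendue ratio ~ {ratio:.1f}")  # ~6.1, the "about 6 times" in the text

# Covering the same sky in the same total time, photons collected per field
# scale with étendue, so the background-limited depth difference is:
delta_m = 1.25 * math.log10(ratio)  # ~1 mag
# The distance limit for sources of fixed luminosity scales as 10^(-delta_m/5):
print(f"distance ratio ~ {10 ** (-delta_m / 5):.2f}")  # ~0.64, the ~60% above
```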
2.4. The Filter Complement
- The authors have investigated the possibility of replacing the ugrizy system with a filter complement that includes only five filters.
- Each filter width could be increased by 20% over the same wavelength range.
2.5. The Calibration Methods
- Precise determination of the PSF across each image, accurate photometric and astrometric calibration, and continuous monitoring of system performance and observing conditions will be needed to reach the full potential of the LSST mission.
- Extensive precursor data including the SDSS data set and their own data obtained using telescopes close to the LSST site of Cerro Pachón (e.g., the SOAR and Gemini South telescopes), as well as telescopes of similar aperture (e.g., Subaru), indicate that the photometric and astrometric accuracy will be limited not by their instrumentation or software, but rather by atmospheric effects.
- The dose of delivered photons is measured using a calibration photodiode whose quantum efficiency is known to high accuracy.
- Celestial spectrophotometric standard stars can be used as a separate means of photometric calibration, albeit only through the comparison of band-integrated fluxes with synthetic photometry calculations.
- Astrometric calibration will be based on the results from the Gaia mission (Lindegren et al. 2018 ), which will provide numerous high-accuracy astrometric standards in every LSST field.
2.6. The LSST Reference Design
- The authors briefly describe the reference design for the main LSST system components.
- Detailed discussion of the flow-down from science requirements to system design parameters and extensive system engineering analysis can be found in the LSST Science Book (Chaps. 2-3).
2.6.1. Telescope and Site
- The large LSST étendue is achieved in a novel three-mirror design (modified Paul-Baker Mersenne-Schmidt system; Angel et al. 2000) with a very fast f/1.234 beam.
- The optical design has been optimized to yield a large field of view (9.6 deg²), with seeing-limited image quality, across a wide wavelength band (320-1050 nm).
- The primary-tertiary mirror cell was fabricated by CAID in Tucson and is undergoing acceptance tests.
- The telescope structure is very stiff, with a high fundamental resonant frequency, which is crucial for achieving the required fast slew-and-settle times.
- Furthermore, the summit support building has been oriented with respect to the prevailing winds to shed its turbulence away from the telescope enclosure.
2.6.2. Camera
- The LSST camera provides a 3.2-gigapixel flat focal plane array, tiled by 189 4K×4K CCD science sensors with 10 μm pixels.
- The sensors are deep depleted high-resistivity silicon back-illuminated devices with a highly segmented architecture that enables the entire array to be read in 2 s.
- The sixth optical filter can replace any of the five via a procedure accomplished during daylight hours.
- Each of the 21 rafts will host three front-end electronics boards (REBs) operating in the cryostat (at −10°C) that read out the raft's nine CCDs in parallel, 16 segments per CCD (144 video channels in total, each reading 1 million pixels).
2.6.3. Data Management
- The rapid cadence and scale of the LSST observing program will produce approximately 15 TB per night of raw imaging data (about 20 TB with calibration exposures).
- The detailed outputs of the LSST Data Management system are described in Section 3.3.
- Periodically process the accumulated survey data to provide a uniform photometric and astrometric calibration, measure the properties of all detected objects, and characterize objects based on their time-dependent behavior.
- Provide enough processing, storage, and network bandwidth to enable user analyses of the data without the need for petabyte-scale data transfers.
- The other half of the DR processing will be done at CC-IN2P3, which will also have the role of "long-term storage" facility.
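A rough raw-data budget follows from the per-night figure above; the number of usable observing nights per year is our assumption, so this is an order-of-magnitude sketch rather than the project's storage plan:

```python
# ~15 TB/night of raw science images (from the text; ~20 TB with calibrations).
tb_per_night = 15
nights_per_year = 300   # assumption: usable nights after weather/maintenance
years = 10

raw_total_pb = tb_per_night * nights_per_year * years / 1000
print(f"~{raw_total_pb:.0f} PB of raw science images over the survey")  # ~45 PB
```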
2.6.4. The LSST Software Stack
- The LSST Software Stack is the data processing and analysis system developed by the LSST Project to enable LSST survey data reduction and delivery.
- The pipelines written for these surveys have demonstrated that it is possible to carry out largely autonomous data reduction of large data sets, automated detection of sources and objects, and the extraction of scientifically useful characteristics of those objects.
- The primary implementation language is Python and, where necessary for performance reasons, C++. The LSST data management software has been prototyped for over 8 yr. Besides processing simulated LSST data (Section 2.7.3), it has been used to process images from CFHTLS (Cuillandre et al. 2012) and SDSS (Abazajian et al. 2009).
- Achieving these goals requires that the source code is not only available but also appropriately documented at all levels.
2.7. Simulating the LSST System
- Typical users should not have to work directly with the C++ layer.
- A simulation framework provides such a capability, delivering a virtual prototype LSST against which design decisions, optimizations (including descoping), and trade studies can be evaluated (Connolly et al. 2014) .
- It comprises four primary components: a simulation of the survey scheduler (Section 2.7.1); databases of simulated astrophysical catalogs of stars, galaxies, quasars, and solar system objects (Section 2.7.2); a system for generating observations based on the pointing of the telescope; and a system for generating realistic LSST images of a given area of sky (Section 2.7.3).
- Computationally intensive routines are written in C/C++, with the overall framework and database interactions using Python.
2.7.1. The LSST Operations Simulator
- The LSST Operations Simulator (Delgado et al. 2014 ) was developed to enable a detailed quantitative analysis of the various science trade-offs described in this paper.
- Thus, the simulator correctly represents the variation of limiting magnitude between pairs of observations used to detect NEOs and the correlation between, for example, seasonal weather patterns and observing conditions at any given point on the sky.
- The time taken to move from one observation to the next is given by a detailed model of the camera, telescope, and dome.
- After a given exposure, all possible next observations are assigned a score that depends on their locations, times, and filters according to a set of scientific requirements that can vary with time and location.
- Results of the simulated surveys can be visualized and analyzed using a Python-based package called the Metrics Analysis Framework (MAF; Jones et al. 2014) .
2.7.2. Catalog Generation
- The simulated astronomical catalogs (CatSim; Connolly et al. 2014 ) are stored in an SQL database.
- Stellar sources are based on the Galactic structure models of Jurić et al. (2008) and include thin-disk, thick-disk, and halo star components.
- Half-light radii for bulges are estimated using the empirical absolute magnitude versus half-light radius relation given by González et al. (2009) .
- Positions for the 11 million asteroids in the simulation are stored within the base catalog (sampled once per night for the 10 yr duration of the LSST survey).
2.7.3. Image Simulations
- The framework described above provides a parameterized view of the sky above the atmosphere.
- Each photon is ray-traced through the atmosphere, telescope, and camera to generate a CCD image.
- All screens move during an exposure, with velocities derived from NOAA measurements of the wind velocities above the LSST site in Chile.
- The mirrors and lenses are simulated using geometric optics techniques in a fast ray-tracing algorithm, and all optical surfaces include a spectrum of perturbations based on design tolerances.
3. Anticipated Data Products and Their Characteristics
- The LSST observing strategy is designed to maximize the scientific throughput by minimizing slew and other downtime and by making appropriate choices of the filter bands given the realtime weather conditions.
- Using simulated surveys produced with the Operations Simulator described in Section 2.7.1, the authors illustrate predictions of LSST performance with two examples.
3.1. The Baseline LSST Surveys
- The fundamental basis of the LSST concept is to scan the sky deep, wide, and fast and to obtain a data set that simultaneously satisfies the majority of the science goals.
- The authors present here a specific realization, the so-called "universal cadence," which yields the main deep-wide-fast survey and meets their core science goals.
- At this writing, there is a vigorous discussion of cadence plans in the LSST community, exploring variants and alternatives that enhance various specific science programs, while maintaining the science requirements described in the SRD.
- The main deep-wide-fast survey will use about 90% of the observing time.
- The remaining 10% of the observing time will be used to obtain improved coverage of parameter space such as very deep (r∼26) observations, observations with very short revisit times (∼1 minute), and observations of "special" regions such as the ecliptic plane, Galactic plane, and the Large and Small Magellanic Clouds.
3.1.1. The Main Deep-wide-fast Survey and Its Extensions
- The observing strategy for the main survey will be optimized for the homogeneity of depth and number of visits.
- In times of good seeing and at low air mass, preference is given to r-and i-band observations.
- As often as possible, each field will be observed twice, with visits separated by 15-60 minutes.
- The universal cadence provides most of LSST's power for detecting NEOs and Kuiper Belt objects (KBOs) and naturally incorporates the southern half of the ecliptic within its 18,000 deg² footprint.
- The anticipated total number of visits for a 10 yr LSST survey is about 2.45 million (∼4.9 million 15 s long exposures, summing over the six filters).
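The visit totals above can be cross-checked with simple arithmetic; uniform coverage and no field overlap are simplifying assumptions:

```python
# Totals from the text: ~2.45 million visits over 10 yr, two 15 s exposures
# per visit, 9.6 deg^2 field of view, 18,000 deg^2 main-survey area.
n_visits_total = 2.45e6
a_fov, a_sky = 9.6, 18_000  # deg^2

# Naive visits per sky position (uniform coverage, no overlap assumed):
visits_per_position = n_visits_total * a_fov / a_sky
print(f"~{visits_per_position:.0f} visits per position")  # ~1300; mini-surveys
# (10% of the time) and field overlaps reduce this toward the "close to 1000"
# visits per position quoted in Section 4.

exposures_total = 2 * n_visits_total  # two 15 s exposures per visit
print(f"~{exposures_total / 1e6:.1f} million exposures")  # ~4.9 million
```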
3.1.2. Mini-surveys and Deep Drilling Fields
- Roughly 10% of the time will be allocated to other strategies that significantly enhance the scientific return.
- Deep Drilling Fields, with a much higher number of visits (≈2500-4500 in the r band) than the main survey (a median over all fields of 200 visits in the r band), are also visible as small circles.
- The individual sequences would be sensitive to 1% variability on subminute timescales, allowing discovery of planetary eclipses and of interstellar scintillation effects, expected when the light of a background star propagates through a turbulent gas medium (Moniez 2003; Habibi et al. 2011) .
- The LSST has already selected four distant extragalactic survey fields that the project guarantees to observe as Deep Drilling Fields, with deeper coverage and more frequent temporal sampling than provided by the standard LSST observing pattern.
- The timing of the visits within the season is illustrated in the figure by calculating the month within the season (shown in the y-axis location in the plot), the night within the month (x-axis location in the plot), and number of visits within each night (a small additional offset in the y-axis).
3.2. Detailed Analysis of Simulated Surveys
- As examples of analysis enabled by the Operations Simulator (Section 2.7.1), the authors describe determination of the completeness of the LSST NEO sample, and estimation of errors expected for trigonometric parallax and proper-motion measurements.
- In both examples, the conclusions crucially depend on the assumed accuracy of the photometry and astrometry, as the authors now describe.
3.2.1. Expected Photometric S/N
- The output of operations simulations is a data stream consisting of a position on the sky and the time of observation, together with observing conditions such as seeing and sky brightness.
- The expected photometric error in magnitudes (roughly the inverse of the S/N) for a single visit can be written as σ² = σ_sys² + σ_rand², where σ_sys is the systematic photometric error (due to, e.g., imperfect modeling of the PSF, but not including uncertainties in the absolute photometric zero-point) and σ_rand is the random photometric error, with σ_rand² = (0.04 − γ)x + γx² mag² and x = 10^(0.4(m−m5)).
- The constants C_m depend on the overall throughput of the instrument and are computed using the current best throughput estimates for optical elements and sensors.
- In all six bands they imply single-visit depths m5 (also listed in Table 2) that lie between the minimum and design specification values from the SRD listed in Table 1.
- The differences in performance between LSST and, for example, SDSS follow directly from these relations.
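The single-visit error model can be evaluated directly. In this sketch, γ and σ_sys are placeholder values of the right order; the official LSST constants are per-band:

```python
import math

def photometric_error(m, m5, gamma=0.039, sigma_sys=0.005):
    """Expected single-visit photometric error (mag) for a source of
    magnitude m given the 5-sigma depth m5:
      sigma^2 = sigma_sys^2 + sigma_rand^2,
      sigma_rand^2 = (0.04 - gamma)*x + gamma*x^2,  x = 10^(0.4*(m - m5)).
    gamma and sigma_sys here are illustrative, not the official
    per-band LSST constants."""
    x = 10 ** (0.4 * (m - m5))
    sigma_rand_sq = (0.04 - gamma) * x + gamma * x ** 2
    return math.sqrt(sigma_sys ** 2 + sigma_rand_sq)

# At the 5-sigma limit (m = m5), sigma_rand = 0.2 mag by construction:
print(round(photometric_error(24.7, 24.7), 3))  # ~0.2
```

Note that at m = m5, x = 1 and σ_rand² = 0.04 regardless of γ, so the model is anchored to S/N = 5 at the limiting magnitude.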
3.2.2. The NEO Completeness Analysis
- Using mean asteroid reflectance spectra (DeMeo et al. 2009), combined with the LSST bandpasses, the authors calculate expected magnitudes and colors, assuming that all PHAs are C-type asteroids.
- The top panels illustrate cumulative completeness for the LSST baseline cadence and MOPS configuration.
3.2.3. The Expected Accuracy of Trigonometric Parallax and Proper-motion Measurements
- To model the astrometric errors, the authors need to consider both random and systematic effects.
- Random astrometric errors per visit for a given star are modeled as θ/(S/N), with θ=700 mas and S/N determined using Equation (6).
- Systematic and random errors become similar at about r=22, and there are about 100 stars per LSST sensor (0.05 deg²) to this depth (and fainter than the LSST saturation limit at r∼16), even at the Galactic poles.
- The astrometric transformations from pixel to sky coordinates are modeled using low-order polynomials and standard techniques developed at the US Naval Observatory (Monet et al. 2003) .
- Hence, LSST will smoothly extend Gaia's error versus magnitude curve about 4 mag fainter .
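The astrometric error budget described above can be sketched as follows. The 700 mas θ and the ~10 mas per-visit systematic floor are from the text, while the √(12/N)/T least-squares scaling assumes uniform time sampling, a standard simplification rather than the project's full model:

```python
import math

def per_visit_astrometric_error(snr, theta=700.0, sigma_sys=10.0):
    """Per-visit astrometric error (mas): the random term theta/(S/N)
    (theta = 700 mas, from the text) added in quadrature to the
    ~10 mas per-visit systematic floor set by the atmosphere."""
    return math.sqrt((theta / snr) ** 2 + sigma_sys ** 2)

def proper_motion_error(sigma_visit, n_visits, baseline_yr=10.0):
    """Proper-motion error (mas/yr) from a least-squares line fit to
    n_visits positions spread uniformly over the baseline; the
    sqrt(12/N)/T scaling assumes uniform time sampling."""
    return sigma_visit * math.sqrt(12.0 / n_visits) / baseline_yr

# A bright (systematics-limited) star with ~1000 visits over 10 yr:
sigma_v = per_visit_astrometric_error(snr=700)  # ~10.05 mas per visit
print(round(proper_motion_error(sigma_v, 1000), 3), "mas/yr")  # ~0.11
```

For bright stars the per-visit error is dominated by the 10 mas systematic floor, so the long baseline and large number of visits, not per-visit precision, set the final proper-motion accuracy.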
3.3. Data Products and Archive Services
- Data collected by the LSST telescope and camera will be automatically processed to data products-catalogs, alerts, and reduced images-by the LSST Data Management system (Section 2.6.3).
- Objects with motions sufficient to cause trailing in a single exposure will be identified and flagged as such when the alerts are broadcast.
- An extended-source model (a constrained linear combination of two Sérsic profiles) and a point-source model with proper motion will generally be fitted to each detected object.
- As described in Section 3.1.2, approximately 10% of the observing time will be devoted to mini-surveys that do not follow the LSST baseline cadence.
- The authors will make it possible for the end users to create (or use) such user-generated products at the LSST Data Facility, using the services offered within the LSST Science Platform (Section 3.3.1).
3.3.1. The LSST Science Platform
- The LSST Science Platform (Jurić et al. 2017 ) represents LSST's vision for a large-scale astronomical data archive that can enable effective research with data sets of LSST size and complexity.
- The LSST Science Platform will be a set of web applications and services through which the users will access the LSST data products and, if desired, conduct remote analyses or create user-generated products.
- The platform makes this possible through three user-facing aspects: 1. The web Portal, designed to provide the essential data access and visualization services through a simple-to-use website.
- 2. The JupyterLab aspect, which will provide a Jupyter-Notebook-like interface and is geared toward enabling next-to-the-data remote analysis.
- 3. The Web APIs aspect, whose machine-readable interfaces will open the possibility for remote access and analysis of the LSST data set using applications that users are already comfortable with, such as TOPCAT (Taylor 2005), or libraries such as Astropy (Astropy Collaboration et al. 2013; Jenness et al. 2016).
4. Examples of LSST Science Projects
- The design and optimization of the LSST system leverage its unique capability to scan a large sky area to a faint flux limit in a short amount of time.
- The main product of the LSST system will be a multicolor ugrizy image of about half the sky to unprecedented depth (r∼27.5).
- Each sky position within the main survey area will be observed close to 1000 times, with timescales spanning seven orders of magnitude (from 30 s to 10 yr), and produce roughly 30 trillion photometric measures of celestial sources.
- It is not possible to predict all the science that LSST data will enable.
- The authors now briefly discuss a few projects to give a flavor of anticipated studies, organized by the four science themes that drive the LSST design (although some projects span more than one theme).
4.1. Probing Dark Energy and Dark Matter
- Any given probe constrains degenerate combinations of cosmological parameters, and each probe is affected by different systematics; thus, the combination of probes allows systematics to be calibrated and for degeneracies to be broken.
- The joint analysis of LSST WL and galaxy clustering is particularly powerful in constraining the dynamical behavior of dark energy, i.e., how it evolves with cosmic time or redshift (Hu & Jain 2004; Zhan 2006) .
- The sound horizon at decoupling, which is imprinted on the mass distribution at all redshifts and calibrated with the CMB, provides a standard ruler to measure the angular diameter distance as a function of redshift (Eisenstein et al. 2005).
- (Figure: forecast constraints from LSST cosmological probes after 1 yr of data (Y1; top) and the full 10 yr survey (Y10; bottom), from each probe individually and from the joint forecast including "Stage III priors" (i.e., Planck, JLA SNe, and BOSS BAOs); figures reproduced with permission.)
- Such a sample will not only provide larger statistics for the study of the Type Ia population in the universe but also be spread across the full 18,000 deg² LSST main survey footprint, allowing different probes of the large-scale structure of the low-redshift universe.
4.2. Taking an Inventory of the Solar System
- The small bodies of the solar system, such as main belt asteroids, the Trojan populations of the giant planets, and the KBOs, offer a unique insight into its early stages because they provide samples of the original solid materials of the solar nebula.
- The baseline LSST cadence will result in orbital parameters for several million objects; these will be dominated by main belt asteroids, with light curves and multicolor photometry for a substantial fraction of detected objects.
- The relationship between the gas-to-dust ratio in comets and their dynamical class (and places of formation) is a fundamental, and still unresolved, question in cometary science (see, e.g., A'Hearn et al. 1995; Bockelée-Morvan & Biver 2017).
- LSST will be some 3 mag more sensitive than current NEO surveys (like Pan-STARRS1) and will cover more sky more often.
4.3. Exploring the Transient Optical Sky
- Time domain science will greatly benefit from LSST's unique capability to simultaneously provide large-area coverage, dense temporal coverage, accurate color information, good image quality, and rapid data reduction and classification.
- LSST will extend the extrasolar planet census to larger distances within the Galaxy, thus enabling detailed studies of planet frequency as a function of stellar metallicity and parent population (e.g., Hartman et al.).
- A census of light echoes of historical explosive and eruptive transients in the Milky Way and Local Group through high-resolution time series.
- Time delays between the multiple images of strongly lensed core-collapse SNe can be used to observe the elusive shock breakout phase of the light curve, providing an unprecedented look at the earliest emission from these transients (Suwa 2018).
- Relations between quasar variability properties and luminosity, redshift, rest-frame wavelength, timescale, color, radio-jet emission, black hole mass, and Eddington-normalized luminosity will be defined with massive statistics, including the potential to detect rare but important events such as jet flares and obscuration events.
4.4. Mapping the Milky Way
- The LSST will map the Galaxy in unprecedented detail, and by doing so revolutionize the fields of Galactic astronomy and near-field cosmology.
- Over 97% of all stars eventually exhaust their fuel and cool to become white dwarfs.
- Variations in the initial mass function will be studied as a function of environment (e.g., age and metallicity).
- In summary, the LSST data will revolutionize studies of the Milky Way and the entire Local Group.
- The outermost reaches of the stellar halo are predicted to bear the most unique signatures of the Galaxy's formation (Johnston et al. 2008).
4.5. Additional Science Projects
- The experience with any large survey (e.g., SDSS, 2MASS, VISTA, WISE, GALEX, to name but a few) is that much of their most interesting science is often unrelated to the main science drivers and is often unanticipated at the time the survey is designed.
- LSST will enable far more diverse science than encompassed by the four themes that drive the system design.
- The currently available samples (e.g., Greco et al. 2018 ) are highly incomplete, especially in the southern hemisphere .
- Search for strong gravitational lenses to a faint surface brightness limit (e.g., Bartelmann et al.).
4.5.1. Synergy with Other Projects
- LSST will not operate in isolation and will greatly benefit from other precursor and coeval data at a variety of wavelengths, depths, and timescales.
- The Pan-STARRS surveys represent a valuable complement to LSST in providing northern sky coverage to a limit fainter than that of SDSS and SkyMapper.
- The LSST data stream will invigorate subsequent investigations by numerous other telescopes that will provide additional temporal, spectral, and spatial resolution coverage.
- The WL analyses from space and from the ground will also be highly complementary and will provide crucial cross-checks of one another.
- LSST will also enable multiwavelength studies of faint optical sources using gamma-ray, X-ray, IR, and radio data.
5. Community Involvement
- LSST has been conceived as a public facility: the database that it will produce, and the associated object catalogs that are generated from that database, will be made available with no proprietary period to the US and Chilean scientific communities, as well as to those international partners who contribute to operations funding.
- The LSST data management system (Section 3.3) will provide user-friendly tools to access this database, support user-initiated queries and data exploration, and carry out scientific analyses on the data, using LSST computers either at the archive facility or at the data access centers.
- The SDSS provides a good example for how the scientific community can be effective in working with large, publicly available astronomical data sets.
- The science collaborations are listed on the LSST web page, together with a description of the application process for each one.
- The LSST science collaborations in particular have helped develop the LSST science case and continue to provide advice on how to optimize their science with choices in cadence, software, and data systems.
6. Educational and Societal Impacts
- The impact and enduring societal significance of LSST will exceed its direct contributions to advances in physics and astronomy.
- LSST is uniquely positioned to have high impact with the interested public, planetariums and science centers, and citizen science projects, as well as middle school through university educational programs.
- The mission of LSST's Education and Public Outreach (EPO) program is to provide worldwide access to a subset of LSST data through accessible and engaging online experiences so anyone can explore the universe and be part of the discovery process.
- A dynamic, immersive web portal will enable members of the public to explore color images of the full LSST sky, examine objects in more detail, view events from the nightly alert stream, learn more about LSST science topics and discoveries, and investigate scientific questions that excite them using real LSST data in online science notebooks.
- The authors will also follow the International Planetarium Society's Data2Dome standard, to maximize the number of platforms that can use their assets.
7. Summary and Conclusions
- Until recently, most astronomical investigations have focused on small samples of cosmic sources or individual objects.
- The LSST will be unique: the combination of large aperture and large field of view, coupled with the needed computation power and database technology, will enable simultaneously fast and wide and deep imaging of the sky, addressing in one sky survey the broad scientific community's needs in both the time domain and deep universe.
- The design, development, and construction effort has been underway since 2006 and will continue through the onset of full survey operations.
- In 2014 LSST transitioned from the design and development phase to construction, and the Association of Universities for Research in Astronomy (AURA) has had formal responsibility for the LSST project since 2011.
- About 20 billion galaxies and a similar number of stars will be detected: for the first time in history, the number of cataloged celestial objects will exceed the number of living people!
Frequently Asked Questions (18)
Q2. What are the main constraints on the nature of dark energy?
Measurements of cosmic shear as a function of redshift allow determination of angular distances versus cosmic time, providing multiple independent constraints on the nature of dark energy.
Q3. What are the key features of the large survey volumes?
Large survey volumes are key to probing dynamical dark energy models (with subhorizon dark energy clustering or anisotropic stresses).
Q4. How deep should the r-band images be?
The images should reach a depth of at least 24.5 mag (5σ for point sources) in the r band to achieve high completeness for NEOs down to the congressionally mandated 140 m size limit.
Q5. What is the next frontier in cosmology?
The next frontier in this field will require measuring the colors of fast transients and probing variability at faint magnitudes.
Q6. What astrometric accuracy is required to measure distances to solar neighborhood stars?
In order to measure distances to solar neighborhood stars out to 300 pc (the thin-disk scale height) with geometric distance accuracy of 30% or better, trigonometric parallax measurements accurate to 1 mas (1σ) are required over 10 yr.
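As a back-of-the-envelope check of this requirement (an illustrative sketch, not from the paper), the standard parallax relation p[arcsec] = 1/d[pc] ties the two numbers together:

```python
# Parallax arithmetic: a star at d = 300 pc has parallax p = 1/d arcsec,
# so a 1 mas (1-sigma) measurement error corresponds to a ~30% distance error.

d_pc = 300.0                       # target distance (thin-disk scale height)
parallax_mas = 1000.0 / d_pc       # parallax in milliarcseconds
sigma_mas = 1.0                    # required parallax accuracy (1 sigma)

# To first order, the fractional distance error equals the fractional
# parallax error.
relative_distance_error = sigma_mas / parallax_mas

print(f"parallax at {d_pc:.0f} pc: {parallax_mas:.2f} mas")       # 3.33 mas
print(f"relative distance error: {relative_distance_error:.0%}")  # 30%
```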
Q7. How many observations are required to achieve the required astrometric accuracy?
To achieve the required proper-motion and parallax accuracy with an assumed astrometric accuracy of 10 mas per observation per coordinate, approximately 1000 separate observations are required.
Q8. What is the significance of the u band?
An SDSS-like u band (Fukugita et al. 1996) is extremely important for separating low-redshift quasars from hot stars and for estimating the metallicities of F/G main-sequence stars.
Q9. What are the remnants of the solar system?
The small-body populations in the solar system, such as asteroids, trans-Neptunian objects (TNOs), and comets, are remnants of its early assembly.
Q10. What will the survey yield?
The survey will yield contiguous overlapping imaging of over half the sky in six optical bands, with each sky location visited close to 1000 times over 10 yr.
Q11. How long should the exposure time be?
The single-visit exposure time should be less than about a minute to prevent trailing of fast-moving objects and to aid control of various systematic effects induced by the atmosphere.
Q12. How should the visits be distributed on the sky?
The distribution of visits on the sky should extend over at least ∼18,000 deg² to obtain the required number of galaxies for WL studies, with attention paid to include "special" regions such as the ecliptic and Galactic planes, and the Large and Small Magellanic Clouds (if in the Southern Hemisphere).
Q13. What astrometric accuracy is needed to measure the orbital parameters of TNOs?
The images must be well sampled to enable accurate astrometry, with absolute accuracy of at least 0.1 arcsec, in order to measure orbital parameters of TNOs with enough precision to constrain theoretical models and enable prediction of occultations.
Q14. How many times should the SN Ia light curves be revisited?
The revisit time distribution should enable determination of orbits of solar system objects and sampling of SN light curves every few days, while accommodating constraints set by proper-motion and trigonometric parallax measurements.
Q15. What has made it possible to move beyond the traditional observational paradigm?
Over the past two decades, however, advances in technology have made it possible to move beyond the traditional observational paradigm and to undertake large-scale sky surveys.
Q16. What is the significance of a large number of SNe across the sky?
a large number of SNe across the sky allows one to search for any dependence of dark energy properties on direction, which would be an indicator of new physics.
Q17. How should the image quality be maintained?
Image quality should maintain the limit set by the atmosphere (the median free-air seeing is 0.65 arcsec in the r band at the chosen site; see Figure 1) and not be degraded appreciably by the hardware.
Q18. What has the evolution of telescopes allowed us to see?
Over the past century, astronomers have developed progressively larger telescopes, allowing them to peer farther into space and further back in time.