Author

Aneta Siemiginowska

Bio: Aneta Siemiginowska is an academic researcher from Harvard University. The author has contributed to research in topics: Quasar & Luminosity. The author has an h-index of 33, co-authored 102 publications receiving 5553 citations. Previous affiliations of Aneta Siemiginowska include Smithsonian Astrophysical Observatory.


Papers
Proceedings ArticleDOI
TL;DR: The CIAO (Chandra Interactive Analysis of Observations) software package was first released in 1999 following the launch of the Chandra X-ray Observatory and is used by astronomers across the world to analyze Chandra data as well as data from other telescopes.
Abstract: The CIAO (Chandra Interactive Analysis of Observations) software package was first released in 1999 following the launch of the Chandra X-ray Observatory and is used by astronomers across the world to analyze Chandra data as well as data from other telescopes. From the earliest design discussions, CIAO was planned as a general-purpose scientific data analysis system optimized for X-ray astronomy, and consists mainly of command line tools (allowing easy pipelining and scripting) with a parameter-based interface layered on a flexible data manipulation I/O library. The same code is used for the standard Chandra archive pipeline, allowing users to recalibrate their data in a consistent way. We will discuss the lessons learned from the first six years of the software's evolution. Our initial approach to documentation evolved to concentrate on recipe-based "threads" which have proved very successful. A multi-dimensional abstract approach to data analysis has allowed new capabilities to be added while retaining existing interfaces. A key requirement for our community was interoperability with other data analysis systems, leading us to adopt standard file formats and an architecture which was as robust as possible to the input of foreign data files, as well as re-using a number of external libraries. We support users who are comfortable with coding themselves via a flexible user scripting paradigm, while the availability of tightly constrained pipeline programs is of benefit to less computationally-advanced users. As with other analysis systems, we have found that infrastructure maintenance and re-engineering is a necessary and significant ongoing effort and needs to be planned into any long-lived astronomy software.

1,145 citations

Journal ArticleDOI
TL;DR: In this article, a sample of optical light curves for 100 quasars, 70 of which have black hole mass estimates, was used to estimate the characteristic timescale and amplitude of flux variations; their approach is not affected by biases introduced from discrete sampling effects.
Abstract: We analyze a sample of optical light curves for 100 quasars, 70 of which have black hole mass estimates. Our sample is the largest and broadest used yet for modeling quasar variability. The sources in our sample have z < 2.8, 10^42 ≲ λL_λ(5100 Å) ≲ 10^46 erg s^-1, and 10^6 ≲ M_BH/M_☉ ≲ 10^10. We model the light curves as a continuous time stochastic process, providing a natural means of estimating the characteristic timescale and amplitude of quasar variations. We employ a Bayesian approach to estimate the characteristic timescale and amplitude of flux variations; our approach is not affected by biases introduced from discrete sampling effects. We find that the characteristic timescales strongly correlate with black hole mass and luminosity, and are consistent with disk orbital or thermal timescales. In addition, the amplitude of short-timescale variations is significantly anticorrelated with black hole mass and luminosity. We interpret the optical flux fluctuations as resulting from thermal fluctuations that are driven by an underlying stochastic process, such as a turbulent magnetic field. In addition, the intranight variations in optical flux implied by our empirical model are ≲ 0.02 mag, consistent with current microvariability observations of radio-quiet quasars. Our stochastic model is therefore able to unify both long- and short-timescale optical variations in radio-quiet quasars as resulting from the same underlying process, while radio-loud quasars have an additional variability component that operates on timescales ≲ 1 day.
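The continuous-time stochastic process described in the abstract can be illustrated with a minimal sketch of an Ornstein-Uhlenbeck (damped random walk) light-curve simulation. This is not the authors' code; the parameter values (a 200-day timescale, 0.2 mag amplitude, mean magnitude 19) are purely illustrative assumptions.

```python
import numpy as np

def simulate_drw(times, tau, sigma, mean_mag=19.0, seed=0):
    """Simulate a damped random walk (Ornstein-Uhlenbeck) light curve.

    tau   : characteristic (damping) timescale, in days
    sigma : long-term rms variability amplitude, in mag
    """
    rng = np.random.default_rng(seed)
    mags = np.empty(len(times))
    # draw the first epoch from the stationary distribution
    mags[0] = mean_mag + sigma * rng.standard_normal()
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        rho = np.exp(-dt / tau)  # correlation between consecutive epochs
        # exact conditional update of the OU process given the previous point
        loc = mean_mag + rho * (mags[i - 1] - mean_mag)
        scale = sigma * np.sqrt(1.0 - rho**2)
        mags[i] = loc + scale * rng.standard_normal()
    return mags

t = np.arange(0.0, 1000.0, 1.0)          # 1000 nightly epochs
lc = simulate_drw(t, tau=200.0, sigma=0.2)
```

Because the conditional update is exact, the simulation handles irregular sampling (arbitrary gaps in `times`) without discretization bias, which is the same property that lets such models sidestep discrete-sampling effects when fitting real light curves.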

670 citations

Journal ArticleDOI
TL;DR: The Chandra Source Catalog (CSC) as mentioned in this paper is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists.
Abstract: The Chandra Source Catalog (CSC) is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The first release of the CSC includes information about 94,676 distinct X-ray sources detected in a subset of public Advanced CCD Imaging Spectrometer imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents ≲ 30''. The catalog (1) provides access to the best estimates of the X-ray source properties for detected sources, with good scientific fidelity, and directly supports scientific analysis using the individual source data; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; and (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources, so that users can perform detailed further analysis using existing tools. The catalog includes real X-ray sources detected with flux estimates that are at least 3 times their estimated 1σ uncertainties in at least one energy band, while maintaining the number of spurious sources at a level of ≲ 1 false source per field for a 100 ks observation. For each detected source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics, derived from the observations in which the source is detected.
In addition to these traditional catalog elements, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively, including source images, event lists, light curves, and spectra from each observation in which a source is detected.

527 citations

Journal ArticleDOI
TL;DR: The Chandra Source Catalog (CSC) as mentioned in this paper is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists.
Abstract: The Chandra Source Catalog (CSC) is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The first release of the CSC includes information about 94,676 distinct X-ray sources detected in a subset of public ACIS imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents ≲ 30''. The catalog (1) provides access to the best estimates of the X-ray source properties for detected sources, with good scientific fidelity, and directly supports scientific analysis using the individual source data; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; and (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources, so that users can perform detailed further analysis using existing tools. The catalog includes real X-ray sources detected with flux estimates that are at least 3 times their estimated 1σ uncertainties in at least one energy band, while maintaining the number of spurious sources at a level of ≲ 1 false source per field for a 100 ks observation. For each detected source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics, derived from the observations in which the source is detected. In addition to these traditional catalog elements, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively.

493 citations

Journal ArticleDOI
TL;DR: A rigorous statistical treatment of hardness ratios that properly deals with detected photons as independent Poisson random variables and correctly deals with the non-Gaussian nature of the error propagation is developed.
Abstract: A commonly used measure to summarize the nature of a photon spectrum is the so-called hardness ratio, which compares the numbers of counts observed in different passbands. The hardness ratio is especially useful to distinguish between and categorize weak sources as a proxy for detailed spectral fitting. However, in this regime classical methods of error propagation fail, and the estimates of spectral hardness become unreliable. Here we develop a rigorous statistical treatment of hardness ratios that properly deals with detected photons as independent Poisson random variables and correctly deals with the non-Gaussian nature of the error propagation. The method is Bayesian in nature and thus can be generalized to carry out a multitude of source-population-based analyses. We verify our method with simulation studies and compare it with the classical method. We apply this method to real-world examples, such as the identification of candidate quiescent low-mass X-ray binaries in globular clusters and tracking the time evolution of a flare on a low-mass star.
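The core idea of the abstract's Bayesian treatment can be sketched by propagating Poisson count uncertainty through Monte Carlo draws. This is only an illustrative simplification, not the authors' method: it assumes background-free counts and a Jeffreys prior, under which each band's rate has an independent Gamma(counts + 1/2, 1) posterior.

```python
import numpy as np

def hardness_ratio_posterior(hard_counts, soft_counts, n_draws=100_000, seed=1):
    """Monte Carlo posterior for the hardness ratio HR = (H - S) / (H + S).

    Treats the (background-free) counts in each band as Poisson variables;
    with a Jeffreys prior, each band's rate has a Gamma(n + 1/2, 1) posterior.
    Returns the posterior median and the 16th/84th percentiles.
    """
    rng = np.random.default_rng(seed)
    lam_hard = rng.gamma(hard_counts + 0.5, 1.0, size=n_draws)
    lam_soft = rng.gamma(soft_counts + 0.5, 1.0, size=n_draws)
    hr = (lam_hard - lam_soft) / (lam_hard + lam_soft)
    lo, med, hi = np.percentile(hr, [16, 50, 84])
    return med, lo, hi

# hypothetical weak source: 3 hard-band and 10 soft-band counts
med, lo, hi = hardness_ratio_posterior(hard_counts=3, soft_counts=10)
```

Unlike classical Gaussian error propagation, the resulting interval is automatically asymmetric and always confined to the physical range [-1, 1], which is exactly where the classical method breaks down for weak sources.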

359 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: A textbook covering probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: Rubin's monograph presents multiple imputation as a principled statistical framework for handling nonresponse in surveys.
Abstract: 25. Multiple Imputation for Nonresponse in Surveys. By D. B. Rubin. ISBN 0 471 08705 X. Wiley, Chichester, 1987. 258 pp. £30.25.

3,216 citations

Journal ArticleDOI
TL;DR: The Baryon Oscillation Spectroscopic Survey (BOSS) as discussed by the authors was designed to measure the scale of baryon acoustic oscillations (BAO) in the clustering of matter over a larger volume than the combined efforts of all previous spectroscopic surveys of large-scale structure.
Abstract: The Baryon Oscillation Spectroscopic Survey (BOSS) is designed to measure the scale of baryon acoustic oscillations (BAO) in the clustering of matter over a larger volume than the combined efforts of all previous spectroscopic surveys of large-scale structure. BOSS uses 1.5 million luminous galaxies as faint as i = 19.9 over 10,000 deg² to measure BAO to redshifts z < 0.7. Observations of neutral hydrogen in the Lyα forest in more than 150,000 quasar spectra (g < 22) will constrain BAO over the redshift range 2.15 < z < 3.5. Early results from BOSS include the first detection of the large-scale three-dimensional clustering of the Lyα forest and a strong detection from the Data Release 9 data set of the BAO in the clustering of massive galaxies at an effective redshift z = 0.57. We project that BOSS will yield measurements of the angular diameter distance d_A to an accuracy of 1.0% at redshifts z = 0.3 and z = 0.57 and measurements of H(z) to 1.8% and 1.7% at the same redshifts. Forecasts for Lyα forest constraints predict a measurement of an overall dilation factor that scales the highly degenerate d_A(z) and H^{-1}(z) parameters to an accuracy of 1.9% at z ~ 2.5 when the survey is complete. Here, we provide an overview of the selection of spectroscopic targets, planning of observations, and analysis of data and data quality of BOSS.

1,938 citations

15 Mar 1979
TL;DR: In this article, many problems in the experimental estimation of model parameters are solved through use of the likelihood ratio test, with particular attention to photon counting experiments; the procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply.
Abstract: Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
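The likelihood ratio approach for photon counting data can be sketched as follows. This is an illustrative example with made-up counts, not the paper's worked examples: it compares a constant-rate model against a model with one extra free parameter (a separate rate for one bin) using the Poisson log-likelihood.

```python
import math

def poisson_loglike(counts, model):
    """Poisson log-likelihood, dropping the model-independent log(n!) term.
    (The Cash-style statistic is C = -2 times this sum.)"""
    return sum(n * math.log(m) - m for n, m in zip(counts, model))

# hypothetical binned photon counts with a possible excess in bin 3
counts = [5, 7, 6, 20, 6, 5]

# null model: a single constant rate, fitted by the sample mean
mean_rate = sum(counts) / len(counts)
ll_null = poisson_loglike(counts, [mean_rate] * len(counts))

# alternative model: one extra free parameter gives bin 3 its own rate;
# the remaining bins share the mean of the other counts
base_rate = sum(c for i, c in enumerate(counts) if i != 3) / (len(counts) - 1)
alt_model = [base_rate] * len(counts)
alt_model[3] = counts[3]
ll_alt = poisson_loglike(counts, alt_model)

# likelihood ratio statistic; under the null hypothesis it is
# approximately chi-square distributed with 1 degree of freedom
delta_C = 2.0 * (ll_alt - ll_null)
```

Because the statistic is built directly from the Poisson likelihood, it remains valid for the low-count bins where Gaussian (chi-square fitting) assumptions break down.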

1,748 citations

Journal ArticleDOI
TL;DR: This article reviews in a selective way the recent research on the interface between machine learning and the physical sciences, including conceptual developments in ML motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross fertilization between the two fields.
Abstract: Machine learning (ML) encompasses a broad range of algorithms and modeling tools used for a vast array of data processing tasks, which has entered most scientific disciplines in recent years. This article reviews in a selective way the recent research on the interface between machine learning and the physical sciences. This includes conceptual developments in ML motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross fertilization between the two fields. After giving a basic notion of machine learning methods and principles, examples are described of how statistical physics is used to understand methods in ML. This review then describes applications of ML methods in particle physics and cosmology, quantum many-body physics, quantum computing, and chemical and material physics. Research and development into novel computing architectures aimed at accelerating ML are also highlighted. Each of the sections describe recent successes as well as domain-specific methodology and challenges.

1,504 citations