Author

Anastasia C. Laity

Bio: Anastasia C. Laity is an academic researcher at the California Institute of Technology. Her research topics include exoplanets and the virtual observatory. She has an h-index of 14 and has co-authored 40 publications receiving 3,002 citations. Her previous affiliations include the NASA Exoplanet Science Institute.

Papers
Journal ArticleDOI
TL;DR: Presents the results of improving application performance through workflow restructuring, which clusters multiple tasks in a workflow into single entities.
Abstract: This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study.
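A minimal sketch of the task-clustering idea described above, assuming a workflow DAG given as a task-to-parents mapping; `cluster_by_level` and `max_cluster_size` are illustrative names, not Pegasus's actual API:

```python
from collections import defaultdict

def levels(dag):
    """Return the topological depth of every task in a DAG given as
    {task: set(parent_tasks)}."""
    depth = {}
    def visit(t):
        if t not in depth:
            depth[t] = 1 + max((visit(p) for p in dag[t]), default=-1)
        return depth[t]
    for t in dag:
        visit(t)
    return depth

def cluster_by_level(dag, max_cluster_size=4):
    """Merge tasks at the same DAG depth into clusters of at most
    max_cluster_size, so each cluster is submitted as one job instead of
    many, cutting per-task scheduling overhead."""
    by_level = defaultdict(list)
    for task, d in levels(dag).items():
        by_level[d].append(task)
    clusters = []
    for d in sorted(by_level):
        tasks = by_level[d]
        clusters += [tasks[i:i + max_cluster_size]
                     for i in range(0, len(tasks), max_cluster_size)]
    return clusters

# Example: a fan-out/fan-in workflow such as a mosaic pipeline.
dag = {"proj1": set(), "proj2": set(), "proj3": set(), "proj4": set(),
       "add": {"proj1", "proj2", "proj3", "proj4"}}
print(cluster_by_level(dag, max_cluster_size=2))
# -> [['proj1', 'proj2'], ['proj3', 'proj4'], ['add']]
```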

1,324 citations

Journal ArticleDOI
TL;DR: The NASA Exoplanet Archive is a database and toolset for the exoplanet community, containing properties of all published exoplanets; Kepler planet candidates, threshold-crossing events, data validation reports, and target stellar parameters; light curves from the Kepler and CoRoT missions and from several ground-based surveys; and spectra and radial velocity measurements from the literature.
Abstract: We describe the contents and functionality of the NASA Exoplanet Archive, a database and toolset funded by NASA to support astronomers in the exoplanet community. The current content of the database includes interactive tables containing properties of all published exoplanets, Kepler planet candidates, threshold-crossing events, data validation reports and target stellar parameters, light curves from the Kepler and CoRoT missions and from several ground-based surveys, and spectra and radial velocity measurements from the literature. Tools provided to work with these data include a transit ephemeris predictor, both for single planets and for observing locations, light curve viewing and normalization utilities, and a periodogram and phased light curve service. The archive can be accessed at http://exoplanetarchive.ipac.caltech.edu.
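The transit ephemeris predictor mentioned above reduces to simple arithmetic on a linear ephemeris; a hedged sketch, where the function name and sample values are hypothetical rather than the archive's interface:

```python
import math

def next_transits(epoch_bjd, period_days, after_bjd, n=3):
    """Predict the next n mid-transit times (BJD) after a given date,
    from a reference mid-transit epoch and orbital period.
    Linear ephemeris: T_k = epoch + k * period."""
    k = math.ceil((after_bjd - epoch_bjd) / period_days)
    return [epoch_bjd + (k + i) * period_days for i in range(n)]

# Hypothetical values for illustration only (not from the archive):
print(next_transits(epoch_bjd=2455000.0, period_days=3.52475,
                    after_bjd=2459200.0))
```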

708 citations

Journal ArticleDOI
TL;DR: Describes the conversion of the Palomar 60 inch (1.52 m) telescope from a classic night-assistant-operated telescope to a fully robotic facility designed for moderately fast (t ≲ 3 minutes) and sustained (R ≲ 23 mag) observations of gamma-ray burst afterglows and other transient events.
Abstract: We have converted the Palomar 60 inch (1.52 m) telescope from a classic night-assistant-operated telescope to a fully robotic facility. The automated system, which has been operational since 2004 September, is designed for moderately fast (t ≲ 3 minutes) and sustained (R ≲ 23 mag) observations of gamma-ray burst afterglows and other transient events. Routine queue-scheduled observations can be interrupted in response to electronic notification of transient events. An automated pipeline reduces data in real time; the data are then stored in a searchable Web-based archive for ease of distribution. We describe here the design requirements, hardware and software upgrades, and lessons learned from roboticization. We present an overview of the current system performance as well as plans for future upgrades.
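A toy sketch of the interrupt-driven queue scheduling the abstract describes, assuming a priority-queue model; the class and method names are illustrative and not the P60 control software:

```python
import heapq

class ObservationQueue:
    """Toy queue scheduler: routine targets run in priority order, and a
    transient alert (priority 0) preempts whatever would run next."""
    def __init__(self):
        self._heap, self._n = [], 0
    def add(self, target, priority):
        # The counter breaks ties so equal-priority targets stay FIFO.
        heapq.heappush(self._heap, (priority, self._n, target))
        self._n += 1
    def alert(self, target):
        self.add(target, priority=0)          # e.g. a GRB afterglow trigger
    def next_observation(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = ObservationQueue()
q.add("routine: M31 field", priority=5)
q.add("routine: SN follow-up", priority=3)
q.alert("GRB afterglow")                      # electronic transient notice
print(q.next_observation())                   # -> 'GRB afterglow'
```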

242 citations

Proceedings ArticleDOI
TL;DR: Presents the design of a grid-enabled version of Montage, an astronomical image mosaic service suitable for large-scale processing of the sky, in which re-projection jobs are added to a pool of tasks and performed by as many processors as are available.
Abstract: This paper describes the design of a grid-enabled version of Montage, an astronomical image mosaic service, suitable for large-scale processing of the sky. All the re-projection jobs can be added to a pool of tasks and performed by as many processors as are available, exploiting the parallelization inherent in the Montage architecture. We show how we can describe the Montage application in terms of an abstract workflow so that a planning tool such as Pegasus can derive an executable workflow that can be run in the Grid environment. The execution of the workflow is performed by the workflow manager DAGMan and the associated Condor-G. The grid processing will support tiling of images to a manageable size when the input images can no longer be held in memory. Montage will ultimately run operationally on the TeraGrid. We describe science applications of Montage, including its application to science product generation by Spitzer Legacy Program teams and large-scale, all-sky image processing projects.
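The re-projection pool described above parallelizes naturally because each job is independent; a minimal sketch using a process pool, where `reproject_tile` is a stand-in rather than Montage's actual interface:

```python
from concurrent.futures import ProcessPoolExecutor

def reproject_tile(tile):
    """Stand-in for one re-projection job: map one input image onto the
    common output projection. Real work would invoke a Montage tool such
    as mProject; here we just tag the tile as done."""
    return f"{tile}: reprojected"

tiles = [f"tile_{i:03d}.fits" for i in range(8)]

# Each re-projection is independent, so throughput scales with the
# number of available workers, mirroring the pool-of-tasks design.
if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for result in pool.map(reproject_tile, tiles):
            print(result)
```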

168 citations

Journal ArticleDOI
01 Jul 2009
TL;DR: Montage is a portable software toolkit for constructing custom, science-grade mosaics that preserve the astrometry and photometry of astronomical sources; it can run on both single- and multi-processor computers, including clusters and grids.
Abstract: Montage is a portable software toolkit to construct custom, science-grade mosaics that preserve the astrometry and photometry of astronomical sources. The user specifies the dataset, wavelength, sky location, mosaic size, coordinate system, projection, and spatial sampling. Montage supports massive astronomical datasets that may be stored in distributed archives. Montage can be run on both single- and multi-processor computers, including clusters and grids. Standard grid tools are used to access remote data or run Montage on remote computers. This paper describes the architecture, algorithms, performance, and usage of Montage as both a software toolkit and a grid portal.
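The abstract enumerates the inputs a Montage user supplies; a sketch collecting that specification into one structure, where the field names are ours for illustration and not Montage's:

```python
from dataclasses import dataclass

@dataclass
class MosaicSpec:
    """Illustrative container for the mosaic inputs listed in the
    abstract; field names are hypothetical, not Montage's."""
    dataset: str                # e.g. "2MASS"
    band: str                   # wavelength / filter, e.g. "K"
    center: tuple               # sky location (RA, Dec) in degrees
    size_deg: float             # mosaic width on the sky
    coord_system: str           # e.g. "Equatorial J2000"
    projection: str             # e.g. "TAN" (gnomonic)
    pixel_scale_arcsec: float   # spatial sampling

spec = MosaicSpec("2MASS", "K", (83.8, -5.4), 1.0,
                  "Equatorial J2000", "TAN", 1.0)
print(spec)
```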

165 citations


Cited by
Book ChapterDOI
30 Nov 2005
TL;DR: The principal characteristics of the latest release, the Web services-based GT4, which provides significant improvements over previous releases in terms of robustness, performance, usability, documentation, standards compliance, and functionality are summarized.
Abstract: The Globus Toolkit (GT) has been developed since the late 1990s to support the development of service-oriented distributed computing applications and infrastructures. Core GT components address, within a common framework, basic issues relating to security, resource access, resource management, data movement, resource discovery, and so forth. These components enable a broader “Globus ecosystem” of tools and components that build on, or interoperate with, core GT functionality to provide a wide range of useful application-level functions. These tools have in turn been used to develop a wide range of both “Grid” infrastructures and distributed applications. I summarize here the principal characteristics of the latest release, the Web services-based GT4, which provides significant improvements over previous releases in terms of robustness, performance, usability, documentation, standards compliance, and functionality.

1,509 citations

Journal ArticleDOI
TL;DR: Presents an improved estimate of the occurrence rate of small planets orbiting small stars, obtained by searching the full four-year Kepler data set for transiting planets with a custom detection pipeline and empirically measuring the pipeline's search completeness through transit injection and recovery simulations.
Abstract: We present an improved estimate of the occurrence rate of small planets orbiting small stars by searching the full four-year Kepler data set for transiting planets using our own planet detection pipeline and conducting transit injection and recovery simulations to empirically measure the search completeness of our pipeline. We identified 156 planet candidates, including one object that was not previously identified as a Kepler Object of Interest. We inspected all publicly available follow-up images, observing notes, and centroid analyses, and corrected for the likelihood of false positives. We evaluated the sensitivity of our detection pipeline on a star-by-star basis by injecting 2000 transit signals into the light curve of each target star. For periods shorter than 50 days, we find Earth-size planets (1−1.5 R⊕) and super-Earths (1.5−2 R⊕) per M dwarf. In total, we estimate a cumulative planet occurrence rate of 2.5 ± 0.2 planets per M dwarf with radii 1−4 R⊕ and periods shorter than 200 days. Within a conservatively defined habitable zone (HZ) based on the moist greenhouse inner limit and maximum greenhouse outer limit, we estimate an occurrence rate of Earth-size planets and super-Earths per M dwarf HZ. Adopting the broader insolation boundaries of the recent Venus and early Mars limits yields a higher estimate of Earth-size planets and super-Earths per M dwarf HZ. This suggests that the nearest potentially habitable non-transiting and transiting Earth-size planets are 2.6 ± 0.4 pc and pc away, respectively. If we include super-Earths, these distances diminish to 2.1 ± 0.2 pc and pc.
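A schematic sketch of the injection-and-recovery bookkeeping the abstract describes, with a toy detector and a made-up star count; a real pipeline re-runs the full transit search on each injected light curve and also corrects for geometric transit probability:

```python
import random

random.seed(42)

def recovered(signal_depth, noise=0.3):
    """Toy detection test: a synthetic transit counts as recovered when
    its depth exceeds a noisy threshold."""
    return signal_depth > random.gauss(0.5, noise)

def completeness(n_injections=2000):
    """Fraction of injected signals the (toy) pipeline recovers; the
    abstract's pipeline injects 2000 signals per target star."""
    hits = sum(recovered(random.uniform(0.0, 1.0))
               for _ in range(n_injections))
    return hits / n_injections

# Occurrence rate = detected candidates corrected for search completeness
# (schematically: n_detected / (completeness * n_stars_searched)).
c = completeness()
n_detected = 156      # candidate count from the abstract
n_stars = 2500        # hypothetical placeholder, not from the paper
print(f"completeness ~ {c:.2f}; occurrence ~ "
      f"{n_detected / (c * n_stars):.2f} planets per star (toy numbers)")
```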

970 citations

Journal ArticleDOI
TL;DR: The taxonomy provides end users with a mechanism for assessing the suitability of workflow systems in general and for making an informed choice about which workflow system would be a good fit for their particular application.

903 citations

Proceedings ArticleDOI
20 Apr 2010
TL;DR: Presents a particle swarm optimization (PSO)-based heuristic for scheduling applications onto cloud resources that takes into account both computation cost and data transmission cost, and shows that PSO can achieve as much as three times the cost savings of the Best Resource Selection (BRS) algorithm.
Abstract: Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically. However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the ‘execution time’. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. In this paper, we present a particle swarm optimization (PSO) based heuristic to schedule applications to cloud resources that takes into account both computation cost and data transmission cost. We experiment with a workflow application by varying its computation and communication costs. We compare the cost savings when using PSO and existing ‘Best Resource Selection’ (BRS) algorithm. Our results show that PSO can achieve: a) as much as 3 times cost savings as compared to BRS, and b) good distribution of workload onto resources.
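A minimal PSO sketch for the task-to-resource mapping problem the abstract describes, with hypothetical cost tables; rounding continuous particle positions to resource indices is a common simplification for discrete assignment and is not necessarily the paper's exact encoding:

```python
import random

random.seed(1)
N_TASKS, N_RES, SWARM, ITERS = 5, 3, 20, 100

# Hypothetical costs: exec_cost[t][r] to run task t on resource r, plus a
# flat transfer charge whenever dependent tasks land on different resources.
exec_cost = [[random.uniform(1, 10) for _ in range(N_RES)]
             for _ in range(N_TASKS)]
deps = [(0, 2), (1, 2), (2, 3), (2, 4)]   # (parent, child) task pairs
TRANSFER = 4.0

def cost(assign):
    c = sum(exec_cost[t][assign[t]] for t in range(N_TASKS))
    return c + sum(TRANSFER for a, b in deps if assign[a] != assign[b])

def decode(pos):
    # Round each continuous coordinate to a valid resource index.
    return [min(N_RES - 1, max(0, round(x))) for x in pos]

# Standard PSO update with inertia and cognitive/social pulls.
pos = [[random.uniform(0, N_RES - 1) for _ in range(N_TASKS)]
       for _ in range(SWARM)]
vel = [[0.0] * N_TASKS for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: cost(decode(p)))[:]

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(N_TASKS):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(decode(pos[i])) < cost(decode(pbest[i])):
            pbest[i] = pos[i][:]
            if cost(decode(pbest[i])) < cost(decode(gbest)):
                gbest = pbest[i][:]

print("best mapping:", decode(gbest), "cost:", round(cost(decode(gbest)), 2))
```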

837 citations