
Showing papers published by the University of Victoria in 2011


Journal ArticleDOI
TL;DR: Modules for Experiments in Stellar Astrophysics (MESA) as mentioned in this paper is a suite of open source, robust, efficient, thread-safe libraries for a wide range of applications in computational stellar astrophysics.
Abstract: Stellar physics and evolution calculations enable a broad range of research in astrophysics. Modules for Experiments in Stellar Astrophysics (MESA) is a suite of open source, robust, efficient, thread-safe libraries for a wide range of applications in computational stellar astrophysics. A one-dimensional stellar evolution module, MESAstar, combines many of the numerical and physics modules for simulations of a wide range of stellar evolution scenarios ranging from very low mass to massive stars, including advanced evolutionary phases. MESAstar solves the fully coupled structure and composition equations simultaneously. It uses adaptive mesh refinement and sophisticated timestep controls, and supports shared memory parallelism based on OpenMP. State-of-the-art modules provide equation of state, opacity, nuclear reaction rates, element diffusion data, and atmosphere boundary conditions. Each module is constructed as a separate Fortran 95 library with its own explicitly defined public interface to facilitate independent development. Several detailed examples indicate the extensive verification and testing that is continuously performed and demonstrate the wide range of capabilities that MESA possesses. These examples include evolutionary tracks of very low mass stars, brown dwarfs, and gas giant planets to very old ages; the complete evolutionary track of a 1 M☉ star from the pre-main sequence (PMS) to a cooling white dwarf; the solar sound speed profile; the evolution of intermediate-mass stars through the He-core burning phase and thermal pulses on the He-shell burning asymptotic giant branch phase; the interior structure of slowly pulsating B stars and β Cepheids; the complete evolutionary tracks of massive stars from the PMS to the onset of core collapse; mass transfer from stars undergoing Roche lobe overflow; and the evolution of helium accretion onto a neutron star. MESA can be downloaded from the project Web site (http://mesa.sourceforge.net/).

3,474 citations


Journal ArticleDOI
Norman A. Grogin1, Dale D. Kocevski2, Sandra M. Faber2, Henry C. Ferguson1, Anton M. Koekemoer1, Adam G. Riess3, Viviana Acquaviva4, David M. Alexander5, Omar Almaini6, Matthew L. N. Ashby7, Marco Barden8, Eric F. Bell9, Frédéric Bournaud10, Thomas M. Brown1, Karina Caputi11, Stefano Casertano1, Paolo Cassata12, Marco Castellano, Peter Challis7, Ranga-Ram Chary13, Edmond Cheung2, Michele Cirasuolo14, Christopher J. Conselice6, Asantha Cooray15, Darren J. Croton16, Emanuele Daddi10, Tomas Dahlen1, Romeel Davé17, Duilia F. de Mello18, Duilia F. de Mello19, Avishai Dekel20, Mark Dickinson, Timothy Dolch3, Jennifer L. Donley1, James Dunlop11, Aaron A. Dutton21, David Elbaz10, Giovanni G. Fazio7, Alexei V. Filippenko22, Steven L. Finkelstein23, Adriano Fontana, Jonathan P. Gardner18, Peter M. Garnavich24, Eric Gawiser4, Mauro Giavalisco12, Andrea Grazian, Yicheng Guo12, Nimish P. Hathi25, Boris Häussler6, Philip F. Hopkins22, Jiasheng Huang26, Kuang-Han Huang1, Kuang-Han Huang3, Saurabh Jha4, Jeyhan S. Kartaltepe, Robert P. Kirshner7, David C. Koo2, Kamson Lai2, Kyoung-Soo Lee27, Weidong Li22, Jennifer M. Lotz1, Ray A. Lucas1, Piero Madau2, Patrick J. McCarthy25, Elizabeth J. McGrath2, Daniel H. McIntosh28, Ross J. McLure11, Bahram Mobasher29, Leonidas A. Moustakas13, Mark Mozena2, Kirpal Nandra30, Jeffrey A. Newman31, Sami Niemi1, Kai G. Noeske1, Casey Papovich23, Laura Pentericci, Alexandra Pope12, Joel R. Primack2, Abhijith Rajan1, Swara Ravindranath32, Naveen A. Reddy29, Alvio Renzini, Hans-Walter Rix30, Aday R. Robaina33, Steven A. Rodney3, David J. Rosario30, Piero Rosati34, S. Salimbeni12, Claudia Scarlata35, Brian Siana29, Luc Simard36, Joseph Smidt15, Rachel S. Somerville4, Hyron Spinrad22, Amber Straughn18, Louis-Gregory Strolger37, Olivia Telford31, Harry I. Teplitz13, Jonathan R. Trump2, Arjen van der Wel30, Carolin Villforth1, Risa H. Wechsler38, Benjamin J. Weiner17, Tommy Wiklind39, Vivienne Wild11, Grant W. Wilson12, Stijn Wuyts30, Hao Jing Yan40, Min S. Yun12 
TL;DR: The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) as discussed by the authors was designed to document the first third of galactic evolution, from z ≈ 8 to 1.5, and to find and measure Type Ia supernovae beyond z > 1.5 to test their accuracy as standard candles for cosmology.
Abstract: The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) is designed to document the first third of galactic evolution, from z ≈ 8 to 1.5. It will image > 250,000 distant galaxies using three separate cameras on the Hubble Space Telescope, from the mid-UV to near-IR, and will find and measure Type Ia supernovae beyond z > 1.5 to test their accuracy as standard candles for cosmology. Five premier multi-wavelength sky regions are selected, each with extensive ancillary data. The use of five widely separated fields mitigates cosmic variance and yields statistically robust and complete samples of galaxies down to a stellar mass of 10^9 solar masses to z ≈ 2, reaching the knee of the UV luminosity function of galaxies to z ≈ 8. The survey covers approximately 800 square arcminutes and is divided into two parts. The CANDELS/Deep survey (5σ point-source limit H = 27.7 mag) covers approx. 125 square arcminutes within GOODS-N and GOODS-S. The CANDELS/Wide survey includes GOODS and three additional fields (EGS, COSMOS, and UDS) and covers the full area to a 5σ point-source limit of H ≳ 27.0 mag. Together with the Hubble Ultradeep Fields, the strategy creates a three-tiered "wedding cake" approach that has proven efficient for extragalactic surveys. Data from the survey are non-proprietary and are useful for a wide variety of science investigations. In this paper, we describe the basic motivations for the survey, the CANDELS team science goals and the resulting observational requirements, the field selection and geometry, and the observing design.

2,088 citations


Journal ArticleDOI
17 Feb 2011-Nature
TL;DR: It is shown that human-induced increases in greenhouse gases have contributed to the observed intensification of heavy precipitation events found over approximately two-thirds of data-covered parts of Northern Hemisphere land areas.
Abstract: A significant effect of anthropogenic activities has already been detected in observed trends in temperature and mean precipitation. But to date, no study has formally identified such a human fingerprint on extreme precipitation — an increase in which is one of the central theoretical expectations for a warming climate. Seung-Ki Min and colleagues compare observations and simulations of rainfall between 1951 and 1999 in North America, Europe and northern Asia, and find a statistically significant effect of increased greenhouse gases on observed increases in extreme precipitation events over much of the Northern Hemisphere land area. Extremes of weather and climate can have devastating effects on human society and the environment [1, 2]. Understanding past changes in the characteristics of such events, including recent increases in the intensity of heavy precipitation events over a large part of the Northern Hemisphere land area [3–5], is critical for reliable projections of future changes. Given that atmospheric water-holding capacity is expected to increase roughly exponentially with temperature—and that atmospheric water content is increasing in accord with this theoretical expectation [6–11]—it has been suggested that human-influenced global warming may be partly responsible for increases in heavy precipitation [3, 5, 7]. Because of the limited availability of daily observations, however, most previous studies have examined only the potential detectability of changes in extreme precipitation through model–model comparisons [12–15]. Here we show that human-induced increases in greenhouse gases have contributed to the observed intensification of heavy precipitation events found over approximately two-thirds of data-covered parts of Northern Hemisphere land areas. These results are based on a comparison of observed and multi-model simulated changes in extreme precipitation over the latter half of the twentieth century analysed with an optimal fingerprinting technique. Changes in extreme precipitation projected by models, and thus the impacts of future changes in extreme precipitation, may be underestimated because models seem to underestimate the observed increase in heavy precipitation with warming [16].

1,773 citations


Journal ArticleDOI
TL;DR: A review of indices of climate extremes, including gridded indices, can be found in this article, where the authors discuss the obstacles to robustly calculating and analyzing indices and the methods developed to overcome these obstacles.
Abstract: Indices for climate variability and extremes have been used for a long time, often by assessing days with temperature or precipitation observations above or below specific physically-based thresholds. While these indices provided insight into local conditions, few physically based thresholds have relevance in all parts of the world. Therefore, indices of extremes evolved over time and now often focus on relative thresholds that describe features in the tails of the distributions of meteorological variables. In order to help understand how extremes are changing globally, a subset of the wide range of possible indices is now being coordinated internationally which allows the results of studies from different parts of the world to fit together seamlessly. This paper reviews these as well as other indices of extremes and documents the obstacles to robustly calculating and analyzing indices and the methods developed to overcome these obstacles. Gridding indices are necessary in order to compare observations with climate model output. However, gridding indices from daily data are not always straightforward because averaging daily information from many stations tends to dampen gridded extremes. The paper describes recent progress in attribution of the changes in gridded indices of extremes that demonstrates human influence on the probability of extremes. The paper also describes model projections of the future and wraps up with a discussion of ongoing efforts to refine indices of extremes as they are being readied to contribute to the IPCC's Fifth Assessment Report. WIREs Clim Change 2011, 2:851–870. doi: 10.1002/wcc.147
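The relative-threshold indices described above are simple to compute in practice. The sketch below is a hypothetical R95p-style exceedance count (days above the base-period 95th percentile of wet days) with synthetic data; the function name, the 1 mm wet-day cutoff, and the base period are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a percentile-threshold index: count days whose
# precipitation exceeds the climatological 95th percentile of wet days.
import numpy as np

def r95p_count(precip_mm, base_mm, wet_cutoff=1.0, pct=95.0):
    """Return (number of exceedance days, threshold in mm)."""
    base_mm = np.asarray(base_mm, dtype=float)
    wet = base_mm[base_mm >= wet_cutoff]   # wet days in the base period
    p95 = np.percentile(wet, pct)          # relative threshold from the tail
    precip_mm = np.asarray(precip_mm, dtype=float)
    return int(np.sum(precip_mm > p95)), p95

# Example: one target year against a synthetic 30-year base period.
rng = np.random.default_rng(0)
base_period = rng.gamma(shape=0.4, scale=8.0, size=30 * 365)  # fake daily data
year = rng.gamma(shape=0.4, scale=9.0, size=365)
n_days, p95 = r95p_count(year, base_period)
print(f"95th-percentile threshold: {p95:.1f} mm; exceedances this year: {n_days}")
```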

1,399 citations


Journal ArticleDOI
TL;DR: Psychologists must work with other scientists, technical experts, and policymakers to help citizens overcome psychological barriers that impede behavioral choices that would facilitate mitigation, adaptation, and environmental sustainability.
Abstract: Most people think climate change and sustainability are important problems, but too few global citizens engaged in high-greenhouse-gas-emitting behavior are engaged in enough mitigating behavior to stem the increasing flow of greenhouse gases and other environmental problems. Why is that? Structural barriers such as a climate-averse infrastructure are part of the answer, but psychological barriers also impede behavioral choices that would facilitate mitigation, adaptation, and environmental sustainability. Although many individuals are engaged in some ameliorative action, most could do more, but they are hindered by seven categories of psychological barriers, or “dragons of inaction”: limited cognition about the problem, ideological worldviews that tend to preclude pro-environmental attitudes and behavior, comparisons with key other people, sunk costs and behavioral momentum, discredence toward experts and authorities, perceived risks of change, and positive but inadequate behavior change. Structural barriers must be removed wherever possible, but this is unlikely to be sufficient. Psychologists must work with other scientists, technical experts, and policymakers to help citizens overcome these psychological barriers.

1,378 citations


Journal ArticleDOI
K. Abe1, N. Abgrall2, Yasuo Ajima, Hiroaki Aihara1  +413 moreInstitutions (53)
TL;DR: The T2K experiment observes indications of ν_μ → ν_e appearance in data accumulated with 1.43 × 10²⁰ protons on target, and under this hypothesis, the probability to observe six or more candidate events is 7 × 10⁻³, equivalent to 2.5σ significance.
Abstract: The T2K experiment observes indications of ν_μ → ν_e appearance in data accumulated with 1.43 × 10²⁰ protons on target. Six events pass all selection criteria at the far detector. In a three-flavor neutrino oscillation scenario with |Δm²₂₃| = 2.4 × 10⁻³ eV², sin²2θ₂₃ = 1 and sin²2θ₁₃ = 0, the expected number of such events is 1.5 ± 0.3 (syst). Under this hypothesis, the probability to observe six or more candidate events is 7 × 10⁻³, equivalent to 2.5σ significance. At 90% C.L., the data are consistent with 0.03 (0.04) < sin²2θ₁₃ < 0.28 (0.34) for δ_CP = 0 and a normal (inverted) hierarchy.
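The quoted p-value can be roughly checked with a Poisson tail calculation. The sketch below is a back-of-envelope illustration, not the collaboration's statistical procedure: it treats the 0.3-event systematic as a Gaussian spread on the expectation and marginalizes over it.

```python
# Rough check of the quoted significance: P(N >= 6) for an expected
# background of 1.5 +/- 0.3 events. Gaussian systematic is an assumption.
import numpy as np
from scipy import stats

n_obs, mu0, sigma = 6, 1.5, 0.3

# Tail probability at the nominal expectation:
p_fixed = stats.poisson.sf(n_obs - 1, mu0)   # P(N >= 6 | mu = 1.5)

# Crude marginalization over the systematic, truncated at mu > 0:
mus = np.linspace(1e-6, mu0 + 5 * sigma, 2001)
w = stats.norm.pdf(mus, mu0, sigma)
p_marg = np.sum(stats.poisson.sf(n_obs - 1, mus) * w) / np.sum(w)

print(f"fixed mu: {p_fixed:.1e}; marginalized: {p_marg:.1e}")
# Both come out at the 10^-3 level, in line with the quoted 7e-3 (2.5 sigma).
```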

1,361 citations


Journal ArticleDOI
TL;DR: This article used repeat photography, long-term ecological monitoring and dendrochronology to document shrub expansion in arctic, high-latitude and alpine tundra.
Abstract: Recent research using repeat photography, long-term ecological monitoring and dendrochronology has documented shrub expansion in arctic, high-latitude and alpine tundra.

1,153 citations


Journal ArticleDOI
TL;DR: An empirical study on a popular microblog to investigate how social factors such as social support and relationship quality affect the user's intention of future participation in social commerce indicates that both factors play a critical role.
Abstract: Social commerce is emerging as an important platform in e-commerce, primarily due to the increased popularity of social networking sites such as Facebook, LinkedIn, and Twitter. To understand the user's social sharing and social shopping intention in social networking Web sites, we conducted an empirical study on a popular microblog to investigate how social factors such as social support and relationship quality affect the user's intention of future participation in social commerce. The results indicate that both factors play a critical role. Social support and Web site quality positively influence the user's intention to use social commerce and to continue using a social networking site. These effects are found to be mediated by the quality of the relationship between the user and the social networking Web site. Our findings not only help researchers interpret why social commerce has become popular, but also assist practitioners in developing better social commerce strategy.

975 citations


Journal ArticleDOI
TL;DR: This review shows that highly enhancing SERS substrates with a high degree of reliability and reproducibility can now be fabricated at relatively low cost, indicating that SERS may finally realize its full potential as a very sensitive tool for routine analytical applications.

927 citations


Journal ArticleDOI
TL;DR: This review presents the current knowledge on the role and mechanisms of polysaccharide degradation by Bacteroidetes in their respective habitats and addresses the potential links between gut and environmental bacteria through food consumption.
Abstract: Members of the diverse bacterial phylum Bacteroidetes have colonized virtually all types of habitats on Earth. They are among the major members of the microbiota of animals, especially in the gastrointestinal tract, can act as pathogens and are frequently found in soils, oceans and freshwater. In these contrasting ecological niches, Bacteroidetes are increasingly regarded as specialists for the degradation of high molecular weight organic matter, i.e., proteins and carbohydrates. This review presents the current knowledge on the role and mechanisms of polysaccharide degradation by Bacteroidetes in their respective habitats. The recent sequencing of Bacteroidetes genomes confirms the presence of numerous carbohydrate-active enzymes covering a large spectrum of substrates from plant, algal, and animal origin. Comparative genomics reveal specific Polysaccharide Utilization Loci shared between distantly related members of the phylum, either in environmental or gut-associated species. Moreover, Bacteroidetes genomes appear to be highly plastic and frequently reorganized through genetic rearrangements, gene duplications and lateral gene transfers (LGT), a feature that could have driven their adaptation to distinct ecological niches. Evidence is accumulating that the nature of the diet shapes the composition of the intestinal microbiota. We address the potential links between gut and environmental bacteria through food consumption. LGT can provide gut bacteria with original sets of utensils to degrade otherwise refractory substrates found in the diet. A more complete understanding of the genetic gateways between food-associated environmental species and intestinal microbial communities sheds new light on the origin and evolution of Bacteroidetes as animals' symbionts. It also raises the question as to how the consumption of increasingly hygienic and processed food deprives our microbiota of useful environmental genes and possibly affects our health.

910 citations


Journal ArticleDOI
TL;DR: In this article, the authors combine high-redshift Type Ia supernovae from the first 3 years of the Supernova Legacy Survey (SNLS) with other supernova (SN) samples, primarily at lower redshifts, to form a high-quality joint sample of 472 SNe (123 low-z, 93 SDSS, 242 SNLS, and 14 Hubble Space Telescope).
Abstract: We combine high redshift Type Ia supernovae from the first 3 years of the Supernova Legacy Survey (SNLS) with other supernova (SN) samples, primarily at lower redshifts, to form a high-quality joint sample of 472 SNe (123 low-$z$, 93 SDSS, 242 SNLS, and 14 Hubble Space Telescope). SN data alone require cosmic acceleration at >99.9% confidence, including systematic effects. For the dark energy equation of state parameter (assumed constant out to at least $z=1.4$) in a flat universe, we find $w = -0.91^{+0.16}_{-0.20}(\mathrm{stat}) ^{+0.07}_{-0.14} (\mathrm{sys})$ from SNe only, consistent with a cosmological constant. Our fits include a correction for the recently discovered relationship between host-galaxy mass and SN absolute brightness. We pay particular attention to systematic uncertainties, characterizing them using a systematics covariance matrix that incorporates the redshift dependence of these effects, as well as the shape-luminosity and color-luminosity relationships. Unlike previous work, we include the effects of systematic terms on the empirical light-curve models. The total systematic uncertainty is dominated by calibration terms. We describe how the systematic uncertainties can be reduced with soon to be available improved nearby and intermediate-redshift samples, particularly those calibrated onto USNO/SDSS-like systems.
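The systematics-covariance-matrix approach described above amounts to adding a systematic covariance to the statistical one and minimizing a chi-square built from the total. The sketch below is a toy illustration with hypothetical numbers, not the SNLS pipeline.

```python
# Toy illustration: systematic uncertainties enter as a covariance matrix
# that is added to the statistical covariance before fitting.
import numpy as np

def chi2(residuals, cov_stat, cov_sys):
    """chi^2 = r^T (C_stat + C_sys)^-1 r for Hubble-diagram residuals."""
    cov_total = cov_stat + cov_sys
    # Solve rather than invert explicitly, for numerical stability.
    return float(residuals @ np.linalg.solve(cov_total, residuals))

# Four supernovae: diagonal statistical errors plus a fully correlated
# calibration (zero-point) term. All numbers are hypothetical.
r = np.array([0.02, -0.05, 0.01, 0.03])     # magnitude residuals
c_stat = np.diag(np.full(4, 0.05) ** 2)
c_sys = np.full((4, 4), 0.02 ** 2)          # common calibration shift
print(f"chi^2 = {chi2(r, c_stat, c_sys):.2f}")
```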

Journal ArticleDOI
TL;DR: In this paper, the response of the second-generation Canadian earth system model (CanESM2) to historical (1850–2005) and future (2006–2100) natural and anthropogenic forcing is assessed using the newly developed representative concentration pathways (RCPs).
Abstract: The response of the second-generation Canadian earth system model (CanESM2) to historical (1850–2005) and future (2006–2100) natural and anthropogenic forcing is assessed using the newly-developed representative concentration pathways (RCPs) of greenhouse gases (GHGs) and aerosols. Allowable emissions required to achieve the future atmospheric CO2 concentration pathways are reported for the RCP 2.6, 4.5 and 8.5 scenarios. For the historical 1850–2005 period, cumulative land plus ocean carbon uptake and, consequently, cumulative diagnosed emissions compare well with observation-based estimates. The simulated historical carbon uptake is somewhat weaker for the ocean and stronger for the land relative to their observation-based estimates. The simulated historical warming of 0.9°C compares well with the observation-based estimate of 0.76 ± 0.19°C. The RCP 2.6, 4.5 and 8.5 scenarios respectively yield warmings of 1.4, 2.3, and 4.9°C and cumulative diagnosed fossil fuel emissions of 182, 643 and 1617 Pg C over the 2006–2100 period. The simulated warming of 2.3°C over the 1850–2100 period in the RCP 2.6 scenario, with the lowest concentration of GHGs, is slightly larger than the 2°C warming target set to avoid dangerous climate change by the 2009 UN Copenhagen Accord. The results of this study suggest that limiting warming to roughly 2°C by the end of this century is unlikely since it requires an immediate ramp down of emissions followed by ongoing carbon sequestration in the second half of this century.
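The "allowable (diagnosed) emissions" bookkeeping follows the global carbon budget identity: for a prescribed concentration pathway, compatible emissions equal the atmospheric CO2 growth plus land and ocean uptake. Below is a hedged sketch; the ppm-to-PgC conversion is the standard ~2.12 value and the fluxes are illustrative, not CanESM2 output.

```python
# Carbon-budget identity behind diagnosed emissions for a prescribed
# concentration pathway. All numbers below are illustrative.
PGC_PER_PPM = 2.124  # approximate Pg of carbon per ppm of atmospheric CO2

def allowable_emissions(delta_co2_ppm, land_uptake_pgc, ocean_uptake_pgc):
    """Compatible fossil-fuel emissions (Pg C) over some period."""
    atmospheric_growth = delta_co2_ppm * PGC_PER_PPM
    return atmospheric_growth + land_uptake_pgc + ocean_uptake_pgc

# Toy decade: CO2 rises 20 ppm while land and ocean each take up ~10 Pg C.
print(f"{allowable_emissions(20.0, 10.0, 10.0):.0f} Pg C")  # ~62 Pg C
```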

Journal ArticleDOI
09 Dec 2011-Science
TL;DR: The results suggest that the perception of predation risk is itself powerful enough to affect wildlife population dynamics, and should thus be given greater consideration in vertebrate conservation and management.
Abstract: Predator effects on prey demography have traditionally been ascribed solely to direct killing in studies of population ecology and wildlife management. Predators also affect the prey's perception of predation risk, but this has not been thought to meaningfully affect prey demography. We isolated the effects of perceived predation risk in a free-living population of song sparrows by actively eliminating direct predation and used playbacks of predator calls and sounds to manipulate perceived risk. We found that the perception of predation risk alone reduced the number of offspring produced per year by 40%. Our results suggest that the perception of predation risk is itself powerful enough to affect wildlife population dynamics, and should thus be given greater consideration in vertebrate conservation and management.

Journal ArticleDOI
K. Abe1, N. Abgrall2, Hiroaki Aihara1, Yasuo Ajima  +533 moreInstitutions (53)
TL;DR: The T2K experiment as discussed by the authors is a long-baseline neutrino oscillation experiment whose main goal is to measure the last unknown lepton sector mixing angle θ₁₃ by observing ν_e appearance in a ν_μ beam generated by the J-PARC accelerator.
Abstract: The T2K experiment is a long-baseline neutrino oscillation experiment. Its main goal is to measure the last unknown lepton sector mixing angle θ₁₃ by observing ν_e appearance in a ν_μ beam. It also aims to make a precision measurement of the known oscillation parameters, Δm²₂₃ and sin²2θ₂₃, via ν_μ disappearance studies. Other goals of the experiment include various neutrino cross section measurements and sterile neutrino searches. The experiment uses an intense proton beam generated by the J-PARC accelerator in Tokai, Japan, and is composed of a neutrino beamline, a near detector complex (ND280), and a far detector (Super-Kamiokande) located 295 km away from J-PARC. This paper provides a comprehensive review of the instrumentation aspect of the T2K experiment and a summary of the vital information for each subsystem.

Journal ArticleDOI
TL;DR: A tutorial on a Bayesian model selection approach that requires only a simple transformation of sum-of-squares values generated by the standard analysis of variance and obviates admonitions never to speak of accepting the null hypothesis.
Abstract: Null-hypothesis significance testing remains the standard inferential tool in cognitive science despite its serious disadvantages. Primary among these is the fact that the resulting probability value does not tell the researcher what he or she usually wants to know: How probable is a hypothesis, given the obtained data? Inspired by developments presented by Wagenmakers (Psychonomic Bulletin & Review, 14, 779–804, 2007), I provide a tutorial on a Bayesian model selection approach that requires only a simple transformation of sum-of-squares values generated by the standard analysis of variance. This approach generates a graded level of evidence regarding which model (e.g., effect absent [null hypothesis] vs. effect present [alternative hypothesis]) is more strongly supported by the data. This method also obviates admonitions never to speak of accepting the null hypothesis. An Excel worksheet for computing the Bayesian analysis is provided as supplemental material.
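The transformation described above can be sketched in a few lines. The function below follows the standard BIC approximation from Wagenmakers (2007): the BIC difference between the effect-present and effect-absent models is computed from the ANOVA sums of squares, then converted to a posterior probability of the null. The function name and example numbers are hypothetical; the paper itself supplies an Excel worksheet.

```python
# Minimal sketch of the sum-of-squares-to-BIC transformation, assuming the
# standard BIC approximation of Wagenmakers (2007).
import math

def posterior_null(ss_effect, ss_error, n, df_effect):
    """Approximate p(H0 | data) from ANOVA sums of squares.

    n: number of independent observations; df_effect: difference in free
    parameters between the effect-present and effect-absent models.
    """
    sse_null = ss_effect + ss_error   # residual SS, effect absent
    sse_alt = ss_error                # residual SS, effect present
    delta_bic = n * math.log(sse_alt / sse_null) + df_effect * math.log(n)
    bf01 = math.exp(delta_bic / 2.0)  # Bayes factor favoring the null
    return bf01 / (1.0 + bf01)

# Hypothetical example: SS_effect = 20, SS_error = 180, 40 observations, 1 df.
print(f"p(H0 | D) = {posterior_null(20.0, 180.0, 40, 1):.2f}")
```

Unlike a p-value, the output is graded: values near 0.5 indicate ambiguous evidence, while values near 0 or 1 indicate support for the alternative or the null, respectively.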

Journal ArticleDOI
TL;DR: Tannins are especially prone to oxidation in insects with high-pH guts, forming semiquinone radicals and quinones, as well as other reactive oxygen species; a greater emphasis on manipulative experiments that control tannin levels is required to make further progress on the defensive functions of tannins.

Journal ArticleDOI
TL;DR: In this paper, the authors performed two-dimensional, point-spread-function-convolved, bulge+disk decompositions in the g and r bandpasses on a sample of 1,123,718 galaxies from the Legacy area of the Sloan Digital Sky Survey Data Release Seven.
Abstract: We perform two-dimensional, point-spread-function-convolved, bulge+disk decompositions in the g and r bandpasses on a sample of 1,123,718 galaxies from the Legacy area of the Sloan Digital Sky Survey Data Release Seven. Four different decomposition procedures are investigated which make improvements to sky background determinations and object deblending over the standard SDSS procedures that lead to more robust structural parameters and integrated galaxy magnitudes and colors, especially in crowded environments. We use a set of science-based quality assurance metrics, namely, the disk luminosity-size relation, the galaxy color-magnitude diagram, and the galaxy central (fiber) colors to show the robustness of our structural parameters. The best procedure utilizes simultaneous, two-bandpass decompositions. Bulge and disk photometric errors remain below 0.1 mag down to bulge and disk magnitudes of g ≃ 19 and r ≃ 18.5. We also use and compare three different galaxy fitting models: a pure Sersic model, an n_b = 4 bulge + disk model, and a Sersic (free n_b) bulge + disk model. The most appropriate model for a given galaxy is determined by the F-test probability. All three catalogs of measured structural parameters, rest-frame magnitudes, and colors are publicly released here. These catalogs should provide an extensive comparison set for a wide range of observational and theoretical studies of galaxies.
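The F-test model selection mentioned above compares nested fits: do the extra parameters of a bulge+disk model reduce the fit statistic more than chance would? The sketch below is a hypothetical illustration assuming chi-square fit statistics, not the authors' pipeline.

```python
# Hypothetical nested-model F-test: is a bulge+disk fit justified over a
# pure Sersic fit, given each model's chi^2 and degrees of freedom?
from scipy import stats

def f_test(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """p-value that the simpler model already suffices."""
    extra = dof_simple - dof_complex  # number of added parameters
    f_stat = ((chi2_simple - chi2_complex) / extra) / (chi2_complex / dof_complex)
    return stats.f.sf(f_stat, extra, dof_complex)

# Toy numbers: the bulge+disk fit (3 extra parameters) lowers chi^2.
p = f_test(chi2_simple=1250.0, dof_simple=997, chi2_complex=1100.0, dof_complex=994)
print(f"P(improvement is chance) = {p:.2e}")  # small p favors bulge+disk
```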

Journal ArticleDOI
TL;DR: In this article, the authors present observational constraints on the nature of dark energy using the Supernova Legacy Survey three-year sample (SNLS3) of Guy et al. and Conley et al., and they find that the cosmological constraints derived from these different subsamples are consistent.
Abstract: We present observational constraints on the nature of dark energy using the Supernova Legacy Survey three-year sample (SNLS3) of Guy et al. and Conley et al. We use the 472 Type Ia supernovae (SNe Ia) in this sample, accounting for recently discovered correlations between SN Ia luminosity and host galaxy properties, and include the effects of all identified systematic uncertainties directly in the cosmological fits. Combining the SNLS3 data with the full WMAP7 power spectrum, the Sloan Digital Sky Survey luminous red galaxy power spectrum, and a prior on the Hubble constant H_0 from SHOES, in a flat universe we find Ω_m = 0.269 ± 0.015 and w = –1.061^(+0.069)_(–0.068) (where the uncertainties include all statistical and SN Ia systematic errors)—a 6.5% measure of the dark energy equation-of-state parameter w. The statistical and systematic uncertainties are approximately equal, with the systematic uncertainties dominated by the photometric calibration of the SN Ia fluxes—without these calibration effects, systematics contribute only a ~2% error in w. When relaxing the assumption of flatness, we find Ω_m = 0.271 ± 0.015, Ω_k = –0.002 ± 0.006, and w = –1.069^(+0.091)_(–0.092). Parameterizing the time evolution of w as w(a) = w_0 + w_a (1–a) gives w_0 = –0.905 ± 0.196, w_a = –0.984^(+1.094)_(–1.097) in a flat universe. All of our results are consistent with a flat, w = –1 universe. The size of the SNLS3 sample allows various tests to be performed with the SNe segregated according to their light curve and host galaxy properties. We find that the cosmological constraints derived from these different subsamples are consistent. There is evidence that the coefficient, β, relating SN Ia luminosity and color, varies with host parameters at >4σ significance (in addition to the known SN luminosity-host relation); however, this has only a small effect on the cosmological results and is currently a subdominant systematic.
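For concreteness, the w(a) = w_0 + w_a(1 − a) parameterization quoted above can be evaluated directly at a few redshifts using the flat-universe best-fit values from this entry; a small numeric illustration follows.

```python
# Evaluate the CPL-style parameterization w(a) = w0 + wa * (1 - a) at a few
# redshifts, using the best-fit flat-universe values quoted in the abstract.
def w_of_z(z, w0=-0.905, wa=-0.984):
    a = 1.0 / (1.0 + z)  # scale factor
    return w0 + wa * (1.0 - a)

for z in (0.0, 0.5, 1.0):
    print(f"z = {z:.1f}: w = {w_of_z(z):+.3f}")
# At z = 0 this reduces to w0; given the quoted uncertainty on wa
# (roughly +/- 1.1), the evolution is consistent with a constant w = -1.
```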

Journal ArticleDOI
TL;DR: In this paper, the authors examine why individuals intend to leave their jobs to start business ventures and find that work environments with an unfavorable innovation climate and/or a lack of technical excellence incentives influence entrepreneurial intentions through low job satisfaction.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, A. A. Abdelalim4  +3034 moreInstitutions (179)
TL;DR: In this article, a search for squarks and gluinos in final states containing jets, missing transverse momentum and no electrons or muons is presented, and the data were recorded by the ATLAS experiment in √s = 7 TeV proton-proton collisions at the Large Hadron Collider.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, A. A. Abdelalim4  +3104 moreInstitutions (190)
TL;DR: In this paper, the charged-particle multiplicity, its dependence on transverse momentum and pseudorapidity, and the relationship between the mean transverse momentum and the charged-particle multiplicity are measured.
Abstract: Measurements are presented from proton-proton collisions at centre-of-mass energies of √s = 0.9, 2.36 and 7 TeV recorded with the ATLAS detector at the LHC. Events were collected using a single-arm minimum-bias trigger. The charged-particle multiplicity, its dependence on transverse momentum and pseudorapidity and the relationship between the mean transverse momentum and charged-particle multiplicity are measured. Measurements in different regions of phase space are shown, providing diffraction-reduced measurements as well as more inclusive ones. The observed distributions are corrected to well-defined phase-space regions, using model-independent corrections. The results are compared to each other and to various Monte Carlo (MC) models, including a new AMBT1 pythia6 tune. In all the kinematic regions considered, the particle multiplicities are higher than predicted by the MC models. The central charged-particle multiplicity per event and unit of pseudorapidity, for tracks with p_T > 100 MeV, is measured to be 3.483 ± 0.009 (stat) ± 0.106 (syst) at √s = 0.9 TeV and 5.630 ± 0.003 (stat) ± 0.169 (syst) at √s = 7 TeV.


Journal ArticleDOI
Alex Kuo1
TL;DR: The concept and its current place in health care are discussed, and 4 aspects (management, technology, security, and legal) are used to evaluate the opportunities and challenges of this computing model.
Abstract: Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed.

Journal ArticleDOI
TL;DR: In this paper, an integrative model examining the relationships among relational, structural and cognitive dimensions of social capital, and between these dimensions and the cost and innovation performance of the firm was proposed.

Journal ArticleDOI
TL;DR: The size-tunable synthesis of thermodynamically stable (β) NaGdF4 nanoparticles (NPs) below 10 nm is reported, showing great potential as local contrast enhancement probes.
Abstract: We report on the size-tunable synthesis of thermodynamically stable (β) NaGdF4 nanoparticles (NPs) below 10 nm. Paramagnetic β-NaGdF4 NPs of four different sizes (2.5–8.0 nm with a narrow size distribution) were synthesized by simple modifications of the reaction conditions affecting nanoparticle growth dynamics. The synthesized NPs were transferred to water by exchanging the oleate ligands with biocompatible polyvinylpyrrolidone, and analyzed for their ability to affect magnetic resonance (MR) T1 longitudinal relaxivity at 1.5 T. The ionic relaxivity (per unit Gd³⁺ concentration) values increased from 3.0 mM⁻¹ s⁻¹ to 7.2 mM⁻¹ s⁻¹ with decreasing particle size, and the relaxivity of the 2.5-nm particle is almost twice that of clinically used Gd-DTPA (Magnevist). The relaxivity per contrast agent (i.e., per nanoparticle) for these NPs is 200–3000 times larger than that of the clinical agent, showing great potential as local contrast enhancement probes. The rate of increase in ionic relaxivity with decreasing…
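The ionic relaxivity r1 quoted above is conventionally the slope of the longitudinal relaxation rate 1/T1 against Gd³⁺ concentration. The sketch below extracts r1 from synthetic data; the numbers are hypothetical, not the paper's measurements.

```python
# Sketch of relaxivity extraction: r1 is the slope of
# 1/T1 = 1/T1(0) + r1 * [Gd]. Data below are synthetic.
import numpy as np

conc_mM = np.array([0.0, 0.1, 0.25, 0.5, 1.0])  # Gd3+ concentration (mM)
rate_s = 0.38 + 7.2 * conc_mM                   # fake 1/T1 values (s^-1)
rate_s += np.random.default_rng(1).normal(0, 0.02, conc_mM.size)  # noise

r1, intercept = np.polyfit(conc_mM, rate_s, 1)  # linear fit: slope = r1
print(f"r1 = {r1:.2f} mM^-1 s^-1 (diamagnetic baseline {intercept:.2f} s^-1)")
```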

Journal ArticleDOI
TL;DR: In this paper, a sample of 11,060 Sloan Digital Sky Survey galaxies with a close companion (r_p < 80 h₇₀⁻¹ kpc, ΔV < 200 km s⁻¹) was used to classify active galactic nuclei (AGN) based either on emission line ratios or on spectral classification as a quasar.
Abstract: Galaxy–galaxy interactions are predicted to cause gas inflows leading to enhanced nuclear star formation. This prediction is borne out observationally, and is also supported by the gas-phase metallicity dilution in the inner regions of galaxies in close pairs. In this paper we test the further prediction that the gas inflows lead to enhanced accretion on to the central supermassive black hole, triggering activity in the nucleus. Based on a sample of 11 060 Sloan Digital Sky Survey galaxies with a close companion (r_p < 80 h₇₀⁻¹ kpc, ΔV < 200 km s⁻¹), we classify active galactic nuclei (AGN) based either on emission line ratios or on spectral classification as a quasar. The AGN fraction in the close pairs sample is compared to a control sample of 110 600 mass- and redshift-matched control galaxies with no nearby companion. We find a clear increase in the AGN fraction in close pairs of galaxies with projected separations < 40 h₇₀⁻¹ kpc by up to a factor of 2.5 relative to the control sample [although the enhancement depends on the chosen signal-to-noise ratio (S/N) cut of the sample]. The increase in AGN fraction is strongest in equal-mass galaxy pairings, and weakest in the lower mass component of an unequal-mass pairing. The increased AGN fraction at small separations is accompanied by an enhancement in the number of ‘composite’ galaxies whose spectra are the result of photoionization by both AGN and stars. Our results indicate that AGN activity occurs (at least in some cases) well before final coalescence and concurrently with ongoing star formation. Finally, we find a marked increase at small projected separations of the fraction of pairs in which both galaxies harbour AGN. We demonstrate that the fraction of double AGN exceeds the expected random fraction, indicating that some pairs undergo correlated nuclear activity. We discuss some of the factors that have led to conflicting results in previous studies of AGN in close pairs. Taken together with complementary studies, we favour an interpretation where interactions trigger AGN, but are not the only cause of nuclear activity.
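The control-matching step above (10 controls per paired galaxy, giving 110,600 from 11,060) can be sketched as a nearest-neighbour search in stellar mass and redshift. The sketch below is illustrative, not the authors' code; the distance weights and data are hypothetical, and real studies typically also match without replacement.

```python
# Illustrative mass- and redshift-matched control selection: for each
# paired galaxy, take the 10 isolated galaxies closest in (log M*, z).
import numpy as np

def matched_controls(pairs, pool, n_per=10, w_mass=1.0, w_z=10.0):
    """pairs, pool: arrays of shape (N, 2) holding (log M*, z)."""
    controls = []
    for logm, z in pairs:
        # Weighted L1 distance in the (mass, redshift) plane.
        d = w_mass * np.abs(pool[:, 0] - logm) + w_z * np.abs(pool[:, 1] - z)
        controls.append(np.argsort(d)[:n_per])  # indices of best matches
    return np.array(controls)

rng = np.random.default_rng(2)
pair_gals = np.column_stack([rng.normal(10.5, 0.5, 5), rng.uniform(0.02, 0.15, 5)])
isolated = np.column_stack([rng.normal(10.5, 0.7, 1000), rng.uniform(0.02, 0.15, 1000)])
print(matched_controls(pair_gals, isolated).shape)  # (5, 10)
```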

Journal ArticleDOI
TL;DR: Characteristics of psychology that cross content domains and that make the field well suited for providing an understanding of climate change and addressing its challenges are highlighted and ethical imperatives for psychologists' involvement are considered.
Abstract: Global climate change poses one of the greatest challenges facing humanity in this century. This article, which introduces the American Psychologist special issue on global climate change, follows from the report of the American Psychological Association Task Force on the Interface Between Psychology and Global Climate Change. In this article, we place psychological dimensions of climate change within the broader context of human dimensions of climate change by addressing (a) human causes of, consequences of, and responses (adaptation and mitigation) to climate change and (b) the links between these aspects of climate change and cognitive, affective, motivational, interpersonal, and organizational responses and processes. Characteristics of psychology that cross content domains and that make the field well suited for providing an understanding of climate change and addressing its challenges are highlighted. We also consider ethical imperatives for psychologists' involvement and provide suggestions for ways to increase psychologists' contribution to the science of climate change.

Proceedings ArticleDOI
21 May 2011
TL;DR: In this article, the authors analyze data from Stack Overflow to categorize the kinds of questions that are asked, and explore which questions are answered well and which ones remain unanswered.
Abstract: Question and Answer (Q&A) websites, such as Stack Overflow, use social media to facilitate knowledge exchange between programmers and fill archives with millions of entries that contribute to the body of knowledge in software development. Understanding the role of Q&A websites in the documentation landscape will enable us to make recommendations on how individuals and companies can leverage this knowledge effectively. In this paper, we analyze data from Stack Overflow to categorize the kinds of questions that are asked, and to explore which questions are answered well and which ones remain unanswered. Our preliminary findings indicate that Q&A websites are particularly effective at code reviews and conceptual questions. We pose research questions and suggest future work to explore the motivations of programmers that contribute to Q&A websites, and to understand the implications of turning Q&A exchanges into technical mini-blogs through the editing of questions and answers.
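Analyses in the spirit of this paper, such as estimating how many recent questions remain unanswered, can be prototyped against the public Stack Exchange API. The sketch below is not part of the paper; the endpoint, parameters, and response fields are used to the best of our knowledge, and rate limiting and paging are ignored for brevity.

```python
# Exploratory sketch: fraction of recent Stack Overflow questions for a
# given tag that have no accepted/upvoted answer, via the Stack Exchange API.
import requests

def unanswered_fraction(tag, pagesize=100):
    resp = requests.get(
        "https://api.stackexchange.com/2.3/questions",
        params={"site": "stackoverflow", "tagged": tag,
                "order": "desc", "sort": "creation", "pagesize": pagesize},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["items"]
    unanswered = sum(1 for q in items if not q["is_answered"])
    return unanswered / max(len(items), 1)

print(f"unanswered fraction: {unanswered_fraction('java'):.0%}")
```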