
Showing papers by "University of Victoria" published in 2013


Book
12 Nov 2013
TL;DR: The sixth edition of the Action Research Planner series, which began in 1979 with a modestly produced version for education students at Deakin University in Geelong, Australia.
Abstract: The Action Research Planner series has a long history. This is the sixth edition of a series that began in 1979 with a modestly produced version for education students at Deakin University in Geelong, Australia. A course was offered as part of an ‘upgrading’ Bachelor of Education degree designed for practising teachers. The intention was to encourage teachers to conduct small action research projects, or preferably, to participate in larger ones, and to report regularly on their action research work and reading throughout the year through a course journal. Each student was also expected to write a critical review of another student’s work, and on an aspect of the action research literature. The early Planners were somewhat restricted by their need to guide assessment tasks required by a course. Nevertheless, the Planners became popular and were used in many projects in several professional fields and community projects outside Deakin University, with varying degrees of success.

2,957 citations


01 Jan 2013
TL;DR: This chapter assesses long-term projections of climate change for the end of the 21st century and beyond, where the forced signal depends on the scenario and is typically larger than the internal variability of the climate system.
Abstract: This chapter assesses long-term projections of climate change for the end of the 21st century and beyond, where the forced signal depends on the scenario and is typically larger than the internal variability of the climate system. Changes are expressed with respect to a baseline period of 1986–2005, unless otherwise stated.

1,719 citations


Journal ArticleDOI
TL;DR: By clarifying the framework, the purposes of scoping studies are attainable and the definition is enriched, and it is recommended that researchers consider the value of such a team.
Abstract: Scoping studies are increasingly common for broadly searching the literature on a specific topic, yet researchers lack an agreed-upon definition of and framework for the methodology. In 2005, Arksey and O’Malley offered a methodological framework for conducting scoping studies. In their subsequent work, Levac et al. responded to Arksey and O’Malley’s call for advances to their framework. Our paper builds on this collective work to further enhance the methodology. This paper begins with a background on what constitutes a scoping study, followed by a discussion about four primary subjects: (1) the types of questions for which Arksey and O’Malley’s framework is most appropriate, (2) a contribution to the discussion aimed at enhancing the six steps of Arksey and O’Malley’s framework, (3) the strengths and challenges of our experience working with Arksey and O’Malley’s framework as a large, inter-professional team, and (4) lessons learned. Our goal in this paper is to add to the discussion encouraged by Arksey and O’Malley to further enhance this methodology. Performing a scoping study using Arksey and O’Malley’s framework was a valuable process for our research team, even if how it was useful was unexpected. Based on our experience, we recommend researchers be aware of their expectations for how Arksey and O’Malley’s framework might be useful in relation to their research question, and remain flexible to clarify concepts and to revise the research question as the team becomes familiar with the literature. Questions portraying comparisons, such as between interventions, programs, or approaches, seem to be the most suitable for scoping studies. We also suggest assessing the quality of studies and conducting a trial of the method before fully embarking on the charting process in order to ensure consistency.
The benefits of engaging a large, inter-professional team such as ours throughout every stage of Arksey and O’Malley’s framework far exceed the challenges and we recommend researchers consider the value of such a team. The strengths include breadth and depth of knowledge each team member brings to the study and time efficiencies. In our experience, the most significant challenges presented to our team were those related to consensus and resource limitations. Effective communication is key to the success of a large group. We propose that by clarifying the framework, the purposes of scoping studies are attainable and the definition is enriched.

1,207 citations


Journal ArticleDOI
18 Apr 2013-Nature
TL;DR: The dominance of transpiration water fluxes in continental evapotranspiration suggests that climate model development should prioritize improvements in simulations of biological fluxes rather than physical (evaporation) fluxes.
Abstract: An analysis of the relative effects of transpiration and evaporation, which can be distinguished by how they affect isotope ratios in water, shows that transpiration is by far the largest water flux from Earth’s continents, representing 80 to 90 per cent of terrestrial evapotranspiration and using half of all solar energy absorbed by land surfaces. Water fluxes from the land surface to the atmosphere are divided between evaporation, and transpiration from leaf stomata. Although a seemingly basic division between the physical and biological, there is still no consensus on the global partitioning between the two fluxes, resulting in uncertainties as to responses to future climate variations. Now, Scott Jasechko and colleagues use the isotopic signatures of transpiration and evaporation from a global data set of large lakes and reveal that enormous quantities of water — as much as 90% of total terrestrial evapotranspiration — are cycled through vegetation via transpiration. One conclusion to be drawn from this study is that the accuracy of biological — rather than physical — fluxes should be prioritized in work to improve climate models. Renewable fresh water over continents has input from precipitation and losses to the atmosphere through evaporation and transpiration. Global-scale estimates of transpiration from climate models are poorly constrained owing to large uncertainties in stomatal conductance and the lack of catchment-scale measurements required for model calibration, resulting in a range of predictions spanning 20 to 65 per cent of total terrestrial evapotranspiration (14,000 to 41,000 km3 per year) (refs 1, 2, 3, 4, 5). Here we use the distinct isotope effects of transpiration and evaporation to show that transpiration is by far the largest water flux from Earth’s continents, representing 80 to 90 per cent of terrestrial evapotranspiration. On the basis of our analysis of a global data set of large lakes and rivers, we conclude that transpiration recycles 62,000 ± 8,000 km3 of water per year to the atmosphere, using half of all solar energy absorbed by land surfaces in the process. We also calculate CO2 uptake by terrestrial vegetation by connecting transpiration losses to carbon assimilation using water-use efficiency ratios of plants, and show the global gross primary productivity to be 129 ± 32 gigatonnes of carbon per year, which agrees, within the uncertainty, with previous estimates (ref. 6). The dominance of transpiration water fluxes in continental evapotranspiration suggests that, from the point of view of water resource forecasting, climate model development should prioritize improvements in simulations of biological fluxes rather than physical (evaporation) fluxes.
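The paper's link from transpiration to gross primary productivity (GPP) via plant water-use efficiency (WUE) can be checked with a back-of-envelope sketch. The WUE value below is an assumed illustrative round number, not a figure taken from the paper:

```python
# Back-of-envelope: GPP from transpiration via water-use efficiency.
# Transpiration volume is from the abstract; the WUE value is an assumed
# ecosystem-scale round number (g C assimilated per kg H2O transpired).
transpiration_km3_per_yr = 62_000                    # from the abstract
water_kg = transpiration_km3_per_yr * 1e9 * 1000     # km3 -> m3 -> kg
wue_gC_per_kgH2O = 2.1                               # assumed, illustrative
gpp_GtC_per_yr = water_kg * wue_gC_per_kgH2O / 1e15  # g -> Gt
print(round(gpp_GtC_per_yr, 1))                      # ~130, within 129 +/- 32
```

With this assumed WUE the estimate lands comfortably inside the paper's stated 129 ± 32 GtC per year.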

969 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluated 20-year temperature and precipitation extremes and their projected future changes in an ensemble of climate models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5), updating a similar study based on the CMIP3 ensemble.
Abstract: Twenty-year temperature and precipitation extremes and their projected future changes are evaluated in an ensemble of climate models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5), updating a similar study based on the CMIP3 ensemble. The projected changes are documented for three radiative forcing scenarios. The performance of the CMIP5 models in simulating 20-year temperature and precipitation extremes is comparable to that of the CMIP3 ensemble. The models simulate late 20th century warm extremes reasonably well, compared to estimates from reanalyses. The model discrepancies in simulating cold extremes are generally larger than those for warm extremes. Simulated late 20th century precipitation extremes are plausible in the extratropics but uncertainty in extreme precipitation in the tropics and subtropics remains very large, both in the models and the observationally-constrained datasets. Consistent with CMIP3 results, CMIP5 cold extremes generally warm faster than warm extremes, mainly in regions where snow and sea-ice retreat with global warming. There are tropical and subtropical regions where warming rates of warm extremes exceed those of cold extremes. Relative changes in the intensity of precipitation extremes generally exceed relative changes in annual mean precipitation. The corresponding waiting times for late 20th century extreme precipitation events are reduced almost everywhere, except for a few subtropical regions. The CMIP5 planetary sensitivity in extreme precipitation is about 6 %/°C, with generally lower values over extratropical land.

906 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the presence of trends in annual maximum daily precipitation time series obtained from a global dataset of 8326 high-quality land-based observing stations with more than 30 years of record over the period from 1900 to 2009.
Abstract: This study investigates the presence of trends in annual maximum daily precipitation time series obtained from a global dataset of 8326 high-quality land-based observing stations with more than 30 years of record over the period from 1900 to 2009. Two complementary statistical techniques were adopted to evaluate the possible nonstationary behavior of these precipitation data. The first was a Mann‐Kendall nonparametric trend test, and it was used to evaluate the existence of monotonic trends. The second was a nonstationary generalized extreme value analysis, and it was used to determine the strength of association between the precipitation extremes and globally averaged near-surface temperature. The outcomes are that statistically significant increasing trends can be detected at the global scale, with close to two-thirds of stations showing increases. Furthermore, there is a statistically significant association with globally averaged near-surface temperature, with the median intensity of extreme precipitation changing in proportion with changes in global mean temperature at a rate of between 5.9% and 7.7% K^(−1), depending on the method of analysis. This ratio was robust irrespective of record length or time period considered and was not strongly biased by the uneven global coverage of precipitation data. Finally, there is a distinct meridional variation, with the greatest sensitivity occurring in the tropics and higher latitudes and the minima around 13°S and 11°N. The greatest uncertainty was near the equator because of the limited number of sufficiently long precipitation records, and there remains an urgent need to improve data collection in this region to better constrain future changes in tropical precipitation.
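The first of the two techniques, the Mann–Kendall test, is simple enough to sketch. This is a generic no-ties illustration on an invented series, not the authors' code:

```python
# Minimal Mann-Kendall trend test (illustrative sketch, not the authors' code).
# S counts concordant minus discordant pairs; a positive S suggests an
# increasing monotonic trend. z uses the large-n normal approximation and
# the no-ties variance formula.
import math

def mann_kendall(x):
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18  # variance of S, no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Invented annual-maximum series with an upward drift:
series = [10.0, 11.2, 10.8, 12.1, 12.5, 13.0, 12.8, 14.1]
s, z = mann_kendall(series)
print(s, round(z, 2))
```

A |z| above 1.96 would indicate a trend significant at the 5% level; real analyses would also correct for ties and serial correlation.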

825 citations


Book ChapterDOI
01 Jan 2013
TL;DR: In this paper, the authors present the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems, focusing on four essential topics of selfadaptation: design space for selfadaptive solutions, software engineering processes, from centralized to decentralized control, and practical run-time verification & validation.
Abstract: The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.

783 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a finely-binned tomographic weak lensing analysis of the Canada-France-Hawaii Telescope Lensing Survey, CFHTLenS, mitigating contamination to the signal from the presence of intrinsic galaxy alignments via the simultaneous fit of a cosmological model and an intrinsic alignment model.
Abstract: We present a finely-binned tomographic weak lensing analysis of the Canada-France-Hawaii Telescope Lensing Survey, CFHTLenS, mitigating contamination to the signal from the presence of intrinsic galaxy alignments via the simultaneous fit of a cosmological model and an intrinsic alignment model. CFHTLenS spans 154 square degrees in five optical bands, with accurate shear and photometric redshifts for a galaxy sample with a median redshift of zm = 0.70. We estimate the 21 sets of cosmic shear correlation functions associated with six redshift bins, each spanning the angular range of 1.5 < θ < 35 arcmin. We combine this CFHTLenS data with auxiliary cosmological probes: the cosmic microwave background with data from WMAP7, baryon acoustic oscillations with data from BOSS, and a prior on the Hubble constant from the HST distance ladder. This leads to constraints on the normalisation of the matter power spectrum σ8 = 0.799 ± 0.015 and the matter density parameter Ωm = 0.271 ± 0.010 for a flat ΛCDM cosmology. For a flat wCDM cosmology we constrain the dark energy equation of state parameter w = −1.02 ± 0.09. We also provide constraints for curved ΛCDM and wCDM cosmologies. We find the intrinsic alignment contamination to be galaxy-type dependent, with a significant intrinsic alignment signal found for early-type galaxies, in contrast to the late-type galaxy sample for which the intrinsic alignment signal is found to be consistent with zero.

688 citations


Journal ArticleDOI
S. Schael1, R. Barate2, R. Brunelière2, D. Buskulic2  +1672 moreInstitutions (143)
TL;DR: In this paper, the results of the four LEP experiments were combined to determine fundamental properties of the W boson and the electroweak theory, including the branching fraction of W and the trilinear gauge-boson self-couplings.

684 citations



Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, Jalal Abdallah  +2942 moreInstitutions (201)
TL;DR: In this paper, the spin and parity quantum numbers of the Higgs boson were studied based on the collision data collected by the ATLAS experiment at the LHC, and the results showed that the standard model spin-parity J(...

Journal ArticleDOI
J. P. Lees1, V. Poireau1, V. Tisserand1, E. Grauges2  +337 moreInstitutions (73)
TL;DR: As discussed by the authors, the concept for this analysis is to a large degree based on earlier BABAR work, with guidance provided by M. Mazur; the authors consulted theorists A. Datta, S. Westhoff, S. Fajfer, J. Kamenik, and I. Nisandzic on the calculations of the charged Higgs contributions to the decay rates.
Abstract: The concept for this analysis is to a large degree based on earlier BABAR work and we acknowledge the guidance provided by M. Mazur. The authors consulted with theorists A. Datta, S. Westhoff, S. Fajfer, J. Kamenik, and I. Nisandzic on the calculations of the charged Higgs contributions to the decay rates. We are grateful for the extraordinary contributions of our PEP-II colleagues in achieving the excellent luminosity and machine conditions that have made this work possible. The success of this project also relied critically on the expertise and dedication of the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and the kind hospitality extended to them. This work is supported by the U.S. Department of Energy and National Science Foundation, the Natural Sciences and Engineering Research Council (Canada), the Commissariat a l'Energie Atomique and Institut National de Physique Nucleaire et de Physique des Particules (France), the Bundesministerium fur Bildung und Forschung and Deutsche Forschungsgemeinschaft (Germany), the Istituto Nazionale di Fisica Nucleare (Italy), the Foundation for Fundamental Research on Matter (Netherlands), the Research Council of Norway, the Ministry of Education and Science of the Russian Federation, Ministerio de Economia y Competitividad (Spain), and the Science and Technology Facilities Council (United Kingdom). Individuals have received support from the Marie-Curie IEF program (European Union) and the A. P. Sloan Foundation (USA).

Journal ArticleDOI
TL;DR: In this paper, a carbon cycle-climate model intercomparison project is presented to quantify responses to emission pulses of different magnitudes injected under different conditions, and the best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10−15 yr W m−2 per kg-CO2.
Abstract: . The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase by 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10−15 yr W m−2 per kg-CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10−15 yr W m−2 per kg-CO2. Estimates for time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment and our multi-model best estimate all agree within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions, compared to present day, and lower for smaller pulses than larger pulses. In contrast, the response in temperature, sea level and ocean heat content is less sensitive to these choices. 
Although choices in pulse size, background concentration, and model lead to uncertainties, the most important and subjective choice in determining the AGWP of CO2 and the GWP is the time horizon.
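The AGWP definition quoted above (time-integrated CO2 response at the chosen horizon, multiplied by radiative efficiency) can be sketched numerically. The impulse-response coefficients and the radiative efficiency below are illustrative Bern-style round numbers, not the paper's multi-model values:

```python
# Sketch of an AGWP calculation: integrate a CO2 impulse response function
# (airborne fraction vs. time) to the 100-yr horizon, then multiply by a
# radiative efficiency per kg of CO2. All numbers here are illustrative
# Bern-style values, NOT the multi-model estimates from the paper.
import math

H = 100.0                                   # time horizon in years
a0 = 0.217                                  # effectively permanent fraction
terms = [(0.259, 172.9), (0.338, 18.51), (0.186, 1.186)]  # (a_i, tau_i in yr)

# Integral of a0 + sum a_i * exp(-t/tau_i) from 0 to H, done analytically:
integrated_irf = a0 * H + sum(a * t * (1 - math.exp(-H / t)) for a, t in terms)

re_per_kg = 1.9e-15                         # assumed W m-2 per kg CO2 in air
agwp = integrated_irf * re_per_kg           # yr W m-2 per kg CO2
print(agwp)
```

With these assumed inputs the result is on the order of 9e-14 yr W m-2 per kg CO2, i.e. in the same range as the paper's best estimate of 92.5 × 10−15.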

Journal ArticleDOI
TL;DR: In this article, the results from the Herschel Gould Belt survey for the B211/L1495 region in the Taurus molecular cloud were presented, which revealed the structure of the dense, star-forming filament B211 with unprecedented detail, along with the presence of striations perpendicular to the filament.
Abstract: We present first results from the Herschel Gould Belt survey for the B211/L1495 region in the Taurus molecular cloud. Thanks to their high sensitivity and dynamic range, the Herschel images reveal the structure of the dense, star-forming filament B211 with unprecedented detail, along with the presence of striations perpendicular to the filament and generally oriented along the magnetic field direction as traced by optical polarization vectors. Based on the column density and dust temperature maps derived from the Herschel data, we find that the radial density profile of the B211 filament approaches power-law behavior, ρ ∝ r^(−2.0±0.4), at large radii and that the temperature profile exhibits a marked drop at small radii. The observed density and temperature profiles of the B211 filament are in good agreement with a theoretical model of a cylindrical filament undergoing gravitational contraction with a polytropic equation of state: P ∝ ρ^γ and T ∝ ρ^(γ−1), with γ = 0.97 ± 0.01 < 1 (i.e., not strictly isothermal). The morphology of the column density map, where some of the perpendicular striations are apparently connected to the B211 filament, further suggests that the material may be accreting along the striations onto the main filament. The typical velocities expected for the infalling material in this picture are ~0.5–1 km s^(−1), which are consistent with the existing kinematical constraints from previous CO observations.
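Combining the two scalings quoted in the abstract shows why γ < 1 implies the observed inward temperature drop:

```latex
% Polytropic temperature-density relation with the measured density profile:
%   T \propto \rho^{\gamma-1}, \qquad \rho \propto r^{-2.0}, \qquad \gamma = 0.97
% so that
T \;\propto\; \left(r^{-2.0}\right)^{\gamma-1} \;=\; r^{-2.0(\gamma-1)} \;=\; r^{+0.06}
```

The temperature thus rises slowly with radius, i.e. it drops toward the filament axis, consistent with the marked drop at small radii reported above.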

Journal ArticleDOI
TL;DR: In this paper, the magnitude and evolution of parameters that characterize feedbacks in the coupled carbon-climate system are compared across nine Earth system models (ESMs), based on results from biogeochemically, radiatively, and fully coupled simulations in which CO2 increases at a rate of 1% yr−1.
Abstract: The magnitude and evolution of parameters that characterize feedbacks in the coupled carbon–climate system are compared across nine Earth system models (ESMs). The analysis is based on results from biogeochemically, radiatively, and fully coupled simulations in which CO2 increases at a rate of 1% yr−1. These simulations are part of phase 5 of the Coupled Model Intercomparison Project (CMIP5). The CO2 fluxes between the atmosphere and underlying land and ocean respond to changes in atmospheric CO2 concentration and to changes in temperature and other climate variables. The carbon–concentration and carbon–climate feedback parameters characterize the response of the CO2 flux between the atmosphere and the underlying surface to these changes. Feedback parameters are calculated using two different approaches. The two approaches are equivalent and either may be used to calculate the contribution of the feedback terms to diagnosed cumulative emissions. The contribution of carbon–concentration feedback to...
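The decomposition described here follows the standard Friedlingstein-style approach: the biogeochemically coupled run (CO2 rises, climate fixed) isolates the carbon-concentration feedback, while the radiatively coupled run (climate warms, carbon cycle sees fixed CO2) isolates the carbon-climate feedback. A sketch with invented illustrative numbers, not values from the paper:

```python
# Friedlingstein-style feedback parameters from idealized CMIP5-type runs.
# All numbers below are invented for illustration, not taken from the paper.

# Biogeochemically coupled run: CO2 rises but the radiation code sees fixed CO2.
dCO2 = 285.0          # ppm increase at CO2 doubling (illustrative)
dC_bgc = 250.0        # PgC taken up by land+ocean in the bgc-coupled run

# Radiatively coupled run: climate warms but the carbon cycle sees fixed CO2.
dT = 2.5              # K of warming (illustrative)
dC_rad = -120.0       # PgC lost from land+ocean in the rad-coupled run

beta = dC_bgc / dCO2  # carbon-concentration feedback parameter, PgC per ppm
gamma = dC_rad / dT   # carbon-climate feedback parameter, PgC per K
print(round(beta, 2), round(gamma, 1))
```

A positive β (more CO2, more uptake) and a negative γ (warming releases carbon) is the qualitative pattern the ESM intercomparison examines.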

Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, Jalal Abdallah4  +2942 moreInstitutions (200)
TL;DR: In this article, the production properties and couplings of the recently discovered Higgs boson using the decays into boson pairs were measured using the complete pp collision data sample recorded by the ATLAS experiment at the CERN Large Hadron Collider at centre-of-mass energies of 7 TeV and 8 TeV, corresponding to an integrated luminosity of about 25/fb.

Journal ArticleDOI
TL;DR: The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) as mentioned in this paper investigated the ability to simulate large-scale wetland characteristics and corresponding CH4 emissions.
Abstract: . Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in methods to calculate wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrate extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. 
In response to increasing global temperatures (+3.4 °C globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9 % globally spatially uniform) with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently do not have sufficient wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.

Journal ArticleDOI
TL;DR: A meta-analysis of studies assessing concordance/discordance of physical activity intention and behaviour at public health guidelines puts the intention-behaviour gap at 46%, with the discordance coming from intenders who do not act.
Abstract: Objectives The physical activity (PA) intention-behaviour gap is a topic of considerable contemporary research, given that most of our models used to understand physical activity suggest that intention is the proximal antecedent of behavioural enactment. The purpose of this study was to quantify the intention-PA gap at public health guidelines with a meta-analysis of the action control framework. Design Systematic review and meta-analysis. Methods Literature searches were conducted in July 2012 among five key search engines. This search yielded a total of 2,865 potentially relevant records; of these, 10 studies fulfilled the full eligibility criteria (N = 3,899). Results Random-effects meta-analysis procedures with correction for sampling bias were employed in the analysis for estimates of non-intenders who subsequently did not engage in physical activity (21%), non-intenders who subsequently performed physical activity (2%), intenders who were not successful at following through with their PA (36%), and successful intenders (42%). The overall intention-PA gap was 46%. Conclusion These results emphasize the weakness in early intention models for understanding PA and suggest this would be a problem during intervention. Contemporary research that is validating and exploring additional constructs (e.g., self-regulation, automaticity) that augment intention or improving the measurement of motivation seems warranted.
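The 46% gap follows directly from the four quadrant estimates reported above; a quick check:

```python
# Reproducing the intention-behaviour gap from the four quadrant estimates
# given in the abstract (shares of the pooled sample).
non_intender_inactive = 0.21
non_intender_active = 0.02
intender_inactive = 0.36   # unsuccessful intenders
intender_active = 0.42     # successful intenders

# The gap is the share of intenders who failed to follow through:
gap = intender_inactive / (intender_inactive + intender_active)
print(round(gap * 100))  # 46
```

(The four quadrants sum to 101% only because each estimate is rounded.)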

Journal ArticleDOI
TL;DR: In this paper, a community sample of 324 residents from three regions in British Columbia read information either about a climate change impact relevant to their local area, a more global one, or, in a control condition, no message.
Abstract: To help mitigate the negative effects of climate change, citizens’ attitudes and behaviors must be better understood. However, little is known about which factors predict engagement with climate change, and which messaging strategies are most effective. A community sample of 324 residents from three regions in British Columbia read information either about a climate change impact relevant to their local area, a more global one, or, in a control condition, no message. Participants indicated the extent of their climate change engagement, the strength of their attachment to their local area, and demographic information. Three significant unique predictors of climate change engagement emerged: place attachment, receiving the local message, and gender (female). These results provide empirical support for some previously proposed barriers to climate action and suggest guidelines for effective climate change communication.

Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, J. Abdallah4  +2897 moreInstitutions (184)
TL;DR: In this article, the luminosity calibration for the ATLAS detector at the LHC during pp collisions at root s = 7 TeV in 2010 and 2011 is presented, and a luminosity uncertainty of delta L/L = +/- 3.5 % is obtained.
Abstract: The luminosity calibration for the ATLAS detector at the LHC during pp collisions at root s = 7 TeV in 2010 and 2011 is presented. Evaluation of the luminosity scale is performed using several luminosity-sensitive detectors, and comparisons are made of the long-term stability and accuracy of this calibration applied to the pp collisions at root s = 7 TeV. A luminosity uncertainty of delta L/L = +/- 3.5 % is obtained for the 47 pb(-1) of data delivered to ATLAS in 2010, and an uncertainty of delta L/L = +/- 1.8 % is obtained for the 5.5 fb(-1) delivered in 2011.

Journal ArticleDOI
TL;DR: This study draws on two sources of knowledge to identify the attributes of a good interdisciplinary team: a published systematic review of the literature on interdisciplinary team work, and the perceptions of over 253 staff from 11 community rehabilitation and intermediate care teams. It proposes competency statements that an effective interdisciplinary team functioning at a high level should demonstrate.
Abstract: Interdisciplinary team work is increasingly prevalent, supported by policies and practices that bring care closer to the patient and challenge traditional professional boundaries. To date, there has been a great deal of emphasis on the processes of team work, and in some cases, outcomes. This study draws on two sources of knowledge to identify the attributes of a good interdisciplinary team: a published systematic review of the literature on interdisciplinary team work, and the perceptions of over 253 staff from 11 community rehabilitation and intermediate care teams in the UK. These data sources were merged using qualitative content analysis to arrive at a framework that identifies characteristics and proposes ten competencies that support effective interdisciplinary team work. Ten characteristics underpinning effective interdisciplinary team work were identified: positive leadership and management attributes; communication strategies and structures; personal rewards, training and development; appropriate resources and procedures; appropriate skill mix; supportive team climate; individual characteristics that support interdisciplinary team work; clarity of vision; quality and outcomes of care; and respecting and understanding roles. We propose competency statements that an effective interdisciplinary team functioning at a high level should demonstrate.

Journal ArticleDOI
TL;DR: In this article, ages are derived for 55 globular clusters (GCs) for which Hubble Space Telescope Advanced Camera for Surveys photometry is publicly available; for most of them, the assumed distances are based on fits of theoretical zero-age horizontal-branch loci to the lower bound of the observed distributions of HB stars, assuming reddenings from empirical dust maps and metallicities from the latest spectroscopic analyses.
Abstract: Ages have been derived for 55 globular clusters (GCs) for which Hubble Space Telescope Advanced Camera for Surveys photometry is publicly available. For most of them, the assumed distances are based on fits of theoretical zero-age horizontal-branch (ZAHB) loci to the lower bound of the observed distributions of HB stars, assuming reddenings from empirical dust maps and metallicities from the latest spectroscopic analyses. The age of the isochrone that provides the best fit to the stars in the vicinity of the turnoff (TO) is taken to be the best estimate of the cluster age. The morphology of isochrones between the TO and the beginning part of the subgiant branch (SGB) is shown to be nearly independent of age and chemical abundances. For well-defined color-magnitude diagrams (CMDs), the error bar arising just from the fitting of ZAHBs and isochrones is ± 0.25 Gyr, while that associated with distance and chemical abundance uncertainties is ~ ± 1.5-2 Gyr. The oldest GCs in our sample are predicted to have ages of 13.0 Gyr (subject to the aforementioned uncertainties). However, the main focus of this investigation is on relative GC ages. In conflict with recent findings based on the relative main-sequence fitting method, which have been studied in some detail and reconciled with our results, ages are found to vary from mean values of 12.5 Gyr at [Fe/H] ≲ −1.7 to 11 Gyr at [Fe/H] ≳ −1. At intermediate metallicities, the age-metallicity relation (AMR) appears to be bifurcated: one branch apparently contains clusters with disk-like kinematics, whereas the other branch, which is displaced to lower [Fe/H] values by 0.6 dex at a fixed age, is populated by clusters with halo-type orbits. The dispersion in age about each component of the AMR is ~ ± 0.5 Gyr. There is no apparent dependence of age on Galactocentric distance (R_G), nor is there a clear correlation of HB type with age.
As previously discovered in the case of M3 and M13, subtle variations have been found in the slope of the SGB in the CMDs of other metal-poor ([Fe/H] ≲ −1.5) GCs. They have been tentatively attributed to cluster-to-cluster differences in the abundance of helium. Curiously, GCs that have relatively steep M13-like SGBs tend to be massive systems, located at small R_G, that show the strongest evidence of in situ formation of multiple stellar populations. The clusters in the other group are typically low-mass systems (with 2-3 exceptions, including M3) that, at the present time, should not be able to retain the matter lost by mass-losing stars due either to the development of GC winds or to ram-pressure stripping by the halo interstellar medium. The apparent separation of the two groups in terms of their present-day gas retention properties is difficult to understand if all GCs were initially ~20 times their current masses. The lowest-mass systems, in particular, may have never been massive enough to retain enough gas to produce a significant population of second-generation stars. In this case, the observed light element abundance variations, which are characteristic of all GCs, were presumably present in the gas out of which the observed cluster stars formed.

Journal ArticleDOI
TL;DR: The role of regulatory processes in collaborative learning and how CSCL environments can be used for shared regulation of learning are examined and two strands of seemingly diverse research are illuminated that lay an important foundation for supporting and researching regulation in CSCL contexts.
Abstract: Despite intensive research in computer-supported collaborative learning (CSCL) over the last decade, there is relatively little research about how groups and individuals in groups engage, sustain, support, and productively regulate collaborative processes. This article examines the role of regulatory processes in collaborative learning and how CSCL environments can be used for shared regulation of learning. First, we establish the importance of regulation processes and introduce three types of regulation contributing to successful collaboration: self-, co-, and socially shared regulation of learning. Second, we illuminate two strands of seemingly diverse research that lay an important foundation for supporting and researching regulation in CSCL contexts establishing that (a) computer-based pedagogical tools used to successfully support regulation in individual learning contexts can be leveraged for collaborative task contexts, and (b) computer-based tools for supporting collaborative knowledge constructio...

Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, J. Abdallah4  +2912 moreInstitutions (183)
TL;DR: Two-particle correlations in relative azimuthal angle and pseudorapidity are measured using the ATLAS detector at the LHC and the resultant Δφ correlation is approximately symmetric about π/2, and is consistent with a dominant cos2Δφ modulation for all ΣE(T)(Pb) ranges and particle p(T).
Abstract: Two-particle correlations in relative azimuthal angle (Delta phi) and pseudorapidity (Delta eta) are measured in root S-NN = 5.02 TeV p + Pb collisions using the ATLAS detector at the LHC. The measurements are performed using approximately 1 mu b(-1) of data as a function of transverse momentum (p(T)) and the transverse energy (Sigma E-T(Pb)) summed over 3.1 < eta < 4.9 in the direction of the Pb beam. The correlation function, constructed from charged particles, exhibits a long-range (2 < vertical bar Delta eta vertical bar < 5) "near-side" (Delta phi similar to 0) correlation that grows rapidly with increasing Sigma E-T(Pb). A long-range "away-side" (Delta phi similar to pi) correlation, obtained by subtracting the expected contributions from recoiling dijets and other sources estimated using events with small Sigma E-T(Pb), is found to match the near-side correlation in magnitude, shape (in Delta eta and Delta phi) and Sigma E-T(Pb) dependence. The resultant Delta phi correlation is approximately symmetric about pi/2, and is consistent with a dominant cos2 Delta phi modulation for all Sigma E-T(Pb) ranges and particle p(T).
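The "dominant cos 2(Delta phi) modulation" can be made concrete with a toy Monte Carlo. The sketch below is illustrative only and is not ATLAS analysis code: it draws pair angles from a distribution with an assumed second-harmonic amplitude and recovers that amplitude as the Fourier coefficient v_{2,2} = ⟨cos 2Δφ⟩.

```python
import numpy as np

# Toy sketch (not ATLAS code): generate pair angles with an assumed
# cos(2*dphi) modulation and recover its strength as a Fourier coefficient.
rng = np.random.default_rng(0)
v22_true = 0.05  # assumed second-harmonic amplitude, chosen for illustration

# Sample dphi from dN/d(dphi) proportional to 1 + 2*v22*cos(2*dphi),
# using rejection sampling against the flat envelope with height 1 + 2*v22.
n = 200_000
dphi = rng.uniform(-np.pi, np.pi, n)
u = rng.uniform(0, 1, n)
keep = u * (1 + 2 * v22_true) < 1 + 2 * v22_true * np.cos(2 * dphi)
dphi = dphi[keep]

# The Fourier projection <cos(2*dphi)> returns the input coefficient.
v22_est = float(np.mean(np.cos(2 * dphi)))
print(f"v_2,2 estimate: {v22_est:.3f}")  # close to the input 0.05
```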

Journal ArticleDOI
TL;DR: In this paper, the authors present data products from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) data set and demonstrate that their data meet necessary requirements to fully exploit the survey for weak gravitational lensing analyses in connection with photometric redshift studies.
Abstract: We present data products from the Canada–France–Hawaii Telescope Lensing Survey (CFHTLenS). CFHTLenS is based on the Wide component of the Canada–France–Hawaii Telescope Legacy Survey (CFHTLS). It encompasses 154 deg^2 of deep, optical, high-quality, sub-arcsecond imaging data in the five optical filters u*g′r′i′z′. The scientific aims of the CFHTLenS team are weak gravitational lensing studies supported by photometric redshift estimates for the galaxies. This paper presents our data processing of the complete CFHTLenS data set. We were able to obtain a data set with very good image quality and high-quality astrometric and photometric calibration. Our external astrometric accuracy is between 60 and 70 mas with respect to Sloan Digital Sky Survey (SDSS) data, and the internal alignment in all filters is around 30 mas. Our average photometric calibration shows a dispersion of the order of 0.01–0.03 mag for g′r′i′z′ and about 0.04 mag for u* with respect to SDSS sources down to i_(SDSS) ≤ 21. We demonstrate in accompanying papers that our data meet necessary requirements to fully exploit the survey for weak gravitational lensing analyses in connection with photometric redshift studies. In the spirit of the CFHTLS, all our data products are released to the astronomical community via the Canadian Astronomy Data Centre at http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/community/CFHTLens/query.html. We give a description and how-to manuals of the public products which include image pixel data, source catalogues with photometric redshift estimates and all relevant quantities to perform weak lensing studies.

Journal ArticleDOI
TL;DR: It is proposed that dorsal-"where"/ventral-"what" frameworks that have been applied to WM maintenance also apply to executive processes of WM and WM can largely be simplified to a dual selection model.
Abstract: Working memory (WM) enables the online maintenance and manipulation of information and is central to intelligent cognitive functioning. Much research has investigated executive processes of WM in order to understand the operations that make WM “work.” However, there is yet little consensus regarding how executive processes of WM are organized. Here, we used quantitative meta-analysis to summarize data from 36 experiments that examined executive processes of WM. Experiments were categorized into 4 component functions central to WM: protecting WM from external distraction (distractor resistance), preventing irrelevant memories from intruding into WM (intrusion resistance), shifting attention within WM (shifting), and updating the contents of WM (updating). Data were also sorted by content (verbal, spatial, object). Meta-analytic results suggested that rather than dissociating into distinct functions, 2 separate frontal regions were recruited across diverse executive demands. One region was located dorsally in the caudal superior frontal sulcus and was especially sensitive to spatial content. The other was located laterally in the midlateral prefrontal cortex and showed sensitivity to nonspatial content. We propose that dorsal-“where”/ventral-“what” frameworks that have been applied to WM maintenance also apply to executive processes of WM. Hence, WM can largely be simplified to a dual selection model.

Journal ArticleDOI
TL;DR: An emerging research focus on within-person brain signal variability is providing novel insights, and offering highly predictive, complementary, and even orthogonal views of brain function in relation to human lifespan development, cognitive performance, and various clinical conditions.

Journal ArticleDOI
TL;DR: In this article, a likelihood-based method for measuring weak gravitational lensing shear in deep galaxy surveys is described and applied to the Canada-France-Hawaii Telescope (CFHT) Lensing Survey.
Abstract: A likelihood-based method for measuring weak gravitational lensing shear in deep galaxy surveys is described and applied to the Canada–France–Hawaii Telescope (CFHT) Lensing Survey (CFHTLenS). CFHTLenS comprises 154 deg^2 of multi-colour optical data from the CFHT Legacy Survey, with lensing measurements being made in the i′ band to a depth i′_(AB) < 24.7, for galaxies with signal-to-noise ratio ν_(SN) ≳ 10. The method is based on the lensfit algorithm described in earlier papers, but here we describe a full analysis pipeline that takes into account the properties of real surveys. The method creates pixel-based models of the varying point spread function (PSF) in individual image exposures. It fits PSF-convolved two-component (disc plus bulge) models to measure the ellipticity of each galaxy, with Bayesian marginalization over model nuisance parameters of galaxy position, size, brightness and bulge fraction. The method allows optimal joint measurement of multiple, dithered image exposures, taking into account imaging distortion and the alignment of the multiple measurements. We discuss the effects of noise bias on the likelihood distribution of galaxy ellipticity. Two sets of image simulations that mirror the observed properties of CFHTLenS have been created to establish the method's accuracy and to derive an empirical correction for the effects of noise bias.
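Although the full lensfit pipeline is far more elaborate, the essential Bayesian step can be illustrated in a few lines: fit a profile model on a parameter grid, marginalize the likelihood over a nuisance parameter (here, galaxy size), and take the posterior-mean ellipticity. Everything in this sketch, including the single-component Gaussian profile, the grids, and the noise level, is invented for illustration and is not the paper's method.

```python
import numpy as np

# Toy sketch of likelihood-based shape measurement with marginalization
# over a nuisance parameter. NOT the lensfit pipeline.
rng = np.random.default_rng(1)

x = np.linspace(-5, 5, 40)
X, Y = np.meshgrid(x, x)

def model(e, r):
    # Stand-in elliptical Gaussian: widths r*(1+e) and r*(1-e) along x and y.
    return np.exp(-0.5 * (X**2 / (r * (1 + e))**2 + Y**2 / (r * (1 - e))**2))

e_true, r_true, noise = 0.2, 2.0, 0.05
data = model(e_true, r_true) + rng.normal(0, noise, X.shape)

e_grid = np.linspace(-0.5, 0.5, 101)   # ellipticity: the parameter of interest
r_grid = np.linspace(1.0, 3.0, 41)     # size: the nuisance parameter

# Log-likelihood on the (e, r) grid; subtract the max before exponentiating
# to avoid underflow, then marginalize over the nuisance size parameter r.
loglike = np.array([[-0.5 * np.sum((data - model(e, r))**2) / noise**2
                     for r in r_grid] for e in e_grid])
like = np.exp(loglike - loglike.max())
post_e = like.sum(axis=1)
post_e /= post_e.sum()

e_hat = float(np.sum(e_grid * post_e))
print(f"posterior-mean ellipticity: {e_hat:.3f}")  # close to e_true = 0.2
```

In the real pipeline the marginalization runs over position, size, brightness and bulge fraction simultaneously, and the model is PSF-convolved; the grid marginalization above only illustrates the structure of that computation.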

Journal ArticleDOI
TL;DR: Findings from recent biomedical studies on animals in the laboratory demonstrate that exposure to predators or predator cues can induce ‘sustained psychological stress’ that is directly comparable to chronic stress in humans, and this has now become one of the most common stressors used in studies of the animal model of post-traumatic stress disorder (PTSD).
Abstract: Summary Predator-induced stress has been used to exemplify the concept of stress for close to a century because almost everyone can imagine the terror of fleeing for one's life from a lion or a tiger. Yet, because it has been assumed to be acute and transitory, predator-induced stress has not been much studied by either comparative physiologists or population ecologists, until relatively recently. The focus in biomedical research has always been on chronic stress in humans, which most comparative physiologists would agree results from ‘sustained psychological stress – linked to mere thoughts’ rather than ‘acute physical crises’ (like surviving a predator attack) or ‘chronic physical challenges’ (such as a shortage of food). Population ecologists have traditionally focused solely on the acute physical crisis of surviving a direct predator attack rather than whether the risk of such an attack may have a sustained effect on other demographic processes (e.g. the birth rate). Demographic experiments have now demonstrated that exposure to predators or predator cues can have sustained effects that extend to affecting birth and survival in free-living animals, and a subset of these have documented associated physiological stress effects. These and similar results have prompted some authors to speak of an ‘ecology of fear’, but others object that ‘the cognitive and emotional aspects of avoiding predation remain unknown’. Recent biomedical studies on animals in the laboratory have demonstrated that exposure to predators or predator cues can induce ‘sustained psychological stress’ that is directly comparable to chronic stress in humans, and this has now in fact become one of the most common stressors used in studies of the animal model of post-traumatic stress disorder (PTSD). 
We review these recent findings and suggest ways the laboratory techniques developed to measure the ‘neural circuitry of fear’ could be adapted for use on free-living animals in the field, in order to: (i) test whether predator risk induces ‘sustained psychological stress’ in wild animals, comparable to chronic stress in humans and (ii) directly investigate ‘the cognitive and emotional aspects of avoiding predation’ and hence the ‘ecology of fear’.

Journal ArticleDOI
TL;DR: In this paper, the cosmological constraints from 2D weak gravitational lensing by the large-scale structure in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) are presented.
Abstract: We present cosmological constraints from 2D weak gravitational lensing by the large-scale structure in the Canada–France–Hawaii Telescope Lensing Survey (CFHTLenS) which spans 154 deg^2 in five optical bands. Using accurate photometric redshifts and measured shapes for 4.2 million galaxies between redshifts of 0.2 and 1.3, we compute the 2D cosmic shear correlation function over angular scales ranging between 0.8 and 350 arcmin. Using non-linear models of the dark-matter power spectrum, we constrain cosmological parameters by exploring the parameter space with Population Monte Carlo sampling. The best constraints from lensing alone are obtained for the small-scale density-fluctuations amplitude σ_8 scaled with the total matter density Ω_m. For a flat Λ cold dark matter (ΛCDM) model we obtain σ_8(Ω_m/0.27)^0.6 = 0.79 ± 0.03. We combine the CFHTLenS data with 7-year Wilkinson Microwave Anisotropy Probe (WMAP7), baryonic acoustic oscillations (BAO) from SDSS-III (BOSS) and a Hubble Space Telescope distance-ladder prior on the Hubble constant to get joint constraints. For a flat ΛCDM model, we find Ω_m = 0.283 ± 0.010 and σ_8 = 0.813 ± 0.014. In the case of a curved wCDM universe, we obtain Ω_m = 0.27 ± 0.03, σ_8 = 0.83 ± 0.04, w_0 = −1.10 ± 0.15 and Ω_K = 0.006^(+0.006)_(− 0.004). We calculate the Bayesian evidence to compare flat and curved ΛCDM and dark-energy CDM models. From the combination of all four probes, we find models with curvature to be moderately disfavoured with respect to the flat case. A simple dark-energy model is indistinguishable from ΛCDM. Our results therefore do not necessitate any deviations from the standard cosmological model.
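The quoted lensing-only constraint is a degeneracy relation rather than a point estimate: any (Ω_m, σ_8) pair satisfying σ_8(Ω_m/0.27)^0.6 = 0.79 lies on the allowed ridge, which is why external probes are combined to break it. A few lines of arithmetic (illustrative only, using the central value without its uncertainty) evaluate the implied σ_8 for trial matter densities:

```python
# Illustrative evaluation of the CFHTLenS 2D lensing degeneracy relation
# sigma_8 * (Omega_m / 0.27)**0.6 = 0.79: for any assumed Omega_m, the
# implied sigma_8 follows. Uncertainties are ignored in this sketch.
def sigma8_on_degeneracy(omega_m, amplitude=0.79, pivot=0.27, slope=0.6):
    """sigma_8 on the lensing-alone constraint ridge at a given Omega_m."""
    return amplitude / (omega_m / pivot) ** slope

for om in (0.25, 0.27, 0.283, 0.30):
    print(f"Omega_m = {om:.3f} -> sigma_8 = {sigma8_on_degeneracy(om):.3f}")
```

Note that higher Ω_m implies lower σ_8 along the ridge; the joint analysis with WMAP7, BAO and the H_0 prior picks out one region of this banana-shaped contour.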