Showing papers by "ETH Zurich" published in 2009


Journal ArticleDOI
TL;DR: QUANTUM ESPRESSO as discussed by the authors is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave).
Abstract: QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.
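To make the kind of calculation QUANTUM ESPRESSO performs concrete, here is a minimal sketch that drives a plane-wave SCF total-energy run through the Atomic Simulation Environment's Espresso calculator; the pseudopotential filename, cutoff, and k-point mesh are illustrative assumptions rather than values from the paper, and a working QUANTUM ESPRESSO installation is assumed.

```python
# Minimal sketch: a silicon SCF total-energy calculation driven through ASE's
# Espresso calculator (assumes pw.x and the named pseudopotential are available;
# cutoff, k-point mesh, and pseudopotential file are illustrative choices).
from ase.build import bulk
from ase.calculators.espresso import Espresso

atoms = bulk("Si")  # primitive 2-atom diamond-structure silicon cell

calc = Espresso(
    pseudopotentials={"Si": "Si.pbe-n-rrkjus_psl.1.0.0.UPF"},  # assumed filename
    input_data={
        "control": {"calculation": "scf"},
        "system": {"ecutwfc": 40},        # plane-wave cutoff in Ry (illustrative)
        "electrons": {"conv_thr": 1e-8},  # SCF convergence threshold
    },
    kpts=(4, 4, 4),  # Monkhorst-Pack k-point mesh (illustrative)
)
atoms.calc = calc

energy = atoms.get_potential_energy()  # runs pw.x and parses the total energy
print(f"Total energy: {energy:.6f} eV")
```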

19,985 citations


Journal ArticleDOI
TL;DR: Additional co-authors include TJ Heaton, AG Hogg, KA Hughen, KF Kaiser, B Kromer, SW Manning, RW Reimer, DA Richards, JR Southon, S Talamo, CSM Turney, J van der Plicht, and CE Weyhenmeyer.
Abstract: Additional co-authors: TJ Heaton, AG Hogg, KA Hughen, KF Kaiser, B Kromer, SW Manning, RW Reimer, DA Richards, JR Southon, S Talamo, CSM Turney, J van der Plicht, CE Weyhenmeyer

13,605 citations


Journal ArticleDOI
TL;DR: A review of recent developments in LCA methods, focusing on areas that have seen intense methodological development in recent years and on some of the emerging issues.

2,683 citations


Journal ArticleDOI
TL;DR: The climate change that takes place due to increases in carbon dioxide concentration is largely irreversible for 1,000 years after emissions stop, showing that thermal expansion of the warming ocean provides a conservative lower limit to irreversible global average sea level rise.
Abstract: The severity of damaging human-induced climate change depends not only on the magnitude of the change but also on the potential for irreversibility. This paper shows that the climate change that takes place due to increases in carbon dioxide concentration is largely irreversible for 1,000 years after emissions stop. Following cessation of emissions, removal of atmospheric carbon dioxide decreases radiative forcing, but is largely compensated by slower loss of heat to the ocean, so that atmospheric temperatures do not drop significantly for at least 1,000 years. Among illustrative irreversible impacts that should be expected if atmospheric carbon dioxide concentrations increase from current levels near 385 parts per million by volume (ppmv) to a peak of 450–600 ppmv over the coming century are irreversible dry-season rainfall reductions in several regions comparable to those of the “dust bowl” era and inexorable sea level rise. Thermal expansion of the warming ocean provides a conservative lower limit to irreversible global average sea level rise of at least 0.4–1.0 m if 21st century CO2 concentrations exceed 600 ppmv and 0.6–1.9 m for peak CO2 concentrations exceeding ≈1,000 ppmv. Additional contributions from glaciers and ice sheets to future sea level rise are uncertain but may equal or exceed several meters over the next millennium or longer.

2,604 citations


Journal ArticleDOI
30 Apr 2009-Nature
TL;DR: A comprehensive probabilistic analysis aimed at quantifying GHG emission budgets for the 2000–50 period that would limit warming throughout the twenty-first century to below 2 °C, based on a combination of published distributions of climate system properties and observational constraints is provided.
Abstract: More than 100 countries have adopted a global warming limit of 2 degrees C or below (relative to pre-industrial levels) as a guiding principle for mitigation efforts to reduce climate change risks, impacts and damages. However, the greenhouse gas (GHG) emissions corresponding to a specified maximum warming are poorly known owing to uncertainties in the carbon cycle and the climate response. Here we provide a comprehensive probabilistic analysis aimed at quantifying GHG emission budgets for the 2000-50 period that would limit warming throughout the twenty-first century to below 2 degrees C, based on a combination of published distributions of climate system properties and observational constraints. We show that, for the chosen class of emission scenarios, both cumulative emissions up to 2050 and emission levels in 2050 are robust indicators of the probability that twenty-first century warming will not exceed 2 degrees C relative to pre-industrial temperatures. Limiting cumulative CO2 emissions over 2000-50 to 1,000 Gt CO2 yields a 25% probability of warming exceeding 2 degrees C, and a limit of 1,440 Gt CO2 yields a 50% probability, given a representative estimate of the distribution of climate system properties. As known 2000-06 CO2 emissions were approximately 234 Gt CO2, less than half the proven economically recoverable oil, gas and coal reserves can still be emitted up to 2050 to achieve such a goal. Recent G8 Communiques envisage halved global GHG emissions by 2050, for which we estimate a 12-45% probability of exceeding 2 degrees C, assuming 1990 as emission base year and a range of published climate sensitivity distributions. Emission levels in 2020 are a less robust indicator, but for the scenarios considered, the probability of exceeding 2 degrees C rises to 53-87% if global GHG emissions are still more than 25% above 2000 levels in 2020.
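As a back-of-the-envelope illustration of the budget arithmetic quoted above (not the paper's probabilistic analysis), the sketch below subtracts the emissions already incurred in 2000-06 from the two cumulative budgets given in the abstract.

```python
# Illustrative budget arithmetic using only figures quoted in the abstract above;
# the paper's actual analysis is a full probabilistic climate-carbon-cycle study.
emitted_2000_2006 = 234.0  # Gt CO2 already emitted in 2000-06 (from the abstract)

budgets = {
    "25% chance of exceeding 2 degrees C": 1000.0,  # Gt CO2 over 2000-50
    "50% chance of exceeding 2 degrees C": 1440.0,  # Gt CO2 over 2000-50
}

for label, total_budget in budgets.items():
    remaining = total_budget - emitted_2000_2006
    print(f"{label}: {remaining:.0f} Gt CO2 left for 2007-2050")
# Output: roughly 766 and 1206 Gt CO2, respectively.
```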

2,432 citations


Journal ArticleDOI
TL;DR: The results of this study indicate that risks to aquatic organisms may currently emanate from nano- Ag, nano-TiO(2), and nano-ZnO in sewage treatment effluents for all considered regions and for nano-Ag in surface waters.
Abstract: Engineered nanomaterials (ENM) are already used in many products and consequently released into environmental compartments. In this study, we calculated predicted environmental concentrations (PEC) based on a probabilistic material flow analysis from a life-cycle perspective of ENM-containing products. We modeled nano-TiO2, nano-ZnO, nano-Ag, carbon nanotubes (CNT), and fullerenes for the U.S., Europe and Switzerland. The environmental concentrations were calculated as probabilistic density functions and were compared to data from ecotoxicological studies. The simulated modes (most frequent values) range from 0.003 ng L−1 (fullerenes) to 21 ng L−1 (nano-TiO2) for surface waters and from 4 ng L−1 (fullerenes) to 4 μg L−1 (nano-TiO2) for sewage treatment effluents. For Europe and the U.S., the annual increase of ENMs on sludge-treated soil ranges from 1 ng kg−1 for fullerenes to 89 μg kg−1 for nano-TiO2. The results of this study indicate that risks to aquatic organisms may currently emanate from nano-Ag, nano-TiO2, and nano-ZnO in sewage treatment effluents for all considered regions and for nano-Ag in surface waters.
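The probabilistic character of such a material flow analysis can be illustrated with a toy Monte Carlo sketch; every parameter range below is an invented placeholder rather than the paper's data, and the real model tracks many more compartments and transfer coefficients.

```python
# Toy probabilistic material-flow sketch: propagate uncertain production, release,
# and dilution parameters into a distribution of predicted environmental
# concentrations (PEC). All numbers are illustrative placeholders.
import random

def simulate_pec(n_draws=100_000):
    pecs = []
    for _ in range(n_draws):
        production_t = random.triangular(100, 1000, 400)   # t/year of ENM produced
        release_frac = random.uniform(0.01, 0.10)          # fraction reaching wastewater
        removal_frac = random.uniform(0.90, 0.99)          # removal in sewage treatment
        effluent_m3 = random.uniform(5e9, 2e10)             # m^3/year of treated effluent
        mass_ng = production_t * 1e15 * release_frac * (1 - removal_frac)  # t -> ng
        volume_l = effluent_m3 * 1e3                         # m^3 -> L
        pecs.append(mass_ng / volume_l)                      # ng/L in effluent
    return pecs

pecs = sorted(simulate_pec())
print("median PEC:", pecs[len(pecs) // 2], "ng/L")
print("5th-95th percentile:", pecs[int(0.05 * len(pecs))], "-", pecs[int(0.95 * len(pecs))], "ng/L")
```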

2,258 citations


Book
16 Mar 2009
TL;DR: In this book, the authors present summary and combination tables for 13C NMR, 1H NMR, and heteronuclear NMR spectroscopy together with IR spectroscopy, mass spectrometry, and UV/Vis spectroscopy.
Abstract: Summary Tables.- Combination Tables.- 13C NMR Spectroscopy.- 1H NMR Spectroscopy.- Heteronuclear NMR Spectroscopy.- IR Spectroscopy.- Mass Spectrometry.- UV/Vis Spectroscopy.

2,180 citations


Journal ArticleDOI
TL;DR: This review explores the role of lakes in carbon cycling and global climate, examines the mechanisms influencing carbon pools and transformations in lakes, and discusses how the metabolism of carbon in inland waters is likely to change in response to climate.
Abstract: We explore the role of lakes in carbon cycling and global climate, examine the mechanisms influencing carbon pools and transformations in lakes, and discuss how the metabolism of carbon in the inland waters is likely to change in response to climate. Furthermore, we project changes as global climate change in the abundance and spatial distribution of lakes in the biosphere, and we revise the estimate for the global extent of carbon transformation in inland waters. This synthesis demonstrates that the global annual emissions of carbon dioxide from inland waters to the atmosphere are similar in magnitude to the carbon dioxide uptake by the oceans and that the global burial of organic carbon in inland water sediments exceeds organic carbon sequestration on the ocean floor. The role of inland waters in global carbon cycling and climate forcing may be changed by human activities, including construction of impoundments, which accumulate large amounts of carbon in sediments and emit large amounts of methane to the atmosphere. Methane emissions are also expected from lakes on melting permafrost. The synthesis presented here indicates that (1) inland waters constitute a significant component of the global carbon cycle, (2) their contribution to this cycle has significantly changed as a result of human activities, and (3) they will continue to change in response to future climate change causing decreased as well as increased abundance of lakes as well as increases in the number of aquatic impoundments.

2,140 citations


Journal ArticleDOI
TL;DR: Anaemia affects one-quarter of the world’s population and is concentrated in preschool-aged children and women, making it a global public health problem; however, data on the relative contributions of causal factors are lacking, which makes the problem difficult to address effectively.
Abstract: Objective To provide current global and regional estimates of anaemia prevalence and number of persons affected in the total population and by population subgroup. Setting and design We used anaemia prevalence data from the WHO Vitamin and Mineral Nutrition Information System for 1993-2005 to generate anaemia prevalence estimates for countries with data representative at the national level or at the first administrative level that is below the national level. For countries without eligible data, we employed regression-based estimates, which used the UN Human Development Index (HDI) and other health indicators. We combined country estimates, weighted by their population, to estimate anaemia prevalence at the global level, by UN Regions and by category of human development. Results Survey data covered 48.8 % of the global population, 76.1 % of preschool-aged children, 69.0 % of pregnant women and 73.5 % of non-pregnant women. The estimated global anaemia prevalence is 24.8 % (95 % CI 22.9, 26.7 %), affecting 1.62 billion people (95 % CI 1.50, 1.74 billion). Estimated anaemia prevalence is 47.4 % (95 % CI 45.7, 49.1 %) in preschool-aged children, 41.8 % (95 % CI 39.9, 43.8 %) in pregnant women and 30.2 % (95 % CI 28.7, 31.6 %) in non-pregnant women. In numbers, 293 million (95 % CI 282, 303 million) preschool-aged children, 56 million (95 % CI 54, 59 million) pregnant women and 468 million (95 % CI 446, 491 million) non-pregnant women are affected. Conclusion Anaemia affects one-quarter of the world's population and is concentrated in preschool-aged children and women, making it a global public health problem. Data on relative contributions of causal factors are lacking, however, which makes it difficult to effectively address the problem.
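The population-weighted pooling step described in the abstract can be sketched in a few lines; the country names, populations, and prevalences below are invented placeholders, not WHO data.

```python
# Toy illustration of combining country-level anaemia prevalence estimates,
# weighted by population, into a pooled estimate (placeholder numbers only).
countries = [
    # (name, population in millions, anaemia prevalence as a fraction)
    ("Country A", 50.0, 0.40),
    ("Country B", 120.0, 0.25),
    ("Country C", 8.0, 0.10),
]

total_pop = sum(pop for _, pop, _ in countries)
weighted_prev = sum(pop * prev for _, pop, prev in countries) / total_pop
affected_millions = weighted_prev * total_pop

print(f"Pooled prevalence: {weighted_prev:.1%}")
print(f"People affected: {affected_millions:.0f} million")
```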

2,134 citations


Journal ArticleDOI
TL;DR: It is proposed that open source software development is an exemplar of a compound "private-collective" model of innovation that contains elements of both the private investment and the collective action models and can offer society the "best of both worlds" under many conditions.
Abstract: Currently two models of innovation are prevalent in organization science. The "private investment" model assumes returns to the innovator results from private goods and efficient regimes of intellectual property protection. The "collective action" model assumes that under conditions of market failure, innovators collaborate in order to produce a public good. The phenomenon of open source software development shows that users program to solve their own as well as shared technical problems, and freely reveal their innovations without appropriating private returns from selling the software. In this paper we propose that open source software development is an exemplar of a compound model of innovation that contains elements of both the private investment and the collective action models. We describe a new set of research questions this model raises for scholars in organization science. We offer some details regarding the types of data available for open source projects in order to ease access for researchers who are unfamiliar with these, and also offer some advice on conducting empirical studies on open source software development processes.

1,933 citations


Journal ArticleDOI
TL;DR: The purpose of this article is to introduce and comment on the debate about organizational knowledge creation theory, and to help scholars make sense of this debate by synthesizing six fundamental questions on organizational knowledge creation theory.
Abstract: Nonaka's paper [1994. A dynamic theory of organizational knowledge creation. Organ. Sci.5(1) 14--37] contributed to the concepts of “tacit knowledge” and “knowledge conversion” in organization science. We present work that shaped the development of organizational knowledge creation theory and identify two premises upon which more than 15 years of extensive academic work has been conducted: (1) tacit and explicit knowledge can be conceptually distinguished along a continuum; (2) knowledge conversion explains, theoretically and empirically, the interaction between tacit and explicit knowledge. Recently, scholars have raised several issues regarding the understanding of tacit knowledge as well as the interaction between tacit and explicit knowledge in the theory. The purpose of this article is to introduce and comment on the debate about organizational knowledge creation theory. We aim to help scholars make sense of this debate by synthesizing six fundamental questions on organizational knowledge creation theory. Next, we seek to elaborate and advance the theory by responding to questions and incorporating new research. Finally, we discuss implications of our endeavor for organization science.

Posted Content
TL;DR: In this paper, an efficient and reliable methodology for crystal structure prediction, merging ab initio total energy calculations and a specifically devised evolutionary algorithm, was developed, which allows one to predict the most stable crystal structure and a number of low-energy metastable structures for a given compound at any P-T conditions without requiring any experimental input.
Abstract: We have developed an efficient and reliable methodology for crystal structure prediction, merging ab initio total-energy calculations and a specifically devised evolutionary algorithm. This method allows one to predict the most stable crystal structure and a number of low-energy metastable structures for a given compound at any P-T conditions without requiring any experimental input. Extremely high success rate has been observed in a few tens of tests done so far, including ionic, covalent, metallic, and molecular structures with up to 40 atoms in the unit cell. We have been able to resolve some important problems in high-pressure crystallography and report a number of new high-pressure crystal structures. Physical reasons for the success of this methodology are discussed.
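As a schematic of the evolutionary search strategy described above (not the authors' actual algorithm or energy model), the sketch below evolves a toy population of candidate "structures" toward lower energy via selection, heredity, and mutation; the toy energy function and operators stand in for ab initio relaxation and the structure-aware variation operators used in practice.

```python
# Schematic evolutionary search for a low-"energy" structure. A real crystal
# structure prediction run would relax each candidate with DFT and use
# lattice-aware heredity/mutation operators; here a toy vector and a toy energy
# stand in for a structure and its ab initio total energy.
import random

N_ATOMS = 8          # toy "structure" = N_ATOMS fractional coordinates in 1D
POP_SIZE = 20
N_GENERATIONS = 50

def toy_energy(structure):
    # Placeholder for an ab initio total-energy calculation.
    return sum((x - 0.5) ** 2 for x in structure)

def random_structure():
    return [random.random() for _ in range(N_ATOMS)]

def heredity(parent_a, parent_b):
    # Combine "slabs" of two parents (crude analogue of structural heredity).
    cut = random.randint(1, N_ATOMS - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(structure, strength=0.05):
    return [min(1.0, max(0.0, x + random.gauss(0.0, strength))) for x in structure]

population = [random_structure() for _ in range(POP_SIZE)]
for _ in range(N_GENERATIONS):
    population.sort(key=toy_energy)
    survivors = population[: POP_SIZE // 2]          # keep the fittest half
    children = [mutate(heredity(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = min(population, key=toy_energy)
print("best toy energy:", toy_energy(best))
```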

Journal ArticleDOI
TL;DR: In this article, improved versions of the relations between supermassive black hole mass (M BH) and host-galaxy bulge velocity dispersion (σ) and luminosity (L; the M-σ and M-L relations), based on 49 M BH measurements and 19 upper limits, were derived.
Abstract: We derive improved versions of the relations between supermassive black hole mass (M_BH) and host-galaxy bulge velocity dispersion (σ) and luminosity (L; the M-σ and M-L relations), based on 49 M_BH measurements and 19 upper limits. Particular attention is paid to recovery of the intrinsic scatter (ε_0) in both relations. We find log(M_BH/M_⊙) = α + β log(σ/200 km s^−1) with (α, β, ε_0) = (8.12 ± 0.08, 4.24 ± 0.41, 0.44 ± 0.06) for all galaxies and (α, β, ε_0) = (8.23 ± 0.08, 3.96 ± 0.42, 0.31 ± 0.06) for ellipticals. The results for ellipticals are consistent with previous studies, but the intrinsic scatter recovered for spirals is significantly larger. The scatter inferred reinforces the need for its consideration when calculating local black hole mass function based on the M-σ relation, and further implies that there may be substantial selection bias in studies of the evolution of the M-σ relation. We estimate the M-L relationship as log(M_BH/M_⊙) = α + β log(L_V/10^11 L_⊙,V) with (α, β, ε_0) = (8.95 ± 0.11, 1.11 ± 0.18, 0.38 ± 0.09), using only early-type galaxies. These results appear to be insensitive to a wide range of assumptions about the measurement errors and the distribution of intrinsic scatter. We show that culling the sample according to the resolution of the black hole's sphere of influence biases the relations to larger mean masses, larger slopes, and incorrect intrinsic residuals.
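Using the all-galaxy fit quoted in the abstract, the relation can be evaluated directly to convert a measured velocity dispersion into a mean black-hole mass estimate; the sketch below ignores the quoted intrinsic scatter and parameter uncertainties.

```python
# Evaluate the M-sigma relation quoted in the abstract:
# log10(M_BH / M_sun) = alpha + beta * log10(sigma / 200 km/s),
# with (alpha, beta) = (8.12, 4.24) for the all-galaxy fit.
import math

ALPHA, BETA = 8.12, 4.24

def mbh_from_sigma(sigma_km_s):
    """Black-hole mass in solar masses implied by the mean relation."""
    return 10 ** (ALPHA + BETA * math.log10(sigma_km_s / 200.0))

for sigma in (100.0, 200.0, 300.0):
    print(f"sigma = {sigma:5.0f} km/s  ->  M_BH ~ {mbh_from_sigma(sigma):.2e} M_sun")
# At sigma = 200 km/s the relation returns 10**8.12, about 1.3e8 solar masses.
```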

Journal ArticleDOI
10 Jul 2009-Science
TL;DR: Using adult mice in which hippocampal neurogenesis was ablated, this work found specific impairments in spatial discrimination with two behavioral assays: a spatial navigation radial arm maze task and a spatial, but non-navigable, task in the mouse touch screen.
Abstract: The dentate gyrus (DG) of the mammalian hippocampus is hypothesized to mediate pattern separation-the formation of distinct and orthogonal representations of mnemonic information-and also undergoes neurogenesis throughout life. How neurogenesis contributes to hippocampal function is largely unknown. Using adult mice in which hippocampal neurogenesis was ablated, we found specific impairments in spatial discrimination with two behavioral assays: (i) a spatial navigation radial arm maze task and (ii) a spatial, but non-navigable, task in the mouse touch screen. Mice with ablated neurogenesis were impaired when stimuli were presented with little spatial separation, but not when stimuli were more widely separated in space. Thus, newborn neurons may be necessary for normal pattern separation function in the DG of adult mice.

Journal ArticleDOI
TL;DR: A series of routines is presented that can be interfaced with the most popular classical molecular dynamics codes through a simple patching procedure, leaving the user free to exploit many different MD engines depending on the system simulated and on the computational resources available.

Proceedings ArticleDOI
01 Sep 2009
TL;DR: A model of dynamic social behavior, inspired by models developed for crowd simulation, is introduced, trained with videos recorded from bird's-eye view at busy locations, and applied as a motion model for multi-people tracking from a vehicle-mounted camera.
Abstract: Object tracking typically relies on a dynamic model to predict the object's location from its past trajectory. In crowded scenarios a strong dynamic model is particularly important, because more accurate predictions allow for smaller search regions, which greatly simplifies data association. Traditional dynamic models predict the location for each target solely based on its own history, without taking into account the remaining scene objects. Collisions are resolved only when they happen. Such an approach ignores important aspects of human behavior: people are driven by their future destination, take into account their environment, anticipate collisions, and adjust their trajectories at an early stage in order to avoid them. In this work, we introduce a model of dynamic social behavior, inspired by models developed for crowd simulation. The model is trained with videos recorded from bird's-eye view at busy locations, and applied as a motion model for multi-people tracking from a vehicle-mounted camera. Experiments on real sequences show that accounting for social interactions and scene knowledge improves tracking performance, especially during occlusions.
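A minimal sketch of a social-interaction motion model in the spirit described above (attraction toward a destination plus repulsion from nearby pedestrians), in the style of crowd-simulation social-force models; the constants, state layout, and Euler update are illustrative assumptions, not the trained model from the paper.

```python
# Toy social-force-style prediction step for pedestrian tracking: each person is
# pulled toward a goal and pushed away from nearby people. Constants and the
# Euler update are illustrative, not the trained model from the paper.
import math

def predict_step(people, dt=0.4, k_goal=1.0, k_rep=2.0, rep_range=1.0):
    """people: list of dicts with 'pos', 'vel', 'goal' as (x, y) tuples."""
    new_people = []
    for i, p in enumerate(people):
        px, py = p["pos"]
        gx, gy = p["goal"]
        # Attraction toward the destination.
        d = math.hypot(gx - px, gy - py) or 1e-9
        fx, fy = k_goal * (gx - px) / d, k_goal * (gy - py) / d
        # Repulsion from every other pedestrian, decaying with distance.
        for j, q in enumerate(people):
            if i == j:
                continue
            qx, qy = q["pos"]
            r = math.hypot(px - qx, py - qy) or 1e-9
            w = k_rep * math.exp(-r / rep_range) / r
            fx += w * (px - qx)
            fy += w * (py - qy)
        vx, vy = p["vel"][0] + dt * fx, p["vel"][1] + dt * fy
        new_people.append({"pos": (px + dt * vx, py + dt * vy),
                           "vel": (vx, vy), "goal": p["goal"]})
    return new_people

# Two pedestrians on a collision course drift apart over a few prediction steps.
state = [{"pos": (0.0, 0.0), "vel": (1.0, 0.0), "goal": (10.0, 0.0)},
         {"pos": (10.0, 0.3), "vel": (-1.0, 0.0), "goal": (0.0, 0.3)}]
for _ in range(5):
    state = predict_step(state)
print(state[0]["pos"], state[1]["pos"])
```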

Journal ArticleDOI
TL;DR: In this article, a chi2 template-fitting method was used and calibrated with large spectroscopic samples from VLT-VIMOS and Keck-DEIMOS.
Abstract: We present accurate photometric redshifts in the 2-deg2 COSMOS field. The redshifts are computed with 30 broad, intermediate, and narrow bands covering the UV (GALEX), Visible-NIR (Subaru, CFHT, UKIRT and NOAO) and mid-IR (Spitzer/IRAC). A chi2 template-fitting method (Le Phare) was used and calibrated with large spectroscopic samples from VLT-VIMOS and Keck-DEIMOS. We develop and implement a new method which accounts for the contributions from emission lines (OII, Hbeta, Halpha and Lyalpha) to the spectral energy distributions (SEDs). The treatment of emission lines improves the photo-z accuracy by a factor of 2.5. Comparison of the derived photo-z with 4148 spectroscopic redshifts (i.e. Delta z = zs - zp) indicates a dispersion of sigma_{Delta z/(1+zs)}=0.007 at i<22.5, a factor of 2-6 times more accurate than earlier photo-z in the COSMOS, CFHTLS and COMBO-17 survey fields. At fainter magnitudes i<24 and z<1.25, the accuracy is sigma_{Delta z/(1+zs)}=0.012. The deep NIR and IRAC coverage enables the photo-z to be extended to z~2 albeit with a lower accuracy (sigma_{Delta z/(1+zs)}=0.06 at i~24). The redshift distribution of large magnitude-selected samples is derived and the median redshift is found to range from z=0.66 at 22
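The core of a chi2 template-fitting photometric-redshift estimate of the kind described above can be sketched as follows; the templates, bands, and fluxes are invented placeholders, and a production code such as Le Phare additionally handles priors, emission-line templates, and photometric zero-point calibration.

```python
# Minimal chi-square template fit: for each trial redshift and template, find the
# best scaling of model fluxes to observed fluxes and keep the (z, template) pair
# with the lowest chi-square. Fluxes, errors, and templates are placeholders.

def chi2(obs, err, model):
    # Optimal (least-squares) amplitude for this model, then the resulting chi^2.
    a = sum(o * m / e**2 for o, m, e in zip(obs, model, err)) / \
        sum(m * m / e**2 for m, e in zip(model, err))
    return sum(((o - a * m) / e) ** 2 for o, m, e in zip(obs, model, err))

def photo_z(obs_flux, obs_err, template_library, z_grid):
    """template_library: dict name -> function(z) returning model fluxes per band."""
    best = None
    for z in z_grid:
        for name, template in template_library.items():
            c2 = chi2(obs_flux, obs_err, template(z))
            if best is None or c2 < best[0]:
                best = (c2, z, name)
    return best  # (chi2_min, z_best, template_best)

# Placeholder 3-band example: a fake template whose colors redden with redshift.
library = {"toy_red": lambda z: [1.0, 1.0 + 0.5 * z, 1.0 + 1.0 * z]}
obs, err = [1.0, 1.4, 1.8], [0.05, 0.05, 0.05]
print(photo_z(obs, err, library, [i * 0.01 for i in range(300)]))  # best z near 0.8
```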

Journal ArticleDOI
TL;DR: The Spectroscopic Imaging Survey in the near-infrared (near-IR) with SINFONI (SINS) of high-redshift galaxies is presented in this article.
Abstract: We present the Spectroscopic Imaging survey in the near-infrared (near-IR) with SINFONI (SINS) of high-redshift galaxies. With 80 objects observed and 63 detected in at least one rest-frame optical nebular emission line, mainly Hα, SINS represents the largest survey of spatially resolved gas kinematics, morphologies, and physical properties of star-forming galaxies at z ~ 1-3. We describe the selection of the targets, the observations, and the data reduction. We then focus on the "SINS Hα sample," consisting of 62 rest-UV/optically selected sources at 1.3 < z < 2.6 for which we targeted primarily the Hα and [N II] emission lines. Only ≈30% of this sample had previous near-IR spectroscopic observations. The galaxies were drawn from various imaging surveys with different photometric criteria; as a whole, the SINS Hα sample covers a reasonable representation of massive M_* ≳ 10^(10) M_☉ star-forming galaxies at z ≈ 1.5-2.5, with some bias toward bluer systems compared to pure K-selected samples due to the requirement of secure optical redshift. The sample spans 2 orders of magnitude in stellar mass and in absolute and specific star formation rates, with median values ≈3 × 10^(10) M_☉, ≈70 M_☉ yr^(–1), and ≈3 Gyr^(–1). The ionized gas distribution and kinematics are spatially resolved on scales ranging from ≈1.5 kpc for adaptive optics assisted observations to typically ≈4-5 kpc for seeing-limited data. The Hα morphologies tend to be irregular and/or clumpy. About one-third of the SINS Hα sample galaxies are rotation-dominated yet turbulent disks, another one-third comprises compact and velocity dispersion-dominated objects, and the remaining galaxies are clear interacting/merging systems; the fraction of rotation-dominated systems increases among the more massive part of the sample. The Hα luminosities and equivalent widths suggest on average roughly twice higher dust attenuation toward the H II regions relative to the bulk of the stars, and comparable current and past-averaged star formation rates.

Journal ArticleDOI
TL;DR: The presented method is useful for environmental decision-support in the production of water-intensive products as well as for environmentally responsible value-chain management.
Abstract: A method for assessing the environmental impacts of freshwater consumption was developed. This method considers damages to three areas of protection: human health, ecosystem quality, and resources. The method can be used within most existing life-cycle impact assessment (LCIA) methods. The relative importance of water consumption was analyzed by integrating the method into the Eco-indicator-99 LCIA method. The relative impact of water consumption in LCIA was analyzed with a case study on worldwide cotton production. The importance of regionalized characterization factors for water use was also examined in the case study. In arid regions, water consumption may dominate the aggregated life-cycle impacts of cotton-textile production. Therefore, the consideration of water consumption is crucial in life-cycle assessment (LCA) studies that include water-intensive products, such as agricultural goods. A regionalized assessment is necessary, since the impacts of water use vary greatly as a function of location. The presented method is useful for environmental decision-support in the production of water-intensive products as well as for environmentally responsible value-chain management.
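Conceptually, a regionalized water-use assessment multiplies the water consumed in each region of a product's life cycle by a region-specific characterization factor and sums the results; the sketch below illustrates that aggregation with invented regions and factors, not those of the paper or of Eco-indicator 99.

```python
# Toy regionalized LCIA aggregation: impact = sum over regions of
# (water consumed in region) x (region-specific characterization factor).
# Regions, volumes, and factors are invented placeholders.

inventory_m3 = {          # water consumption per region in the product's life cycle
    "arid_region": 120.0,
    "temperate_region": 300.0,
}

characterization = {      # impact per m^3 consumed (arbitrary damage units)
    "arid_region": 1.5,   # scarcity makes each m^3 count more
    "temperate_region": 0.1,
}

impact = sum(inventory_m3[r] * characterization[r] for r in inventory_m3)
print(f"Aggregated water-use impact: {impact:.1f} damage units")
# The arid region dominates despite its lower volume, illustrating why a
# site-generic (non-regionalized) factor can misrank water-intensive products.
```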

Journal ArticleDOI
TL;DR: It is emphasised that global warming has enabled alien species to expand into regions in which they previously could not survive and reproduce, and that management practices regarding the occurrence of 'new' species could range from complete eradication to tolerance.
Abstract: Climate change and biological invasions are key processes affecting global biodiversity, yet their effects have usually been considered separately. Here, we emphasise that global warming has enabled alien species to expand into regions in which they previously could not survive and reproduce. Based on a review of climate-mediated biological invasions of plants, invertebrates, fishes and birds, we discuss the ways in which climate change influences biological invasions. We emphasise the role of alien species in a more dynamic context of shifting species' ranges and changing communities. Under these circumstances, management practices regarding the occurrence of 'new' species could range from complete eradication to tolerance and even consideration of the 'new' species as an enrichment of local biodiversity and key elements to maintain ecosystem services.

Journal ArticleDOI
Tanja Manser1
TL;DR: This review examines current research on teamwork in highly dynamic domains of healthcare such as operating rooms, intensive care, emergency medicine, or trauma and resuscitation teams with a focus on aspects relevant to the quality and safety of patient care.
Abstract: Aims/background This review examines current research on teamwork in highly dynamic domains of healthcare such as operating rooms, intensive care, emergency medicine, or trauma and resuscitation teams with a focus on aspects relevant to the quality and safety of patient care. Results Evidence from three main areas of research supports the relationship between teamwork and patient safety: (1) Studies investigating the factors contributing to critical incidents and adverse events have shown that teamwork plays an important role in the causation and prevention of adverse events. (2) Research focusing on healthcare providers' perceptions of teamwork demonstrated that (a) staff's perceptions of teamwork and attitudes toward safety-relevant team behavior were related to the quality and safety of patient care and (b) perceptions of teamwork and leadership style are associated with staff well-being, which may impact clinicians' ability to provide safe patient care. (3) Observational studies on teamwork behaviors related to high clinical performance have identified patterns of communication, coordination, and leadership that support effective teamwork. Conclusion In recent years, research using diverse methodological approaches has led to significant progress in team research in healthcare. The challenge for future research is to further develop and validate instruments for team performance assessment and to develop sound theoretical models of team performance in dynamic medical domains integrating evidence from all three areas of team research identified in this review. This will help to improve team training efforts and aid the design of clinical work systems supporting effective teamwork and safe patient care.

Journal ArticleDOI
01 May 2009-Science
TL;DR: Using experimental grassland plant communities, it is found that addition of light to the grassland understory prevented the loss of biodiversity caused by eutrophication, and there was no detectable role for competition for soil resources in diversity loss.
Abstract: Human activities have increased the availability of nutrients in terrestrial and aquatic ecosystems. In grasslands, this eutrophication causes loss of plant species diversity, but the mechanism of this loss has been difficult to determine. Using experimental grassland plant communities, we found that addition of light to the grassland understory prevented the loss of biodiversity caused by eutrophication. There was no detectable role for competition for soil resources in diversity loss. Thus, competition for light is a major mechanism of plant diversity loss after eutrophication and explains the particular threat of eutrophication to plant diversity. Our conclusions have implications for grassland management and conservation policy and underscore the need to control nutrient enrichment if plant diversity is to be preserved.

Journal ArticleDOI
TL;DR: ABF swimmers represent the first demonstration of microscopic artificial swimmers that use helical propulsion and are of interest in fundamental research and for biomedical applications.
Abstract: Inspired by the natural design of bacterial flagella, we report artificial bacterial flagella (ABF) that have a comparable shape and size to their organic counterparts and can swim in a controllable fashion using weak applied magnetic fields. The helical swimmer consists of a helical tail resembling the dimensions of a natural flagellum and a thin soft-magnetic “head” on one end. The swimming locomotion of ABF is precisely controlled by three orthogonal electromagnetic coil pairs. Microsphere manipulation is performed, and the thrust force generated by an ABF is analyzed. ABF swimmers represent the first demonstration of microscopic artificial swimmers that use helical propulsion. Self-propelled devices such as these are of interest in fundamental research and for biomedical applications.

Journal ArticleDOI
TL;DR: In this paper, a review of the state of the art on the critical chloride content (chloride threshold) of reinforced concrete is presented, highlighting the strong need for a practice-related test method and focusing especially on experimental procedures.

Journal ArticleDOI
TL;DR: In the International Nanofluid Property Benchmark Exercise (INPBE), the thermal conductivity of identical samples of colloidally stable dispersions of nanoparticles, or "nanofluids," was measured by over 30 organizations worldwide, using a variety of experimental approaches, including the transient hot wire method, steady-state methods, and optical methods.
Abstract: This article reports on the International Nanofluid Property Benchmark Exercise, or INPBE, in which the thermal conductivity of identical samples of colloidally stable dispersions of nanoparticles or “nanofluids,” was measured by over 30 organizations worldwide, using a variety of experimental approaches, including the transient hot wire method, steady-state methods, and optical methods. The nanofluids tested in the exercise were comprised of aqueous and nonaqueous basefluids, metal and metal oxide particles, near-spherical and elongated particles, at low and high particle concentrations. The data analysis reveals that the data from most organizations lie within a relatively narrow band (±10% or less) about the sample average with only few outliers. The thermal conductivity of the nanofluids was found to increase with particle concentration and aspect ratio, as expected from classical theory. There are (small) systematic differences in the absolute values of the nanofluid thermal conductivity among the various experimental approaches; however, such differences tend to disappear when the data are normalized to the measured thermal conductivity of the basefluid. The effective medium theory developed for dispersed particles by Maxwell in 1881 and recently generalized by Nan et al. [J. Appl. Phys. 81, 6692 (1997)], was found to be in good agreement with the experimental data, suggesting that no anomalous enhancement of thermal conductivity was achieved in the nanofluids tested in this exercise.
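For reference, the classical Maxwell effective-medium prediction mentioned in the abstract (for well-dispersed spherical particles) can be evaluated in a few lines; the particle and base-fluid conductivities below are illustrative values, not INPBE sample data.

```python
# Maxwell (1881) effective-medium estimate of the thermal conductivity of a
# dilute suspension of well-dispersed spherical particles in a base fluid.
# Property values below are illustrative, not the INPBE sample data.

def maxwell_k_eff(k_fluid, k_particle, phi):
    """Effective conductivity for particle volume fraction phi (spheres)."""
    num = k_particle + 2 * k_fluid + 2 * phi * (k_particle - k_fluid)
    den = k_particle + 2 * k_fluid - phi * (k_particle - k_fluid)
    return k_fluid * num / den

k_water = 0.61    # W/(m K), base fluid (approximate room-temperature value)
k_alumina = 30.0  # W/(m K), illustrative oxide particle conductivity

for phi in (0.01, 0.03, 0.05):
    k = maxwell_k_eff(k_water, k_alumina, phi)
    print(f"phi = {phi:.2f}: k_eff/k_f = {k / k_water:.3f}")
# The enhancement grows roughly linearly with phi (about 3*phi for highly
# conductive particles), with no anomalous jump, consistent with the
# exercise's finding of no anomalous enhancement.
```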

Journal ArticleDOI
Martin Wild1
TL;DR: A review of the evidence for decadal changes in surface solar radiation ("global dimming" and "brightening"), their magnitude, their possible causes, their representation in climate models, and their potential implications for climate change can be found in this paper.
Abstract: [1] There is increasing evidence that the amount of solar radiation incident at the Earth's surface is not stable over the years but undergoes significant decadal variations. Here I review the evidence for these changes, their magnitude, their possible causes, their representation in climate models, and their potential implications for climate change. The various studies analyzing long-term records of surface radiation measurements suggest a widespread decrease in surface solar radiation between the 1950s and 1980s (“global dimming”), with a partial recovery more recently at many locations (“brightening”). There are also some indications for an “early brightening” in the first part of the 20th century. These variations are in line with independent long-term observations of sunshine duration, diurnal temperature range, pan evaporation, and, more recently, satellite-derived estimates, which add credibility to the existence of these changes and their larger-scale significance. Current climate models, in general, tend to simulate these decadal variations to a much lesser degree. The origins of these variations are internal to the Earth's atmosphere and not externally forced by the Sun. Variations are not only found under cloudy but also under cloud-free atmospheres, indicative of an anthropogenic contribution through changes in aerosol emissions governed by economic developments and air pollution regulations. The relative importance of aerosols, clouds, and aerosol-cloud interactions may differ depending on region and pollution level. Highlighted are further potential implications of dimming and brightening for climate change, which may affect global warming, the components and intensity of the hydrological cycle, the carbon cycle, and the cryosphere among other climate elements.

Journal ArticleDOI
TL;DR: In this article, the authors merge knowledge management, absorptive capacity, and dynamic capabilities to arrive at an integrative perspective, which considers knowledge exploration, retention, and exploitation inside and outside a firm's boundaries.
Abstract: We merge research into knowledge management, absorptive capacity, and dynamic capabilities to arrive at an integrative perspective, which considers knowledge exploration, retention, and exploitation inside and outside a firm's boundaries. By complementing the concept of absorptive capacity, we advance towards a capability-based framework for open innovation processes. We identify the following six ‘knowledge capacities’ as a firm's critical capabilities of managing internal and external knowledge in open innovation processes: inventive, absorptive, transformative, connective, innovative, and desorptive capacity. ‘Knowledge management capacity’ is a dynamic capability, which reconfigures and realigns the knowledge capacities. It refers to a firm's ability to successfully manage its knowledge base over time. The concept may be regarded as a framework for open innovation, as a complement to absorptive capacity, and as a move towards understanding dynamic capabilities for managing knowledge. On this basis, it contributes to explaining interfirm heterogeneity in knowledge and alliance strategies, organizational boundaries, and innovation performance.

Proceedings ArticleDOI
11 Oct 2009
TL;DR: This work investigates a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing.
Abstract: Commodity computer systems contain more and more processor cores and exhibit increasingly diverse architectural tradeoffs, including memory hierarchies, interconnects, instruction sets and variants, and IO configurations. Previous high-performance computing systems have scaled in specific cases, but the dynamic nature of modern client and server workloads, coupled with the impossibility of statically optimizing an OS for all workloads and hardware variants pose serious challenges for operating system structures. We argue that the challenge of future multicore hardware is best met by embracing the networked nature of the machine, rethinking OS architecture using ideas from distributed systems. We investigate a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing. We have implemented a multikernel OS to show that the approach is promising, and we describe how traditional scalability problems for operating systems (such as memory management) can be effectively recast using messages and can exploit insights from distributed systems and networking. An evaluation of our prototype on multicore systems shows that, even on present-day machines, the performance of a multikernel is comparable with a conventional OS, and can scale better to support future hardware.
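To illustrate the structural idea of replicated per-core state kept consistent by explicit messages rather than shared memory, here is a toy sketch that uses OS processes and queues as stand-ins for cores and inter-core channels; it is a conceptual analogy, not Barrelfish code.

```python
# Conceptual sketch of the multikernel idea: each "core" runs its own kernel
# process holding a private replica of OS state, and replicas are kept
# consistent by exchanging messages instead of sharing memory. Processes and
# queues stand in for cores and inter-core channels (an analogy, not Barrelfish).
import time
from multiprocessing import Process, Queue

def core_kernel(core_id, inbox, all_queues):
    state = {}  # private replica of "OS state" on this core
    while True:
        msg = inbox.get()
        if msg["op"] == "stop":
            break
        if msg["op"] == "update":
            state[msg["key"]] = msg["value"]  # apply the update locally
            if msg.get("origin") == core_id:
                # Originating core propagates the update to the other replicas.
                for other_id, q in all_queues.items():
                    if other_id != core_id:
                        q.put({**msg, "origin": None})
        elif msg["op"] == "dump":
            print(f"core {core_id} replica: {state}")

if __name__ == "__main__":
    n_cores = 3
    queues = {i: Queue() for i in range(n_cores)}
    procs = [Process(target=core_kernel, args=(i, queues[i], queues))
             for i in range(n_cores)]
    for p in procs:
        p.start()

    # Core 0 "updates" a piece of OS state; the change reaches the other
    # replicas only via messages, never through shared memory.
    queues[0].put({"op": "update", "key": "capability_42", "value": "mapped", "origin": 0})
    time.sleep(0.5)  # crude settling time so propagation completes before dumping

    for i in range(n_cores):
        queues[i].put({"op": "dump"})
        queues[i].put({"op": "stop"})
    for p in procs:
        p.join()
```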

Journal ArticleDOI
17 Jul 2009-Science
TL;DR: It is found that peptide and protein hormones in secretory granules of the endocrine system are stored in an amyloid-like cross–β-sheet–rich conformation, which means functional amyloids in the pituitary and other organs can contribute to normal cell and tissue physiology.
Abstract: Amyloids are highly organized cross–β-sheet–rich protein or peptide aggregates that are associated with pathological conditions including Alzheimer’s disease and type II diabetes. However, amyloids may also have a normal biological function, as demonstrated by fungal prions, which are involved in prion replication, and the amyloid protein Pmel17, which is involved in mammalian skin pigmentation. We found that peptide and protein hormones in secretory granules of the endocrine system are stored in an amyloid-like cross–β-sheet–rich conformation. Thus, functional amyloids in the pituitary and other organs can contribute to normal cell and tissue physiology.