
Showing papers from the University of Vienna published in 2013


Journal ArticleDOI
TL;DR: A perspective on the context and evolutionary significance of hybridization during speciation is offered, highlighting issues of current interest and debate and suggesting that the Dobzhansky–Muller model of hybrid incompatibilities requires a broader interpretation.
Abstract: Hybridization has many and varied impacts on the process of speciation. Hybridization may slow or reverse differentiation by allowing gene flow and recombination. It may accelerate speciation via adaptive introgression or cause near-instantaneous speciation by allopolyploidization. It may have multiple effects at different stages and in different spatial contexts within a single speciation event. We offer a perspective on the context and evolutionary significance of hybridization during speciation, highlighting issues of current interest and debate. In secondary contact zones, it is uncertain if barriers to gene flow will be strengthened or broken down due to recombination and gene flow. Theory and empirical evidence suggest the latter is more likely, except within and around strongly selected genomic regions. Hybridization may contribute to speciation through the formation of new hybrid taxa, whereas introgression of a few loci may promote adaptive divergence and so facilitate speciation. Gene regulatory networks, epigenetic effects and the evolution of selfish genetic material in the genome suggest that the Dobzhansky-Muller model of hybrid incompatibilities requires a broader interpretation. Finally, although the incidence of reinforcement remains uncertain, this and other interactions in areas of sympatry may have knock-on effects on speciation both within and outside regions of hybridization.

1,715 citations



Journal ArticleDOI
TL;DR: To survey the burden of liver disease in Europe and its causes, 260 epidemiological studies published in the last five years were reviewed; each of the four major causes identified is amenable to prevention and treatment.

1,052 citations


Journal ArticleDOI
TL;DR: Recon 2, a community-driven, consensus 'metabolic reconstruction', is described, which is the most comprehensive representation of human metabolism that is applicable to computational modeling and has improved topological and functional features.
Abstract: Multiple models of human metabolism have been reconstructed, but each represents only a subset of our knowledge. Here we describe Recon 2, a community-driven, consensus 'metabolic reconstruction', which is the most comprehensive representation of human metabolism that is applicable to computational modeling. Compared with its predecessors, the reconstruction has improved topological and functional features, including ~2× more reactions and ~1.7× more unique metabolites. Using Recon 2 we predicted changes in metabolite biomarkers for 49 inborn errors of metabolism with 77% accuracy when compared to experimental data. Mapping metabolomic data and drug information onto Recon 2 demonstrates its potential for integrating and analyzing diverse data types. Using protein expression data, we automatically generated a compendium of 65 cell type–specific models, providing a basis for manual curation or investigation of cell-specific metabolic properties. Recon 2 will facilitate many future biomedical studies and is freely available at http://humanmetabolism.org/.
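For readers who want to work with such reconstructions computationally, the sketch below shows how a genome-scale model can be loaded and analysed with the open-source COBRApy package; the file name "Recon2.xml" and the use of the default objective are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (assumption-laden): load an SBML export of a genome-scale
# reconstruction such as Recon 2 with COBRApy and run flux balance analysis.
# The file name "Recon2.xml" and the default objective are illustrative only.
from cobra.io import read_sbml_model

model = read_sbml_model("Recon2.xml")                 # parse the reconstruction
print(len(model.reactions), len(model.metabolites))   # size of the network

solution = model.optimize()                           # flux balance analysis
print(solution.objective_value)                       # optimal objective flux
```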

1,002 citations


Book ChapterDOI
01 Jan 2013
TL;DR: In this paper, the authors present the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems, focusing on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes, from centralized to decentralized control, and practical run-time verification & validation.
Abstract: The goal of this roadmap paper is to summarize the state-of-the-art and identify research challenges when developing, deploying and managing self-adaptive software systems. Instead of dealing with a wide range of topics associated with the field, we focus on four essential topics of self-adaptation: design space for self-adaptive solutions, software engineering processes for self-adaptive systems, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems. For each topic, we present an overview, suggest future directions, and focus on selected challenges. This paper complements and extends a previous roadmap on software engineering for self-adaptive systems published in 2009 covering a different set of topics, and reflecting in part on the previous paper. This roadmap is one of the many results of the Dagstuhl Seminar 10431 on Software Engineering for Self-Adaptive Systems, which took place in October 2010.
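The feedback loops discussed in this roadmap are often organised as a MAPE-K cycle (monitor, analyse, plan, execute over shared knowledge). The following Python sketch only illustrates that pattern under invented names and thresholds; it is not code from the paper.

```python
# Illustrative MAPE-K style adaptation loop; all names and thresholds are invented.
class ManagedSystem:
    def __init__(self):
        self.load = 0.9        # dummy sensor reading (fraction of capacity in use)
        self.replicas = 1

    def sense(self):
        return {"load": self.load, "replicas": self.replicas}

    def scale_to(self, replicas):
        self.replicas = replicas
        self.load *= 0.5       # pretend extra capacity halves the load

def mape_k_step(system, knowledge):
    symptoms = system.sense()                                   # Monitor
    overloaded = symptoms["load"] > knowledge["max_load"]       # Analyse
    target = symptoms["replicas"] + 1 if overloaded else symptoms["replicas"]  # Plan
    if target != symptoms["replicas"]:
        system.scale_to(target)                                 # Execute

knowledge = {"max_load": 0.8}
system = ManagedSystem()
for _ in range(3):
    mape_k_step(system, knowledge)
print(system.replicas, round(system.load, 3))                   # adapted state
```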

783 citations


Journal ArticleDOI
Joao Almeida1, Joao Almeida2, Siegfried Schobesberger3, Andreas Kürten1, Ismael K. Ortega3, Oona Kupiainen-Määttä3, Arnaud P. Praplan4, Alexey Adamov3, António Amorim5, F. Bianchi4, Martin Breitenlechner6, A. David2, Josef Dommen4, Neil M. Donahue7, Andrew J. Downard8, Eimear M. Dunne9, Jonathan Duplissy3, Sebastian Ehrhart1, Richard C. Flagan8, Alessandro Franchin3, Roberto Guida2, Jani Hakala3, Armin Hansel6, Martin Heinritzi6, Henning Henschel3, Tuija Jokinen3, Heikki Junninen3, Maija Kajos3, Juha Kangasluoma3, Helmi Keskinen10, Agnieszka Kupc11, Theo Kurtén3, Alexander N. Kvashin12, Ari Laaksonen10, Ari Laaksonen13, Katrianne Lehtipalo3, Markus Leiminger1, Johannes Leppä13, Ville Loukonen3, Vladimir Makhmutov12, Serge Mathot2, Matthew J. McGrath14, Tuomo Nieminen15, Tuomo Nieminen3, Tinja Olenius3, Antti Onnela2, Tuukka Petäjä3, Francesco Riccobono4, Ilona Riipinen16, Matti P. Rissanen3, Linda Rondo1, Taina Ruuskanen3, Filipe Duarte Santos5, Nina Sarnela3, Simon Schallhart3, R. Schnitzhofer6, John H. Seinfeld8, Mario Simon1, Mikko Sipilä3, Mikko Sipilä15, Yuri Stozhkov12, Frank Stratmann17, António Tomé5, Jasmin Tröstl4, Georgios Tsagkogeorgas17, Petri Vaattovaara10, Yrjö Viisanen13, Annele Virtanen10, Aron Vrtala11, Paul E. Wagner11, Ernest Weingartner4, Heike Wex17, Christina Williamson1, Daniela Wimmer1, Daniela Wimmer3, Penglin Ye7, Taina Yli-Juuti3, Kenneth S. Carslaw9, Markku Kulmala3, Markku Kulmala15, Joachim Curtius1, Urs Baltensperger4, Douglas R. Worsnop, Hanna Vehkamäki3, Jasper Kirkby1, Jasper Kirkby2 
17 Oct 2013-Nature
TL;DR: The results show that, in regions of the atmosphere near amine sources, both amines and sulphur dioxide should be considered when assessing the impact of anthropogenic activities on particle formation.
Abstract: Nucleation of aerosol particles from trace atmospheric vapours is thought to provide up to half of global cloud condensation nuclei(1). Aerosols can cause a net cooling of climate by scattering sun ...

738 citations


Journal ArticleDOI
TL;DR: In this paper, Aaronson and Arkhipov's model of computation with photons in integrated optical circuits was implemented and the authors set a benchmark for a type of quantum computer that can potentially outperform a conventional computer by using only a few photons and linear optical elements.
Abstract: The boson-sampling problem is experimentally solved by implementing Aaronson and Arkhipov's model of computation with photons in integrated optical circuits. These results set a benchmark for a type of quantum computer that can potentially outperform a conventional computer by using only a few photons and linear optical elements.
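The computational hardness underlying this benchmark comes from the fact that, in the Aaronson-Arkhipov model, output probabilities are given by matrix permanents. A compact statement of that relation, for single photons injected into distinct input modes, is:

```latex
% Output probability in boson sampling: U_S is the submatrix of the interferometer
% unitary U selected by the occupied input and output modes, and s_i are the output
% occupation numbers. Computing the permanent is #P-hard, which is the source of the
% conjectured quantum advantage.
P(S) \;=\; \frac{\left|\operatorname{Perm}(U_S)\right|^{2}}{s_1!\, s_2! \cdots s_m!}
```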

710 citations


Patent
15 Mar 2013
TL;DR: This patent discloses a DNA-targeting RNA that comprises a targeting sequence and, together with a modifying polypeptide, provides for site-specific modification of a target DNA and/or a polypeptide associated with the target DNA.
Abstract: The present disclosure provides a DNA-targeting RNA that comprises a targeting sequence and, together with a modifying polypeptide, provides for site-specific modification of a target DNA and/or a polypeptide associated with the target DNA. The present disclosure further provides site-specific modifying polypeptides. The present disclosure further provides methods of site-specific modification of a target DNA and/or a polypeptide associated with the target DNA. The present disclosure provides methods of modulating transcription of a target nucleic acid in a target cell, generally involving contacting the target nucleic acid with an enzymatically inactive Cas9 polypeptide and a DNA-targeting RNA. Kits and compositions for carrying out the methods are also provided. The present disclosure provides genetically modified cells that produce Cas9; and Cas9 transgenic non-human multicellular organisms.

702 citations


Journal ArticleDOI
TL;DR: The concept of sliding semilandmarks is introduced and the algorithm can be used to estimate missing data in incomplete specimens and applications and limitations of this method are discussed.
Abstract: Quantitative shape analysis using geometric morphometrics is based on the statistical analysis of landmark coordinates. Many structures, however, cannot be quantified using traditional landmarks. Semilandmarks make it possible to quantify two- or three-dimensional homologous curves and surfaces, and analyse them together with traditional landmarks. Here we first introduce the concept of sliding semilandmarks and discuss applications and limitations of this method. In a second part we show how the sliding semilandmark algorithm can be used to estimate missing data in incomplete specimens.
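As a point of reference, ordinary (non-sliding) Procrustes superimposition of landmark configurations can be sketched in a few lines with SciPy; the sliding step described in the paper, in which semilandmarks are allowed to slide along curves or surfaces before superimposition, is not shown, and the landmark data below are invented.

```python
# Sketch: ordinary Procrustes superimposition of two 2D landmark configurations.
# Sliding semilandmarks would additionally relax point positions along curves/surfaces.
import numpy as np
from scipy.spatial import procrustes

reference = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 4 landmarks
specimen  = np.array([[0.1, 0.0], [1.2, 0.1], [1.1, 1.0], [0.0, 0.9]])

mtx1, mtx2, disparity = procrustes(reference, specimen)  # translate, scale, rotate
print(round(disparity, 4))  # residual shape difference after superimposition
```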

646 citations


Journal ArticleDOI
TL;DR: It is recommended that broad-scale models use a CUE value of 0.30, unless there is evidence for lower values as a result of pervasive nutrient limitations, as well as environmental drivers, to predict the CUE of microbial communities.
Abstract: Carbon use efficiency (CUE) is a fundamental parameter for ecological models based on the physiology of microorganisms. CUE determines energy and material flows to higher trophic levels, conversion of plant-produced carbon into microbial products and rates of ecosystem carbon storage. Thermodynamic calculations support a maximum CUE value of ~ 0.60 (CUE max). Kinetic and stoichiometric constraints on microbial growth suggest that CUE in multi-resource limited natural systems should approach ~ 0.3 (CUE max/2). However, the mean CUE values reported for aquatic and terrestrial ecosystems differ by twofold (~ 0.26 vs. ~ 0.55) because the methods used to estimate CUE in aquatic and terrestrial systems generally differ and soil estimates are less likely to capture the full maintenance costs of community metabolism given the difficulty of measurements in water-limited environments. Moreover, many simulation models lack adequate representation of energy spilling pathways and stoichiometric constraints on metabolism, which can also lead to overestimates of CUE. We recommend that broad-scale models use a CUE value of 0.30, unless there is evidence for lower values as a result of pervasive nutrient limitations. Ecosystem models operating at finer scales should consider resource composition, stoichiometric constraints and biomass composition, as well as environmental drivers, to predict the CUE of microbial communities.
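The quantity under discussion has a simple operational definition, which is worth keeping in mind when comparing the aquatic and terrestrial estimates quoted above:

```latex
% Carbon use efficiency: the fraction of carbon taken up that is allocated to growth
% rather than respired. The paper recommends ~0.3 for broad-scale models.
\mathrm{CUE} \;=\; \frac{C_{\mathrm{growth}}}{C_{\mathrm{growth}} + C_{\mathrm{respiration}}}
```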

602 citations


ReportDOI
TL;DR: The International Linear Collider Technical Design Report (TDR) describes in four volumes the physics case and the design of a 500 GeV center-of-mass energy linear electron-positron collider based on superconducting radio-frequency technology using Niobium cavities as the accelerating structures.
Author(s): Baer, Howard; Barklow, Tim; Fujii, Keisuke; Gao, Yuanning; Hoang, Andre; Kanemura, Shinya; List, Jenny; Logan, Heather E; Nomerotski, Andrei; Perelstein, Maxim; Peskin, Michael E; Poschl, Roman; Reuter, Jurgen; Riemann, Sabine; Savoy-Navarro, Aurore; Servant, Geraldine; Tait, Tim MP; Yu, Jaehoon
Abstract: The International Linear Collider Technical Design Report (TDR) describes in four volumes the physics case and the design of a 500 GeV centre-of-mass energy linear electron-positron collider based on superconducting radio-frequency technology using Niobium cavities as the accelerating structures. The accelerator can be extended to 1 TeV and also run as a Higgs factory at around 250 GeV and on the Z0 pole. A comprehensive value estimate of the accelerator is given, together with associated uncertainties. It is shown that no significant technical issues remain to be solved. Once a site is selected and the necessary site-dependent engineering is carried out, construction can begin immediately. The TDR also gives baseline documentation for two high-performance detectors that can share the ILC luminosity by being moved into and out of the beam line in a "push-pull" configuration. These detectors, ILD and SiD, are described in detail. They form the basis for a world-class experimental programme that promises to increase significantly our understanding of the fundamental processes that govern the evolution of the Universe.

Journal ArticleDOI
TL;DR: A hybrid model combining logistic regression, a Markov chain (MC), and cellular automata (CA) was designed to improve on standard logistic regression and to create a probability surface of spatiotemporal states of built-up land use in Tehran, Iran, for the years 2006, 2016, and 2026.
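As a purely illustrative sketch of how such a hybrid can be wired together (not the authors' implementation), the snippet below combines a logistic-regression suitability surface with a simple cellular-automata neighbourhood rule; all rasters, coefficients, and thresholds are invented.

```python
# Hypothetical logistic-regression + cellular-automata land-use transition step.
# All data, coefficients, and thresholds are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
slope = rng.random((50, 50))             # dummy explanatory raster
dist_to_road = rng.random((50, 50))      # dummy explanatory raster
built = rng.random((50, 50)) < 0.1       # initial built-up mask

# Logistic suitability surface (placeholder coefficients).
z = 1.5 - 2.0 * slope - 3.0 * dist_to_road
p_suit = 1.0 / (1.0 + np.exp(-z))

def ca_step(built, p_suit, threshold=0.5):
    # Fraction of built-up cells in each 3x3 neighbourhood (including the cell itself).
    n, m = built.shape
    padded = np.pad(built.astype(float), 1)
    neigh = sum(padded[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0
    return built | (p_suit * neigh > threshold)   # CA-modulated conversion rule

built_next = ca_step(built, p_suit)
print(int(built.sum()), int(built_next.sum()))    # built-up cells before / after
```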

Journal ArticleDOI
17 Jan 2013-Nature
TL;DR: The application of an exact technique, full configuration interaction quantum Monte Carlo to a variety of real solids, providing reference many-electron energies that are used to rigorously benchmark the standard hierarchy of quantum-chemical techniques, up to the ‘gold standard’ coupled-cluster ansatz.
Abstract: The properties of all materials arise largely from the quantum mechanics of their constituent electrons under the influence of the electric field of the nuclei. The solution of the underlying many-electron Schrodinger equation is a ‘non-polynomial hard’ problem, owing to the complex interplay of kinetic energy, electron–electron repulsion and the Pauli exclusion principle. The dominant computational method for describing such systems has been density functional theory. Quantum-chemical methods—based on an explicit ansatz for the many-electron wavefunctions and, hence, potentially more accurate—have not been fully explored in the solid state owing to their computational complexity, which ranges from strongly exponential to high-order polynomial in system size. Here we report the application of an exact technique, full configuration interaction quantum Monte Carlo to a variety of real solids, providing reference many-electron energies that are used to rigorously benchmark the standard hierarchy of quantum-chemical techniques, up to the ‘gold standard’ coupled-cluster ansatz, including single, double and perturbative triple particle–hole excitation operators. We show the errors in cohesive energies predicted by this method to be small, indicating the potential of this computationally polynomial scaling technique to tackle current solid-state problems. Recent developments that reduce the computational cost and scaling of wavefunction-based quantum-chemical techniques open the way to the successful application of such techniques to a variety of real-world solids. Computational descriptions of solid-state materials are currently dominated by methods based on density functional theory. An attractive and potentially more accurate approach would be to adopt the wavefunction-based methods of quantum chemistry, although these have not received as much attention because of the computational complexities involved. Now George Booth and colleagues show how recent developments that serve to reduce the computational cost and scaling of such quantum-chemical techniques open the way to their successful application to a variety of real-world solids.
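The benchmark quantity in this comparison, the cohesive energy, has a simple definition that is useful to keep in mind when reading the error estimates:

```latex
% Cohesive energy per atom: the energy gained by assembling the solid from isolated
% atoms, with E_solid the total energy of an N-atom cell.
E_{\mathrm{coh}} \;=\; E_{\mathrm{atom}} \;-\; \frac{E_{\mathrm{solid}}}{N}
```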

Journal ArticleDOI
TL;DR: In this article, the authors present a series of date and diffusion measurements that document the importance of alpha dose, which they interpret to be correlated with accumulated radiation damage, on He diffusivity.
Abstract: Accurate thermochronologic interpretation of zircon (U-Th)/He dates requires a realistic and practically useful understanding of He diffusion kinetics in natural zircon, ideally across the range of variation that characterizes typically dated specimens. Here we present a series of date and diffusion measurements that document the importance of alpha dose, which we interpret to be correlated with accumulated radiation damage, on He diffusivity. This effect is manifest in both date-effective uranium (eU) correlations among zircon grains from single hand samples and in diffusion experiments on pairs of crystallographically oriented slabs of zircon with alpha doses ranging from ∼10^16 to 10^19 α/g. We interpret these results as due to two contrasting effects of radiation damage in zircon, both of which have much larger effects on He diffusivity and thermal sensitivity of the zircon (U-Th)/He system than crystallographic anisotropy. Between 1.2 × 10^16 α/g and 1.4 × 10^18 α/g, the frequency factor, D0, measured in the c-axis parallel direction decreases by roughly four orders of magnitude, causing He diffusivity to decrease dramatically (for example by three orders of magnitude at temperatures between 140 and 220 °C). Above ∼2 × 10^18 α/g, however, activation energy decreases by a factor of roughly two, and diffusivity increases by about nine orders of magnitude by 8.2 × 10^18 α/g. We interpret these two trends with a model that describes the increasing tortuosity of diffusion pathways with progressive damage accumulation, which in turn causes decreases in He diffusivity at low damage. At high damage, increasing diffusivity results from damage zone interconnection and consequential shrinking of the effective diffusion domain size. Our model predicts that the bulk zircon (U-Th)/He closure temperature (Tc) increases from about 140 to 220 °C between alpha doses of 10^16 to 10^18 α/g, followed by a dramatic decrease in Tc above this dose. Linking this parameterization to one describing damage annealing as a function of time and temperature, we can model the coevolution of damage, He diffusivity, and (U-Th)/He date of zircon. This model generates positive or negative date-eU correlations depending on the extent of damage in each grain and the date-eU sample's time-temperature history.
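The kinetic quantities varied in this study, the frequency factor D0 and the activation energy Ea, enter through the standard Arrhenius description of thermally activated diffusion:

```latex
% Arrhenius form for He diffusivity in zircon; the paper reports how D_0 and E_a
% shift with alpha dose (radiation damage), and hence how closure temperature changes.
D(T) \;=\; D_0 \exp\!\left(-\frac{E_a}{R\,T}\right)
```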

Journal ArticleDOI
TL;DR: In this article, the authors quantified, across four countries of contrasting climatic and soil conditions in Europe, how differences in soil food web composition resulting from land use systems (intensive wheat rotation, extensive rotation, and permanent grassland) influence the functioning of soils and the ecosystem services that they deliver.
Abstract: Intensive land use reduces the diversity and abundance of many soil biota, with consequences for the processes that they govern and the ecosystem services that these processes underpin. Relationships between soil biota and ecosystem processes have mostly been found in laboratory experiments and rarely are found in the field. Here, we quantified, across four countries of contrasting climatic and soil conditions in Europe, how differences in soil food web composition resulting from land use systems (intensive wheat rotation, extensive rotation, and permanent grassland) influence the functioning of soils and the ecosystem services that they deliver. Intensive wheat rotation consistently reduced the biomass of all components of the soil food web across all countries. Soil food web properties strongly and consistently predicted processes of C and N cycling across land use systems and geographic locations, and they were a better predictor of these processes than land use. Processes of carbon loss increased with soil food web properties that correlated with soil C content, such as earthworm biomass and fungal/bacterial energy channel ratio, and were greatest in permanent grassland. In contrast, processes of N cycling were explained by soil food web properties independent of land use, such as arbuscular mycorrhizal fungi and bacterial channel biomass. Our quantification of the contribution of soil organisms to processes of C and N cycling across land use systems and geographic locations shows that soil biota need to be included in C and N cycling models and highlights the need to map and conserve soil biodiversity across the world.

Journal ArticleDOI
08 Aug 2013-Nature
TL;DR: The continuous position measurement of a solid-state, optomechanical system fabricated from a silicon microchip and comprising a micromechanical resonator coupled to a nanophotonic cavity is described, observing squeezing of the reflected light’s fluctuation spectrum at a level 4.5 ± 0.2 per cent below that of vacuum noise.
Abstract: Monitoring a mechanical object’s motion, even with the gentle touch of light, fundamentally alters its dynamics. The experimental manifestation of this basic principle of quantum mechanics, its link to the quantum nature of light and the extension of quantum measurement to the macroscopic realm have all received extensive attention over the past half-century. The use of squeezed light, with quantum fluctuations below that of the vacuum field, was proposed nearly three decades ago as a means of reducing the optical read-out noise in precision force measurements. Conversely, it has also been proposed that a continuous measurement of a mirror’s position with light may itself give rise to squeezed light. Such squeezed-light generation has recently been demonstrated in a system of ultracold gas-phase atoms whose centre-of-mass motion is analogous to the motion of a mirror. Here we describe the continuous position measurement of a solid-state, optomechanical system fabricated from a silicon microchip and comprising a micromechanical resonator coupled to a nanophotonic cavity. Laser light sent into the cavity is used to measure the fluctuations in the position of the mechanical resonator at a measurement rate comparable to its resonance frequency and greater than its thermal decoherence rate. Despite the mechanical resonator’s highly excited thermal state (10^4 phonons), we observe, through homodyne detection, squeezing of the reflected light’s fluctuation spectrum at a level 4.5 ± 0.2 percent below that of vacuum noise over a bandwidth of a few megahertz around the mechanical resonance frequency of 28 megahertz. With further device improvements, on-chip squeezing at significant levels should be possible, making such integrated microscale devices well suited for precision metrology applications.
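For readers used to quoting squeezing in decibels, the reported 4.5% noise reduction below the vacuum level corresponds to roughly 0.2 dB; a two-line check:

```python
# Convert the reported squeezing (4.5% below vacuum/shot noise) to decibels.
import math
print(round(10 * math.log10(1 - 0.045), 2))   # ≈ -0.20 dB relative to shot noise
```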

Journal ArticleDOI
09 May 2013-Nature
TL;DR: In this article, the authors used a highly efficient source of photon pairs and superconducting transition-edge sensors in a Bell inequality violation experiment with entangled photons, making the photon the first physical system for which all the main loopholes have been closed.
Abstract: The fair-sampling loophole is closed in a Bell inequality violation experiment with entangled photons, making the photon the first physical system for which all the main loopholes have been closed. So-called Bell experiments are used to discriminate between classical ('local realistic') and quantum models of measurable phenomena. In practice, they are subject to various loopholes (arising from non-ideal experimental conditions) that can render the results inconclusive. These authors used a highly efficient source of photon pairs and superconducting transition-edge sensors in a Bell inequality experiment that closes the 'fair-sampling' loophole for entangled photons. The results conflict with local realism, while making the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments. The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint—namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light [1, 2]. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble [3]. To do this, we use the Eberhard form of Bell’s inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms [4]. Technical improvements of the photon source [5, 6] and high-efficiency transition-edge sensors [7] were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.
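For orientation, the most familiar Bell inequality is the CHSH form shown below; the experiment described above instead uses the Eberhard form, which tolerates lower collection efficiency, but the logical structure (a local-realistic bound violated by quantum correlations) is the same.

```latex
% CHSH form of a Bell inequality: E(a,b) are correlation functions for analyser
% settings a, a' and b, b'. Local realism bounds S by 2; quantum mechanics allows
% values up to 2*sqrt(2).
S \;=\; \bigl|E(a,b) + E(a,b') + E(a',b) - E(a',b')\bigr| \;\le\; 2
```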

Journal ArticleDOI
08 May 2013-PLOS ONE
TL;DR: A new conceptual model for ecosystem risk assessment founded on a synthesis of relevant ecological theories is presented, providing a consistent, practical and theoretically grounded framework for establishing a systematic Red List of the world’s ecosystems.
Abstract: An understanding of risks to biodiversity is needed for planning action to slow current rates of decline and secure ecosystem services for future human use. Although the IUCN Red List criteria provide an effective assessment protocol for species, a standard global assessment of risks to higher levels of biodiversity is currently limited. In 2008, IUCN initiated development of risk assessment criteria to support a global Red List of ecosystems. We present a new conceptual model for ecosystem risk assessment founded on a synthesis of relevant ecological theories. To support the model, we review key elements of ecosystem definition and introduce the concept of ecosystem collapse, an analogue of species extinction. The model identifies four distributional and functional symptoms of ecosystem risk as a basis for assessment criteria: A) rates of decline in ecosystem distribution; B) restricted distributions with continuing declines or threats; C) rates of environmental (abiotic) degradation; and D) rates of disruption to biotic processes. A fifth criterion, E) quantitative estimates of the risk of ecosystem collapse, enables integrated assessment of multiple processes and provides a conceptual anchor for the other criteria. We present the theoretical rationale for the construction and interpretation of each criterion. The assessment protocol and threat categories mirror those of the IUCN Red List of species. A trial of the protocol on terrestrial, subterranean, freshwater and marine ecosystems from around the world shows that its concepts are workable and its outcomes are robust, that required data are available, and that results are consistent with assessments carried out by local experts and authorities. The new protocol provides a consistent, practical and theoretically grounded framework for establishing a systematic Red List of the world’s ecosystems. This will complement the Red List of species and strengthen global capacity to report on and monitor the status of biodiversity.

Journal ArticleDOI
TL;DR: In this paper, the most widely used empirical oxygen calibrations, O3N2 and N2, by using new direct abundance measurements are reviewed, and the expected uncertainty of these calibrations as a function of the index value or abundance derived is analyzed.
Abstract: The use of integral field spectroscopy has recently made it possible to measure the emission line fluxes of an increasingly large number of star-forming galaxies, both locally and at high redshift. Many studies have used these fluxes to derive the gas-phase metallicity of the galaxies by applying so-called strong-line methods. However, the metallicity indicators that these datasets use were empirically calibrated using only a few direct abundance data points (T_e-based measurements). Furthermore, a precise determination of the prediction intervals of these indicators is commonly lacking in these calibrations. Such limitations might lead to systematic errors in determining the gas-phase metallicity, especially at high redshift, which might have a strong impact on our understanding of the chemical evolution of the Universe. The main goal of this study is to review the most widely used empirical oxygen calibrations, O3N2 and N2, by using new direct abundance measurements. We pay special attention to (1) the expected uncertainty of these calibrations as a function of the index value or abundance derived and (2) the presence of possible systematic offsets. This is possible thanks to the analysis of the most ambitious compilation of T_e-based H II regions to date. This new dataset compiles the T_e-based abundances of 603 H II regions extracted from the literature and also includes new measurements from the CALIFA survey. Besides providing new and improved empirical calibrations for the gas abundance, we also present a comparison of our revisited calibrations with a total of 3423 additional CALIFA H II complexes with abundances derived using the ONS calibration from the literature. The combined analysis of T_e-based and ONS abundances allows us to derive the most accurate calibration to date for both the O3N2 and N2 single-ratio indicators, in terms of statistical significance, quality, and coverage of the parameter space. In particular, we infer that these indicators show shallower abundance dependencies and statistically significant offsets compared to other calibrations. The O3N2 and N2 calibrations can be empirically applied to derive oxygen abundances from either direct abundance determinations, with random errors of 0.18 and 0.16 dex, respectively, or from indirect ones (but based on a large amount of data), reaching an average precision of 0.08 and 0.09 dex (random) and 0.02 and 0.08 dex (systematic; compared to the direct estimations), respectively.
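For reference, the two strong-line indices being recalibrated are defined from ratios of bright emission lines; the revised linear coefficients themselves are given in the paper and are not reproduced here.

```latex
% Definitions of the strong-line metallicity indices reviewed in the paper. Both are
% used in linear calibrations of the form 12 + log(O/H) = a + b * index.
\mathrm{O3N2} \;=\; \log\!\left(\frac{[\mathrm{O\,III}]\,\lambda5007}{\mathrm{H}\beta}\times
\frac{\mathrm{H}\alpha}{[\mathrm{N\,II}]\,\lambda6583}\right),
\qquad
\mathrm{N2} \;=\; \log\!\left(\frac{[\mathrm{N\,II}]\,\lambda6583}{\mathrm{H}\alpha}\right)
```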

Journal ArticleDOI
TL;DR: In this paper, the authors review the geodynamic evolution of the Aegean-Anatolia region and discuss strain localisation there over geological time. They favour a model in which slab retreat is the main driving engine, successive slab-tearing episodes are the main causes of this stepwise strain localisation, and the inherited heterogeneity of the crust is a major factor in localising detachments.

Journal ArticleDOI
TL;DR: It is observed that, compared with a memory control group, compassion training elicited activity in a neural network including the medial orbitofrontal cortex, putamen, pallidum, and ventral tegmental area--brain regions previously associated with positive affect and affiliation.
Abstract: The development of social emotions such as compassion is crucial for successful social interactions as well as for the maintenance of mental and physical health, especially when confronted with distressing life events. Yet, the neural mechanisms supporting the training of these emotions are poorly understood. To study affective plasticity in healthy adults, we measured functional neural and subjective responses to witnessing the distress of others in a newly developed task (Socio-affective Video Task). Participants' initial empathic responses to the task were accompanied by negative affect and activations in the anterior insula and anterior medial cingulate cortex--a core neural network underlying empathy for pain. Whereas participants reacted with negative affect before training, compassion training increased positive affective experiences, even in response to witnessing others in distress. On the neural level, we observed that, compared with a memory control group, compassion training elicited activity in a neural network including the medial orbitofrontal cortex, putamen, pallidum, and ventral tegmental area--brain regions previously associated with positive affect and affiliation. Taken together, these findings suggest that the deliberate cultivation of compassion offers a new coping strategy that fosters positive affect even when confronted with the distress of others.

Journal ArticleDOI
TL;DR: In this article, the authors studied the 10 pc-long L1495/B213 complex in Taurus to investigate how dense cores have condensed out of the lower density cloud material.
Abstract: Context. Core condensation is a critical step in the star-formation process, but it is still poorly characterized observationally. Aims. We have studied the 10 pc-long L1495/B213 complex in Taurus to investigate how dense cores have condensed out of the lower density cloud material. Methods. We observed L1495/B213 in C^18O(1−0), N_2H^+(1−0), and SO(J_N = 3_2–2_1) with the 14 m FCRAO telescope, and complemented the data with dust continuum observations using APEX (870 μm) and IRAM 30 m (1200 μm). Results. From the N_2H^+ emission, we identify 19 dense cores, some starless and some protostellar. They are not distributed uniformly, but tend to cluster with relative separations on the order of 0.25 pc. From the C^18O emission, we identify multiple velocity components in the gas. We have characterized them by fitting Gaussians to the spectra and by studying the distribution of the fits in position–position–velocity space. In this space, the C^18O components appear as velocity-coherent structures, and we identify them automatically using a dedicated algorithm (FIVE: Friends In VElocity). Using this algorithm, we identify 35 filamentary components with typical lengths of 0.5 pc, sonic internal velocity dispersions, and mass-per-unit length close to the stability threshold of isothermal cylinders at 10 K. Core formation seems to have occurred inside the filamentary components via fragmentation, with a few fertile components with higher mass-per-unit length being responsible for most cores in the cloud. On large scales, the filamentary components appear grouped into families, which we refer to as bundles. Conclusions. Core formation in L1495/B213 has proceeded by hierarchical fragmentation. The cloud fragmented first into several pc-scale regions. Each of these regions later fragmented into velocity-coherent filaments of about 0.5 pc in length. Finally, a small number of these filaments fragmented quasi-statically and produced the individual dense cores we see today.
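In the same spirit as the FIVE (Friends In VElocity) algorithm, a friends-of-friends grouping in position-position-velocity space can be sketched as below; the linking lengths and synthetic data are invented, and the actual algorithm in the paper differs in detail.

```python
# Hypothetical friends-of-friends grouping in position-position-velocity space,
# loosely in the spirit of FIVE; linking scales and data are invented.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 200)     # position along the cloud [pc]
y = rng.uniform(0.0, 1.0, 200)      # transverse position [pc]
v = rng.normal(6.0, 0.5, 200)       # line-of-sight velocity [km/s]

dx, dv = 0.2, 0.3                   # linking lengths in position [pc] and velocity [km/s]
points = np.column_stack([x / dx, y / dx, v / dv])   # rescale so one radius applies

tree = cKDTree(points)
links = tree.sparse_distance_matrix(tree, max_distance=1.0)   # pairs of "friends"
n_groups, labels = connected_components(links, directed=False)
print(n_groups)                     # number of velocity-coherent groups found
```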

Journal ArticleDOI
TL;DR: A framework for characterizing various dimensions of quality control in crowdsourcing systems, a critical issue, is proposed.
Abstract: As a new distributed computing model, crowdsourcing lets people leverage the crowd's intelligence and wisdom toward solving problems. This article proposes a framework for characterizing various dimensions of quality control in crowdsourcing systems, a critical issue. The authors briefly review existing quality-control approaches, identify open issues, and look to future research directions. In the Web extra, the authors discuss both design-time and runtime approaches in more detail.
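One of the simplest run-time quality-control mechanisms surveyed in this literature is redundant task assignment with majority voting over worker answers; a minimal illustration with invented task names and data:

```python
# Minimal illustration of quality control by redundancy + majority voting.
from collections import Counter

def majority_vote(answers):
    """Return the most common answer and the fraction of workers agreeing with it."""
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / len(answers)

worker_answers = {"task-1": ["cat", "cat", "dog"],
                  "task-2": ["dog", "dog", "dog"]}
for task, answers in worker_answers.items():
    label, agreement = majority_vote(answers)
    print(task, label, round(agreement, 2))   # e.g. task-1 cat 0.67
```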

Journal ArticleDOI
TL;DR: The present analysis provides a useful framework to identify priorities for future research in order to achieve more robust risk assessments of nanopesticides.
Abstract: Published literature has been reviewed in order to (a) explore the (potential) applications of nanotechnology in pesticide formulation, (b) identify possible impacts on environmental fate, and (c) analyze the suitability of current exposure assessment procedures to account for the novel properties of nanopesticides within the EU regulatory context. The term nanopesticide covers a wide variety of products and cannot be considered to represent a single category. Many nanoformulations combine several surfactants, polymers, and metal nanoparticles in the nanometer size range. The aims of nanoformulations are generally common to other pesticide formulations, these being to increase the apparent solubility of poorly soluble active ingredients, to release the active ingredient in a slow/targeted manner and/or to protect against premature degradation. Nanoformulations are thus expected to (a) have significant impacts on the fate of active ingredients and/or (b) introduce new ingredients for which the environmenta...

Journal ArticleDOI
TL;DR: This study extends previous models of social cognition and shows that although shared neural networks may underlie emotional understanding in some situations, an additional mechanism subserved by rSMG is needed to avoid biased social judgments in other situations.
Abstract: Humans tend to use the self as a reference point to perceive the world and gain information about other people's mental states. However, applying such a self-referential projection mechanism in situations where it is inappropriate can result in egocentrically biased judgments. To assess egocentricity bias in the emotional domain (EEB), we developed a novel visuo-tactile paradigm assessing the degree to which empathic judgments are biased by one's own emotions if they are incongruent to those of the person we empathize with. A first behavioral experiment confirmed the existence of such EEB, and two independent fMRI experiments revealed that overcoming biased empathic judgments is associated with increased activation in the right supramarginal gyrus (rSMG), in a location distinct from activations in right temporoparietal junction reported in previous social cognition studies. Using temporary disruption of rSMG with repetitive transcranial magnetic stimulation resulted in a substantial increase of EEB, and so did reducing visuo-tactile stimulation time as shown in an additional behavioral experiment. Our findings provide converging evidence from multiple methods and experiments that rSMG is crucial for overcoming emotional egocentricity. Effective connectivity analyses suggest that this may be achieved by early perceptual regulation processes disambiguating proprioceptive first-person information (touch) from exteroceptive third-person information (vision) during incongruency between self- and other-related affective states. Our study extends previous models of social cognition. It shows that although shared neural networks may underlie emotional understanding in some situations, an additional mechanism subserved by rSMG is needed to avoid biased social judgments in other situations.

Journal ArticleDOI
TL;DR: Sulfur-dependent archaea are confined mostly to hot environments, but metal leaching by acidophiles and reduction of sulfate by anaerobic, nonthermophilic methane oxidizers have a potential impact on the environment.
Abstract: Archaea constitute a considerable fraction of the microbial biomass on Earth. Like Bacteria they have evolved a variety of energy metabolisms using organic and/or inorganic electron donors and acceptors, and many of them are able to fix carbon from inorganic sources. Archaea thus play crucial roles in the Earth’s global geochemical cycles and influence greenhouse gas emissions. Methanogenesis and anaerobic methane oxidation are important steps in the carbon cycle; both are performed exclusively by anaerobic archaea. Oxidation of ammonia to nitrite is performed by Thaumarchaeota. They represent the only archaeal group that resides in large numbers in the global aerobic terrestrial and marine environments on Earth. Sulfur-dependent archaea are confined mostly to hot environments, but metal leaching by acidophiles and reduction of sulfate by anaerobic, nonthermophilic methane oxidizers have a potential impact on the environment. The metabolisms of a large number of archaea, in particular those dominating the subsurface, remain to be explored.

Journal ArticleDOI
TL;DR: In this article, a converged ab initio calculation of the optical absorption spectra of single-layer, double-layer and bulk MoS was presented, where the authors explicitly include spin-orbit coupling, using the full spinorial Kohn-Sham wave functions as input.
Abstract: We present converged ab initio calculations of the optical absorption spectra of single-layer, double-layer, and bulk MoS2. Both the quasiparticle-energy calculations (on the level of the GW approximation) and the calculation of the absorption spectra (on the level of the Bethe-Salpeter equation) explicitly include spin-orbit coupling, using the full spinorial Kohn-Sham wave functions as input. Without excitonic effects, the absorption spectra would have the form of a step function, corresponding to the joint density of states of a parabolic band dispersion in two dimensions. This profile is deformed by a pronounced bound excitonic peak below the continuum onset. The peak is split by spin-orbit interaction in the case of single-layer and (mostly) by interlayer interaction in the case of double-layer and bulk MoS2. The resulting absorption spectra are thus very similar in the three cases, but the interpretation of the spectra is different. Differences in the spectra can be seen in the shape of the absorption spectra at 3 eV where the spectra of the single and double layers are dominated by a strongly bound exciton.
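The step-function background mentioned in the abstract is the textbook joint density of states of parabolic bands in two dimensions; excitonic effects then add bound peaks below this onset:

```latex
% Joint density of states for 2D parabolic bands (mu: reduced electron-hole mass,
% E_g: band gap, Theta: Heaviside step function).
J(E) \;=\; \frac{\mu}{\pi\hbar^{2}}\,\Theta\!\left(E - E_{g}\right)
```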

Journal ArticleDOI
TL;DR: In this article, direct-bonded monocrystalline multilayers are used for optical interferometry, which exhibit both intrinsically low mechanical loss and high optical quality.
Abstract: Thermally induced fluctuations impose a fundamental limit on precision measurement. In optical interferometry, the current bounds of stability and sensitivity are dictated by the excess mechanical damping of the high-reflectivity coatings that comprise the cavity end mirrors. Over the last decade, the dissipation of these amorphous multilayer reflectors has at best been reduced by a factor of two. Here, we demonstrate a new paradigm in optical coating technology based on direct-bonded monocrystalline multilayers, which exhibit both intrinsically low mechanical loss and high optical quality.

Journal ArticleDOI
TL;DR: In this article, the authors define the realized systemic risk beta as the total time-varying marginal effect of a firm's Value-at-Risk (VaR) on the system's VaR.
Abstract: We propose the realized systemic risk beta as a measure for financial companies’ contribution to systemic risk given network interdependence between firms’ tail risk exposures. Conditional on statistically pre-identified network spillover effects and market and balance sheet information, we define the realized systemic risk beta as the total time-varying marginal effect of a firm’s Value-at-Risk (VaR) on the system’s VaR. Suitable statistical inference reveals a multitude of relevant risk spillover channels and determines companies’ systemic importance in the U.S. financial system. Our approach can be used to monitor companies’ systemic importance allowing for a transparent macroprudential regulation.
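Schematically, and with notation that is only illustrative of the abstract's wording rather than taken from the paper, the realized systemic risk beta can be written as the time-varying marginal effect of a firm's tail risk on the system's tail risk:

```latex
% Illustrative notation only: beta^s_{i,t} is firm i's realized systemic risk beta at
% time t, i.e. the total marginal effect of its VaR on the system VaR, conditional on
% network spillovers, market and balance-sheet information.
\beta^{s}_{i,t} \;=\; \frac{\partial\, \mathrm{VaR}^{\mathrm{system}}_{t}}
{\partial\, \mathrm{VaR}^{(i)}_{t}}
```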

Journal ArticleDOI
TL;DR: A demonstration of such controlled interactions by cavity cooling the center-of-mass motion of an optically trapped submicron particle paves the way for a light–matter interface that can enable room-temperature quantum experiments with mesoscopic mechanical systems.
Abstract: The coupling of a levitated submicron particle and an optical cavity field promises access to a unique parameter regime both for macroscopic quantum experiments and for high-precision force sensing. We report a demonstration of such controlled interactions by cavity cooling the center-of-mass motion of an optically trapped submicron particle. This paves the way for a light–matter interface that can enable room-temperature quantum experiments with mesoscopic mechanical systems.