
Showing papers by "ETH Zurich" published in 2010


Journal ArticleDOI
TL;DR: In this paper, the authors provide a synthesis of past research on the role of soil moisture for the climate system, based on both modelling and observational studies, focusing on soil moisture-temperature and soil moisture-precipitation feedbacks and their possible modifications with climate change.

3,402 citations


Journal ArticleDOI
TL;DR: Standard nomenclature, outlined in this article, should be followed for reporting of results of µCT‐derived bone morphometry and density measurements.
Abstract: Use of high-resolution micro-computed tomography (microCT) imaging to assess trabecular and cortical bone morphology has grown immensely. There are several commercially available microCT systems, each with different approaches to image acquisition, evaluation, and reporting of outcomes. This lack of consistency makes it difficult to interpret reported results and to compare findings across different studies. This article addresses this critical need for standardized terminology and consistent reporting of parameters related to image acquisition and analysis, and key outcome assessments, particularly with respect to ex vivo analysis of rodent specimens. Thus the guidelines herein provide recommendations regarding (1) standardized terminology and units, (2) information to be included in describing the methods for a given experiment, and (3) a minimal set of outcome variables that should be reported. Whereas the specific research objective will determine the experimental design, these guidelines are intended to ensure accurate and consistent reporting of microCT-derived bone morphometry and density measurements. In particular, the methods section for papers that present microCT-based outcomes must include details of the following scan aspects: (1) image acquisition, including the scanning medium, X-ray tube potential, and voxel size, as well as clear descriptions of the size and location of the volume of interest and the method used to delineate trabecular and cortical bone regions, and (2) image processing, including the algorithms used for image filtration and the approach used for image segmentation. Morphometric analyses should be based on 3D algorithms that do not rely on assumptions about the underlying structure whenever possible. When reporting microCT results, the minimal set of variables that should be used to describe trabecular bone morphometry includes bone volume fraction and trabecular number, thickness, and separation. The minimal set of variables that should be used to describe cortical bone morphometry includes total cross-sectional area, cortical bone area, cortical bone area fraction, and cortical thickness. Other variables also may be appropriate depending on the research question and technical quality of the scan. Standard nomenclature, outlined in this article, should be followed for reporting of results.

3,298 citations


Journal ArticleDOI
22 Jul 2010-Nature
TL;DR: In this paper, Cai et al. used surface-assisted coupling of the precursors into linear polyphenylenes and their subsequent cyclodehydrogenation to produce GNRs of different topologies and widths.
Abstract: Graphene nanoribbons, narrow straight-edged strips of the single-atom-thick sheet form of carbon, are predicted to exhibit remarkable properties, making them suitable for future electronic applications. Before this potential can be realized, more chemically precise methods of production will be required. Cai et al. report a step towards that goal with the development of a bottom-up fabrication method that produces atomically precise graphene nanoribbons of different topologies and widths. The process involves the deposition of precursor monomers with structures that 'encode' the topology and width of the desired ribbon end-product onto a metal surface. Surface-assisted coupling of the precursors into linear polyphenylenes is then followed by cyclodehydrogenation. Given the method's versatility and precision, it could even provide a route to more unusual graphene nanoribbon structures with tuned chemical and electronic properties. Graphene nanoribbons (GNRs) have structure-dependent electronic properties that make them attractive for the fabrication of nanoscale electronic devices, but exploiting this potential has been hindered by the lack of precise production methods. Here the authors demonstrate how to reliably produce different GNRs, using precursor monomers that encode the structure of the targeted nanoribbon and are converted into GNRs by means of surface-assisted coupling. Graphene nanoribbons—narrow and straight-edged stripes of graphene, or single-layer graphite—are predicted to exhibit electronic properties that make them attractive for the fabrication of nanoscale electronic devices1,2,3. In particular, although the two-dimensional parent material graphene4,5 exhibits semimetallic behaviour, quantum confinement and edge effects2,6 should render all graphene nanoribbons with widths smaller than 10 nm semiconducting. But exploring the potential of graphene nanoribbons is hampered by their limited availability: although they have been made using chemical7,8,9, sonochemical10 and lithographic11,12 methods as well as through the unzipping of carbon nanotubes13,14,15,16, the reliable production of graphene nanoribbons smaller than 10 nm with chemical precision remains a significant challenge. Here we report a simple method for the production of atomically precise graphene nanoribbons of different topologies and widths, which uses surface-assisted coupling17,18 of molecular precursors into linear polyphenylenes and their subsequent cyclodehydrogenation19,20. The topology, width and edge periphery of the graphene nanoribbon products are defined by the structure of the precursor monomers, which can be designed to give access to a wide range of different graphene nanoribbons. We expect that our bottom-up approach to the atomically precise fabrication of graphene nanoribbons will finally enable detailed experimental investigations of the properties of this exciting class of materials. It should even provide a route to graphene nanoribbon structures with engineered chemical and electronic properties, including the theoretically predicted intraribbon quantum dots21, superlattice structures22 and magnetic devices based on specific graphene nanoribbon edge states3.

2,905 citations


Journal ArticleDOI
Koji Nakamura, K. Hagiwara, Ken Ichi Hikasa, Hitoshi Murayama, and 180 more authors (92 institutions)
TL;DR: In this article, a biennial review summarizes much of particle physics: using data from previous editions, plus 2158 new measurements from 551 papers, the authors list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons.
Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2158 new measurements from 551 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on neutrino mass, mixing, and oscillations, QCD, top quark, CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, fragmentation functions, particle detectors for accelerator and non-accelerator physics, magnetic monopoles, cosmological parameters, and big bang cosmology.

2,788 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore the simple interrelationships between mass, star formation rate, and environment in the SDSS, zCOSMOS, and other deep surveys.
Abstract: We explore the simple inter-relationships between mass, star formation rate, and environment in the SDSS, zCOSMOS, and other deep surveys. We take a purely empirical approach in identifying those features of galaxy evolution that are demanded by the data and then explore the analytic consequences of these. We show that the differential effects of mass and environment are completely separable to z ~ 1, leading to the idea of two distinct processes of "mass quenching" and "environment quenching." The effect of environment quenching, at fixed over-density, evidently does not change with epoch to z ~ 1 in zCOSMOS, suggesting that the environment quenching occurs as large-scale structure develops in the universe, probably through the cessation of star formation in 30%-70% of satellite galaxies. In contrast, mass quenching appears to be a more dynamic process, governed by a quenching rate. We show that the observed constancy of the Schechter M* and α_s for star-forming galaxies demands that the quenching of galaxies around and above M* must follow a rate that is statistically proportional to their star formation rates (or closely mimic such a dependence). We then postulate that this simple mass-quenching law in fact holds over a much broader range of stellar mass (2 dex) and cosmic time. We show that the combination of these two quenching processes, plus some additional quenching due to merging naturally produces (1) a quasi-static single Schechter mass function for star-forming galaxies with an exponential cutoff at a value M* that is set uniquely by the constant of proportionality between the star formation and mass quenching rates and (2) a double Schechter function for passive galaxies with two components. The dominant component (at high masses) is produced by mass quenching and has exactly the same M* as the star-forming galaxies but a faint end slope that differs by Δα_s ~ 1. The other component is produced by environment effects and has the same M* and α_s as the star-forming galaxies but an amplitude that is strongly dependent on environment. Subsequent merging of quenched galaxies will modify these predictions somewhat in the denser environments, mildly increasing M* and making α_s slightly more negative. All of these detailed quantitative inter-relationships between the Schechter parameters of the star-forming and passive galaxies, across a broad range of environments, are indeed seen to high accuracy in the SDSS, lending strong support to our simple empirically based model. We find that the amount of post-quenching "dry merging" that could have occurred is quite constrained. Our model gives a prediction for the mass function of the population of transitory objects that are in the process of being quenched. Our simple empirical laws for the cessation of star formation in galaxies also naturally produce the "anti-hierarchical" run of mean age with mass for passive galaxies, as well as the qualitative variation of formation timescale indicated by the relative α-element abundances.
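For readers unfamiliar with the notation above, the Schechter forms referred to throughout this abstract are the standard ones; the sketch below uses the conventional symbols (φ*, M*, α_s), which are generic placeholders rather than values taken from the paper.

```latex
% Single Schechter mass function (star-forming galaxies):
\phi(m)\,\mathrm{d}m \;=\; \phi^{*}\left(\frac{m}{M^{*}}\right)^{\alpha_{s}}
  \exp\!\left(-\frac{m}{M^{*}}\right)\frac{\mathrm{d}m}{M^{*}}

% Double Schechter form (passive galaxies): two components sharing a common M*,
% one produced by mass quenching and one by environment quenching:
\phi_{\mathrm{passive}}(m)\,\mathrm{d}m \;=\;
  \left[\phi_{1}^{*}\left(\frac{m}{M^{*}}\right)^{\alpha_{1}}
      + \phi_{2}^{*}\left(\frac{m}{M^{*}}\right)^{\alpha_{2}}\right]
  \exp\!\left(-\frac{m}{M^{*}}\right)\frac{\mathrm{d}m}{M^{*}}
```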

1,860 citations


Journal ArticleDOI
12 Nov 2010-Science
TL;DR: It is shown that Andean uplift was crucial for the evolution of Amazonian landscapes and ecosystems, and that current biodiversity patterns are rooted deep in the pre-Quaternary.
Abstract: The Amazonian rainforest is arguably the most species-rich terrestrial ecosystem in the world, yet the timing of the origin and evolutionary causes of this diversity are a matter of debate. We review the geologic and phylogenetic evidence from Amazonia and compare it with uplift records from the Andes. This uplift and its effect on regional climate fundamentally changed the Amazonian landscape by reconfiguring drainage patterns and creating a vast influx of sediments into the basin. On this “Andean” substrate, a region-wide edaphic mosaic developed that became extremely rich in species, particularly in Western Amazonia. We show that Andean uplift was crucial for the evolution of Amazonian landscapes and ecosystems, and that current biodiversity patterns are rooted deep in the pre-Quaternary.

1,790 citations


Journal ArticleDOI
TL;DR: The aim of this review is to provide a comprehensive survey of the technological state of the art in medical microrobots, to explore the potential impact of medical microrobots, and to inspire future research in this field.
Abstract: Microrobots have the potential to revolutionize many aspects of medicine. These untethered, wirelessly controlled and powered devices will make existing therapeutic and diagnostic procedures less invasive and will enable new procedures never before possible. The aim of this review is threefold: first, to provide a comprehensive survey of the technological state of the art in medical microrobots; second, to explore the potential impact of medical microrobots and inspire future research in this field; and third, to provide a collection of valuable information and engineering tools for the design of medical microrobots.

1,580 citations


Journal ArticleDOI
TL;DR: In this paper, the main groups of aquatic contaminants, their effects on human health, and approaches to mitigate pollution of freshwater resources are reviewed, with emphasis on inorganic and organic micropollutants including toxic metals and metalloids as well as a large variety of synthetic organic chemicals.
Abstract: Water quality issues are a major challenge that humanity is facing in the twenty-first century. Here, we review the main groups of aquatic contaminants, their effects on human health, and approaches to mitigate pollution of freshwater resources. Emphasis is placed on chemical pollution, particularly on inorganic and organic micropollutants including toxic metals and metalloids as well as a large variety of synthetic organic chemicals. Some aspects of waterborne diseases and the urgent need for improved sanitation in developing countries are also discussed. The review addresses current scientific advances to cope with the great diversity of pollutants. It is organized along the different temporal and spatial scales of global water pollution. Persistent organic pollutants (POPs) have affected water systems on a global scale for more than five decades; during that time geogenic pollutants, mining operations, and hazardous waste sites have been the most relevant sources of long-term regional and local water pollution. Agricultural chemicals and wastewater sources exert shorter-term effects on regional to local scales.

1,407 citations


Book
Jean-Raymond Abrial
01 May 2010
TL;DR: This book presents a mathematical approach to modelling and designing systems using an extension of the B formal method: Event-B, which allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs.
Abstract: A practical text suitable for an introductory or advanced course in formal methods, this book presents a mathematical approach to modelling and designing systems using an extension of the B formal method: Event-B. Based on the idea of refinement, the author's systematic approach allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs. Readers will learn how to build models of programs and, more generally, discrete systems, but this is all done with practice in mind. The numerous examples provided arise from various sources of computer system developments, including sequential programs, concurrent programs and electronic circuits. The book also contains a large number of exercises and projects ranging in difficulty. Each of the examples included in the book has been proved using the Rodin Platform tool set, which is available free for download at www.event-b.org.

1,359 citations


Journal ArticleDOI
TL;DR: All tissues and organs were reconstructed as three-dimensional unstructured triangulated surface objects, yielding high precision images of individual features of the body, which greatly enhances the meshing flexibility and the accuracy in comparison with the traditional voxel-based representation of anatomical models.
Abstract: The objective of this study was to develop anatomically correct whole body human models of an adult male (34 years old), an adult female (26 years old) and two children (an 11-year-old girl and a six-year-old boy) for the optimized evaluation of electromagnetic exposure. These four models are referred to as the Virtual Family. They are based on high resolution magnetic resonance (MR) images of healthy volunteers. More than 80 different tissue types were distinguished during the segmentation. To improve the accuracy and the effectiveness of the segmentation, a novel semi-automated tool was used to analyze and segment the data. All tissues and organs were reconstructed as three-dimensional (3D) unstructured triangulated surface objects, yielding high precision images of individual features of the body. This greatly enhances the meshing flexibility and the accuracy with respect to thin tissue layers and small organs in comparison with the traditional voxel-based representation of anatomical models. Conformal computational techniques were also applied. The techniques and tools developed in this study can be used to more effectively develop future models and further improve the accuracy of the models for various applications. For research purposes, the four models are provided for free to the scientific community.

1,347 citations


Journal ArticleDOI
TL;DR: The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
Abstract: We consider efficient methods for the recovery of block-sparse signals, i.e., sparse signals that have nonzero entries occurring in clusters, from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed ℓ2/ℓ1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
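The block-version of orthogonal matching pursuit mentioned above is simple to sketch. The following is a minimal NumPy illustration of the greedy block-selection idea; the uniform block size, fixed number of iterations, and toy dictionary are assumptions made for the example, not the paper's setup.

```python
# Minimal sketch of Block Orthogonal Matching Pursuit (Block-OMP):
# greedily pick the block whose columns correlate most with the residual,
# then re-fit by least squares on all selected blocks.
import numpy as np

def block_omp(D, y, block_size, n_blocks_to_select):
    """Recover a block-sparse x such that y ~= D @ x (illustrative only)."""
    n, N = D.shape
    assert N % block_size == 0
    n_blocks = N // block_size
    residual = y.copy()
    selected = []                      # indices of chosen blocks
    x = np.zeros(N)
    for _ in range(n_blocks_to_select):
        corr = D.T @ residual          # correlation of every column with residual
        block_energy = np.array([
            np.linalg.norm(corr[b * block_size:(b + 1) * block_size])
            for b in range(n_blocks)])
        block_energy[selected] = -np.inf          # never re-select a block
        selected.append(int(np.argmax(block_energy)))
        cols = np.concatenate([np.arange(b * block_size, (b + 1) * block_size)
                               for b in selected])
        coef, *_ = np.linalg.lstsq(D[:, cols], y, rcond=None)
        x[:] = 0.0
        x[cols] = coef
        residual = y - D @ x
    return x

# toy usage: 2 active blocks of size 4 out of 16 blocks, 32 measurements
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
x_true = np.zeros(64)
x_true[8:12] = rng.standard_normal(4)
x_true[40:44] = rng.standard_normal(4)
y = D @ x_true
x_hat = block_omp(D, y, block_size=4, n_blocks_to_select=2)
# should be near zero when recovery succeeds
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```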

Journal ArticleDOI
TL;DR: The potential for larger O2 declines in the future suggests the need for an improved observing system for tracking ocean O2 changes, and an important consequence may be an expansion in the area and volume of so-called oxygen minimum zones.
Abstract: Ocean warming and increased stratification of the upper ocean caused by global climate change will likely lead to declines in dissolved O2 in the ocean interior (ocean deoxygenation) with implications for ocean productivity, nutrient cycling, carbon cycling, and marine habitat. Ocean models predict declines of 1 to 7% in the global ocean O2 inventory over the next century, with declines continuing for a thousand years or more into the future. An important consequence may be an expansion in the area and volume of so-called oxygen minimum zones, where O2 levels are too low to support many macrofauna and profound changes in biogeochemical cycling occur. Significant deoxygenation has occurred over the past 50 years in the North Pacific and tropical oceans, suggesting larger changes are looming. The potential for larger O2 declines in the future suggests the need for an improved observing system for tracking ocean O2 changes.

Journal ArticleDOI
24 Dec 2010-Science
TL;DR: By using a solar cavity-receiver reactor, the oxygen uptake and release capacity of cerium oxide and facile catalysis at elevated temperatures were combined to thermochemically dissociate CO2 and H2O, yielding CO and H2, respectively; stable and rapid generation of fuel was demonstrated over 500 cycles.
Abstract: Because solar energy is available in large excess relative to current rates of energy consumption, effective conversion of this renewable yet intermittent resource into a transportable and dispatchable chemical fuel may ensure the goal of a sustainable energy future. However, low conversion efficiencies, particularly with CO_2 reduction, as well as utilization of precious materials have limited the practical generation of solar fuels. By using a solar cavity-receiver reactor, we combined the oxygen uptake and release capacity of cerium oxide and facile catalysis at elevated temperatures to thermochemically dissociate CO_2 and H_2O, yielding CO and H_2, respectively. Stable and rapid generation of fuel was demonstrated over 500 cycles. Solar-to-fuel efficiencies of 0.7 to 0.8% were achieved and shown to be largely limited by the system scale and design rather than by chemistry.
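For context, the two-step ceria redox cycle underlying this solar thermochemical approach is conventionally written as below, with δ the oxygen non-stoichiometry; this is the standard textbook form of the cycle, not a verbatim excerpt from the paper.

```latex
% High-temperature, solar-driven reduction step (endothermic):
\mathrm{CeO_2} \;\longrightarrow\; \mathrm{CeO_{2-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_2}

% Lower-temperature oxidation (fuel-production) steps:
\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{CeO_2} + \delta\,\mathrm{H_2}
\mathrm{CeO_{2-\delta}} + \delta\,\mathrm{CO_2} \;\longrightarrow\; \mathrm{CeO_2} + \delta\,\mathrm{CO}
```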

Proceedings ArticleDOI
23 Aug 2010
TL;DR: It is shown that both problems can be overcome by replacing the conventional point estimate of accuracy by an estimate of the posterior distribution of the balanced accuracy.
Abstract: Evaluating the performance of a classification algorithm critically requires a measure of the degree to which unseen examples have been identified with their correct class labels. In practice, generalizability is frequently estimated by averaging the accuracies obtained on individual cross-validation folds. This procedure, however, is problematic in two ways. First, it does not allow for the derivation of meaningful confidence intervals. Second, it leads to an optimistic estimate when a biased classifier is tested on an imbalanced dataset. We show that both problems can be overcome by replacing the conventional point estimate of accuracy by an estimate of the posterior distribution of the balanced accuracy.
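A minimal Monte Carlo sketch of the proposed idea, assuming independent flat Beta(1,1) priors on the per-class accuracies of a binary classifier; the paper derives the posterior of the balanced accuracy analytically, whereas here it is simply sampled, and the confusion-matrix counts are invented for illustration.

```python
# Sketch: posterior distribution of the balanced accuracy for a binary
# classifier, assuming flat Beta(1, 1) priors on sensitivity and specificity.
# The counts below are made-up illustration values, not from the paper.
import numpy as np

def balanced_accuracy_posterior(tp, fn, tn, fp, n_samples=100_000, seed=0):
    """Draw samples from the posterior of the balanced accuracy."""
    rng = np.random.default_rng(seed)
    sens = rng.beta(tp + 1, fn + 1, n_samples)   # accuracy on positives
    spec = rng.beta(tn + 1, fp + 1, n_samples)   # accuracy on negatives
    return 0.5 * (sens + spec)

# imbalanced example: 90 positives, 10 negatives, classifier biased to "positive"
samples = balanced_accuracy_posterior(tp=85, fn=5, tn=3, fp=7)
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"posterior mean balanced accuracy: {samples.mean():.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

On this deliberately imbalanced example the conventional accuracy would be 0.88, while the posterior mean balanced accuracy is roughly 0.63 with a meaningful credible interval, which is the optimism the balanced-accuracy posterior is meant to expose.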

Journal ArticleDOI
08 Jul 2010-Nature
TL;DR: The root-mean-square charge radius, rp, has been determined with an accuracy of 2 per cent by electron–proton scattering experiments, and the present most accurate value of rp (with an uncertainty of 1 per cent) is given by the CODATA compilation of physical constants.
Abstract: Considering that the proton is a basic subatomic component of all ordinary matter — as well as being ubiquitous in its solo role as the hydrogen ion H+ — there are some surprising gaps in our knowledge of its structure and behaviour. A collaborative project to determine the root-mean-square charge radius of the proton to better than the 1% accuracy of the current 'best' value suggests that those knowledge gaps may be greater than was thought. The new determination comes from a technically challenging spectroscopic experiment — the measurement of the Lamb shift (the energy difference between a specific pair of energy states) in 'muonic hydrogen', an exotic atom in which the electron is replaced by its heavier twin, the muon. The result is unexpected: a charge radius about 4% smaller than the previous value. The discrepancy remains unexplained. Possible implications are that the value of the most accurately determined fundamental constant, the Rydberg constant, will need to be revised — or that the validity of quantum electrodynamics theory is called into question. Here, a technically challenging spectroscopic experiment is described: the measurement of the muonic Lamb shift. The results lead to a new determination of the charge radius of the proton. The new value is 5.0 standard deviations smaller than the previous world average, a large discrepancy that remains unexplained. Possible implications of the new finding are that the value of the Rydberg constant will need to be revised, or that the validity of quantum electrodynamics theory is called into question. The proton is the primary building block of the visible Universe, but many of its properties—such as its charge radius and its anomalous magnetic moment—are not well understood. The root-mean-square charge radius, rp, has been determined with an accuracy of 2 per cent (at best) by electron–proton scattering experiments1,2. The present most accurate value of rp (with an uncertainty of 1 per cent) is given by the CODATA compilation of physical constants3. This value is based mainly on precision spectroscopy of atomic hydrogen4,5,6,7 and calculations of bound-state quantum electrodynamics (QED; refs 8, 9). The accuracy of rp as deduced from electron–proton scattering limits the testing of bound-state QED in atomic hydrogen as well as the determination of the Rydberg constant (currently the most accurately measured fundamental physical constant3). An attractive means to improve the accuracy in the measurement of rp is provided by muonic hydrogen (a proton orbited by a negative muon); its much smaller Bohr radius compared to ordinary atomic hydrogen causes enhancement of effects related to the finite size of the proton. In particular, the Lamb shift10 (the energy difference between the 2S1/2 and 2P1/2 states) is affected by as much as 2 per cent. Here we use pulsed laser spectroscopy to measure a muonic Lamb shift of 49,881.88(76) GHz. On the basis of present calculations11,12,13,14,15 of fine and hyperfine splittings and QED terms, we find rp = 0.84184(67) fm, which differs by 5.0 standard deviations from the CODATA value3 of 0.8768(69) fm. Our result implies that either the Rydberg constant has to be shifted by −110 kHz/c (4.9 standard deviations), or the calculations of the QED effects in atomic hydrogen or muonic hydrogen atoms are insufficient.
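As a quick arithmetic check of the 5.0 standard deviation figure quoted above, combining the two stated uncertainties in quadrature gives:

```latex
\frac{\left|\,0.8768\ \mathrm{fm} - 0.84184\ \mathrm{fm}\,\right|}
     {\sqrt{(0.0069\ \mathrm{fm})^{2} + (0.00067\ \mathrm{fm})^{2}}}
  \;=\; \frac{0.0350\ \mathrm{fm}}{0.00693\ \mathrm{fm}} \;\approx\; 5.0
```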

Journal ArticleDOI
29 Apr 2010-Nature
TL;DR: In this paper, the Dicke phase transition in an open system formed by a Bose-Einstein condensate coupled to an optical cavity has been realized, and the phase transition is driven by infinitely long-range interactions between the condensed atoms, induced by two-photon processes involving the cavity mode and a pump field.
Abstract: A phase transition describes the sudden change of state of a physical system, such as melting or freezing. Quantum gases provide the opportunity to establish a direct link between experiments and generic models that capture the underlying physics. The Dicke model describes a collective matter-light interaction and has been predicted to show an intriguing quantum phase transition. Here we realize the Dicke quantum phase transition in an open system formed by a Bose-Einstein condensate coupled to an optical cavity, and observe the emergence of a self-organized supersolid phase. The phase transition is driven by infinitely long-range interactions between the condensed atoms, induced by two-photon processes involving the cavity mode and a pump field. We show that the phase transition is described by the Dicke Hamiltonian, including counter-rotating coupling terms, and that the supersolid phase is associated with a spontaneously broken spatial symmetry. The boundary of the phase transition is mapped out in quantitative agreement with the Dicke model. Our results should facilitate studies of quantum gases with long-range interactions and provide access to novel quantum phases.
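For reference, the Dicke Hamiltonian referred to above (including the counter-rotating terms) is conventionally written as follows, where ω is the cavity mode frequency, ω0 the effective two-level splitting, λ the collective coupling, N the atom number, and λc the critical coupling of the normal-to-superradiant transition; this is the textbook form, not notation copied from the paper.

```latex
H = \hbar\,\omega\, \hat{a}^{\dagger}\hat{a}
  + \hbar\,\omega_{0}\, \hat{J}_{z}
  + \frac{\hbar\,\lambda}{\sqrt{N}}\,
    \left(\hat{a}^{\dagger} + \hat{a}\right)\left(\hat{J}_{+} + \hat{J}_{-}\right),
\qquad
\lambda_{c} = \frac{\sqrt{\omega\,\omega_{0}}}{2}
```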

Journal ArticleDOI
TL;DR: While the multimodel average appears to still be useful in some situations, the results show that more quantitative methods to evaluate model performance are critical to maximize the value of climate change projections from global models.
Abstract: Recent coordinated efforts, in which numerous general circulation climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. Those multimodel ensembles sample initial conditions, parameters, and structural uncertainties in the model design, and they have prompted a variety of approaches to quantifying uncertainty in future climate change. International climate change assessments also rely heavily on these models. These assessments often provide equal-weighted averages as best-guess results, assuming that individual model biases will at least partly cancel and that a model average prediction is more likely to be correct than a prediction from a single model, based on the result that a multimodel average of present-day climate generally outperforms any individual model. This study outlines the motivation for using multimodel ensembles and discusses various challenges in interpreting them. Among these challenges are that the number of models in these ensembles is usually small, their distribution in the model or parameter space is unclear, and that extreme behavior is often not sampled. Model skill in simulating present-day climate conditions is shown to relate only weakly to the magnitude of predicted change. It is thus unclear by how much the confidence in future projections should increase based on improvements in simulating present-day conditions, a reduction of intermodel spread, or a larger number of models. Averaging model output may further lead to a loss of signal — for example, for precipitation change, where the predicted changes are spatially heterogeneous, such that the true expected change is very likely to be larger than suggested by a model average. Last, there is little agreement on metrics to separate "good" and "bad" models, and there is concern that model development, evaluation, and posterior weighting or ranking are all using the same datasets. While the multimodel average appears to still be useful in some situations, these results show that more quantitative methods to evaluate model performance are critical to maximize the value of climate change projections from global models.
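The "loss of signal" point can be made concrete with a toy example: if every model projects a local change of the same amplitude but at a different location, the equal-weighted ensemble mean is substantially damped relative to any single model. The sketch below is purely illustrative and uses made-up fields, not CMIP model output.

```python
# Toy illustration: averaging spatially shifted but equally strong anomalies
# damps the apparent change. Values are illustrative, not from the paper.
import numpy as np

n_grid, n_models = 100, 10
rng = np.random.default_rng(1)

# each model predicts a +1 precipitation anomaly bump, 10 cells wide,
# centred at a different (random) location along a 1-D "grid"
fields = np.zeros((n_models, n_grid))
for m in range(n_models):
    centre = rng.integers(10, 90)
    fields[m, centre - 5:centre + 5] = 1.0

ensemble_mean = fields.mean(axis=0)
print("max change in any single model :", fields.max())          # 1.0
print("max change in the ensemble mean:", ensemble_mean.max())   # typically well below 1.0
```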

Book ChapterDOI
05 Sep 2010
TL;DR: A novel method for unsupervised class segmentation on a set of images is presented; it alternates between segmenting object instances and learning a class model, based on a segmentation energy defined over all images at the same time that can be optimized efficiently by techniques used before in interactive segmentation.
Abstract: We propose a novel method for unsupervised class segmentation on a set of images. It alternates between segmenting object instances and learning a class model. The method is based on a segmentation energy defined over all images at the same time, which can be optimized efficiently by techniques used before in interactive segmentation. Over iterations, our method progressively learns a class model by integrating observations over all images. In addition to appearance, this model captures the location and shape of the class with respect to an automatically determined coordinate frame common across images. This frame allows us to build stronger shape and location models, similar to those used in object class detection. Our method is inspired by interactive segmentation methods [1], but it is fully automatic and learns models characteristic for the object class rather than specific to one particular object/image. We experimentally demonstrate on the Caltech4, Caltech101, and Weizmann horses datasets that our method (a) transfers class knowledge across images and this improves results compared to segmenting every image independently; (b) outperforms Grabcut [1] for the task of unsupervised segmentation; (c) offers competitive performance compared to the state-of-the-art in unsupervised segmentation and in particular it outperforms the topic model [2].

Journal ArticleDOI
TL;DR: It is suggested that changes in species diversity within and across trophic levels can significantly alter decomposition and this happens through various mechanisms that are broadly similar in forest floors and streams.
Abstract: Over 100 gigatons of terrestrial plant biomass are produced globally each year. Ninety percent of this biomass escapes herbivory and enters the dead organic matter pool, thus supporting complex detritus-based food webs that determine the critical balance between carbon mineralization and sequestration. How will changes in biodiversity affect this vital component of ecosystem functioning? Based on our analysis of concepts and experiments of leaf decomposition in forest floors and streams, we suggest that changes in species diversity within and across trophic levels can significantly alter decomposition. This happens through various mechanisms that are broadly similar in forest floors and streams. Differences in diversity effects between these systems relate to divergent habitat conditions and evolutionary trajectories of aquatic and terrestrial decomposers.

Journal ArticleDOI
Abstract: The eleventh generation of the International Geomagnetic Reference Field (IGRF) was adopted in December 2009 by the International Association of Geomagnetism and Aeronomy Working Group V-MOD. It updates the previous IGRF generation with a definitive main field model for epoch 2005.0, a main field model for epoch 2010.0, and a linear predictive secular variation model for 2010.0–2015.0. In this note the equations defining the IGRF model are provided along with the spherical harmonic coefficients for the eleventh generation. Maps of the magnetic declination, inclination and total intensity for epoch 2010.0 and their predicted rates of change for 2010.0–2015.0 are presented. The recent evolution of the South Atlantic Anomaly and magnetic pole positions are also examined.
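The defining equations referred to in this note are the standard spherical-harmonic expansion of the main-field potential. In the conventional notation, with a = 6371.2 km the geomagnetic reference radius, g_n^m and h_n^m the Gauss coefficients, P_n^m the Schmidt quasi-normalized associated Legendre functions, and N the truncation degree:

```latex
V(r,\theta,\phi,t) \;=\; a \sum_{n=1}^{N} \sum_{m=0}^{n}
  \left(\frac{a}{r}\right)^{n+1}
  \left[\, g_{n}^{m}(t)\cos(m\phi) + h_{n}^{m}(t)\sin(m\phi) \,\right]
  P_{n}^{m}(\cos\theta),
\qquad \mathbf{B} = -\nabla V
```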

Journal ArticleDOI
TL;DR: In this article, a review of reports of stellar initial mass function variations is presented, with a view toward whether other explanations are sufficient given the evidence, concluding that the vast majority were drawn from a universal system IMF: a power law of Salpeter index (Γ = 1.35) above a few solar masses, and a log normal or shallower power law (Γ ∼ 0–0.25) for lower mass stars.
Abstract: Whether the stellar initial mass function (IMF) is universal or is instead sensitive to environmental conditions is of critical importance: The IMF influences most observable properties of stellar populations and thus galaxies, and detecting variations in the IMF could provide deep insights into the star formation process. This review critically examines reports of IMF variations, with a view toward whether other explanations are sufficient given the evidence. Studies of the field, young clusters and associations, and old globular clusters suggest that the vast majority were drawn from a universal system IMF: a power law of Salpeter index (Γ = 1.35) above a few solar masses, and a log normal or shallower power law (Γ ∼ 0–0.25) for lower mass stars. The shape and universality of the substellar IMF is still under investigation. Observations of resolved stellar populations and the integrated properties of most galaxies are also consistent with a universal IMF, suggesting no gross variations over much of cosmic time.
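In the notation used above, Γ is the logarithmic slope of the IMF, so the high-mass Salpeter segment and the low-mass lognormal alternative correspond to the generic forms below; m_c and σ are generic lognormal parameters, not values quoted from the review.

```latex
\xi(\log m) \;\equiv\; \frac{\mathrm{d}N}{\mathrm{d}\log m} \;\propto\; m^{-\Gamma},
\qquad \Gamma = 1.35 \ \ (m \gtrsim \text{a few } M_{\odot})

\xi(\log m) \;\propto\;
  \exp\!\left[-\frac{(\log m - \log m_{c})^{2}}{2\sigma^{2}}\right]
\ \ (\text{lower masses, lognormal form})
```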

Journal ArticleDOI
TL;DR: It is shown that chronic and unpredictable maternal separation induces depressive-like behaviors and alters the behavioral response to aversive environments in the separated animals when adult, highlighting the negative impact of early stress on behavioral responses across generations and on the regulation of DNA methylation in the germline.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A generic objectness measure, quantifying how likely it is for an image window to contain an object of any class, is presented, combining in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary.
Abstract: We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. This includes an innovative cue measuring the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure [17], and the combined measure to perform better than any cue alone. Finally, we show how to sample windows from an image according to their objectness distribution and give an algorithm to employ them as location priors for modern class-specific object detectors. In experiments on PASCAL VOC 07 we show this greatly reduces the number of windows evaluated by class-specific object detectors.
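A toy sketch of the general recipe described above: combine several per-window cues in a naive-Bayes fashion into a single objectness score, then sample windows in proportion to that score to serve as location priors for a detector. The cue definitions, likelihood ratios, and prior below are invented placeholders, not the cues or learned distributions of the paper.

```python
# Toy naive-Bayes combination of per-window cues into an "objectness" score,
# followed by sampling windows proportionally to that score.
import numpy as np

rng = np.random.default_rng(0)

# pretend we scored 1000 candidate windows with three cues in [0, 1]
n_windows = 1000
cues = rng.random((n_windows, 3))   # e.g. colour contrast, edge density, saliency

def likelihood_ratio(cue_value):
    # placeholder: higher cue values are assumed more likely under "object"
    p_obj = 0.2 + 0.8 * cue_value
    p_bg = 1.0 - 0.5 * cue_value
    return p_obj / p_bg

prior_obj = 0.05                                   # assumed prior over windows
log_posterior = np.log(prior_obj) + np.log(likelihood_ratio(cues)).sum(axis=1)
objectness = np.exp(log_posterior)
objectness /= objectness.sum()                     # normalise over candidate windows

# sample 100 windows to pass to a class-specific detector as location priors
sampled = rng.choice(n_windows, size=100, replace=False, p=objectness)
print("top-scoring window index:", int(np.argmax(objectness)))
print("first 5 sampled window indices:", sampled[:5])
```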

Journal ArticleDOI
TL;DR: This article is not meant to give an exhaustive overview of all nanomaterials synthesized by the microwave technique, but to discuss the new opportunities that arise as a result of the unique features of microwave chemistry.
Abstract: This Feature Article gives an overview of microwave-assisted liquid phase routes to inorganic nanomaterials. Whereas microwave chemistry is a well-established technique in organic synthesis, its use in inorganic nanomaterials' synthesis is still at the beginning and far away from having reached its full potential. However, the rapidly growing number of publications in this field suggests that microwave chemistry will play an outstanding role in the broad field of Nanoscience and Nanotechnology. This article is not meant to give an exhaustive overview of all nanomaterials synthesized by the microwave technique, but to discuss the new opportunities that arise as a result of the unique features of microwave chemistry. Principles, advantages and limitations of microwave chemistry are introduced, its application in the synthesis of different classes of functional nanomaterials is discussed, and finally expected benefits for nanomaterials' synthesis are elaborated.

Journal ArticleDOI
TL;DR: Progress in identifying the enzymatic machinery required for the synthesis of amylopectin, the glucose polymer responsible for the insoluble nature of starch, is assessed.
Abstract: Starch is the most widespread and abundant storage carbohydrate in plants. We depend upon starch for our nutrition, exploit its unique properties in industry, and use it as a feedstock for bioethanol production. Here, we review recent advances in research in three key areas. First, we assess progress in identifying the enzymatic machinery required for the synthesis of amylopectin, the glucose polymer responsible for the insoluble nature of starch. Second, we discuss the pathways of starch degradation, focusing on the emerging role of transient glucan phosphorylation in plastids as a mechanism for solubilizing the surface of the starch granule. We contrast this pathway in leaves with the degradation of starch in the endosperm of germinated cereal seeds. Third, we consider the evolution of starch biosynthesis in plants from the ancestral ability to make glycogen. Finally, we discuss how this basic knowledge has been utilized to improve and diversify starch crops.

Journal ArticleDOI
TL;DR: GENIE, as described in this paper, is a large-scale software system consisting of ∼120 000 lines of C++ code, featuring a modern object-oriented design and extensively validated physics content.
Abstract: GENIE [1] is a new neutrino event generator for the experimental neutrino physics community. The goal of the project is to develop a ‘canonical’ neutrino interaction physics Monte Carlo whose validity extends to all nuclear targets and neutrino flavors from MeV to PeV energy scales. Currently, emphasis is on the few-GeV energy range, the challenging boundary between the non-perturbative and perturbative regimes, which is relevant for the current and near future long-baseline precision neutrino experiments using accelerator-made beams. The design of the package addresses many challenges unique to neutrino simulations and supports the full life-cycle of simulation and generator-related analysis tasks. GENIE is a large-scale software system, consisting of ∼120 000 lines of C++ code, featuring a modern object-oriented design and extensively validated physics content. The first official physics release of GENIE was made available in August 2007, and at the time of the writing of this article, the latest available version was v2.4.4.

Journal ArticleDOI
TL;DR: This review focuses on the cell biology of virus entry and the different strategies and endocytic mechanisms used by animal viruses.
Abstract: Although viruses are simple in structure and composition, their interactions with host cells are complex. Merely to gain entry, animal viruses make use of a repertoire of cellular processes that involve hundreds of cellular proteins. Although some viruses have the capacity to penetrate into the cytosol directly through the plasma membrane, most depend on endocytic uptake, vesicular transport through the cytoplasm, and delivery to endosomes and other intracellular organelles. The internalization may involve clathrin-mediated endocytosis (CME), macropinocytosis, caveolar/lipid raft-mediated endocytosis, or a variety of other still poorly characterized mechanisms. This review focuses on the cell biology of virus entry and the different strategies and endocytic mechanisms used by animal viruses.

Journal ArticleDOI
TL;DR: This article analyzed outbreaks of armed conflict as the result of competing ethnonationalist claims to state power and found that representatives of ethnic groups are more likely to initiate conflict with the government the more excluded from state power they are (especially if they have recently lost power), the higher their mobilizational capacity, and the more they have experienced conflict in the past.
Abstract: Much of the quantitative literature on civil wars and ethnic conflict ignores the role of the state or treats it as a mere arena for political competition among ethnic groups. Other studies analyze how the state grants or withholds minority rights and faces ethnic protest and rebellion accordingly, while largely overlooking the ethnic power configurations at the state's center. Drawing on a new data set on ethnic power relations (EPR) that identifies all politically relevant ethnic groups and their access to central state power around the world from 1946 through 2005, the authors analyze outbreaks of armed conflict as the result of competing ethnonationalist claims to state power. The findings indicate that representatives of ethnic groups are more likely to initiate conflict with the government (1) the more excluded from state power they are, especially if they have recently lost power, (2) the higher their mobilizational capacity, and (3) the more they have experienced conflict in the past.

Journal ArticleDOI
24 Jun 2010-Nature
TL;DR: Maximum-likelihood methods are used to test for Lévy patterns in relation to environmental gradients in the largest animal movement data set assembled for this purpose, and the results are consistent with the Lévy-flight foraging hypothesis, supporting the contention that organism search strategies naturally evolved in such a way that they exploit optimal Lévy patterns.
Abstract: An optimal search theory, the so-called Levy-flight foraging hypothesis1, predicts that predators should adopt search strategies known as Levy flights where prey is sparse and distributed unpredictably, but that Brownian movement is sufficiently efficient for locating abundant prey2, 3, 4. Empirical studies have generated controversy because the accuracy of statistical methods that have been used to identify Levy behaviour has recently been questioned5, 6. Consequently, whether foragers exhibit Levy flights in the wild remains unclear. Crucially, moreover, it has not been tested whether observed movement patterns across natural landscapes having different expected resource distributions conform to the theory’s central predictions. Here we use maximum-likelihood methods to test for Levy patterns in relation to environmental gradients in the largest animal movement data set assembled for this purpose. Strong support was found for Levy search patterns across 14 species of open-ocean predatory fish (sharks, tuna, billfish and ocean sunfish), with some individuals switching between Levy and Brownian movement as they traversed different habitat types. We tested the spatial occurrence of these two principal patterns and found Levy behaviour to be associated with less productive waters (sparser prey) and Brownian movements to be associated with productive shelf or convergence-front habitats (abundant prey). These results are consistent with the Levy-flight foraging hypothesis1, 7, supporting the contention8, 9 that organism search strategies naturally evolved in such a way that they exploit optimal Levy patterns.
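A minimal sketch of the kind of likelihood-based model comparison described above: fitting a power-law (Lévy-like) and an exponential (Brownian-like) distribution to step lengths above a minimum scale and comparing them by AIC. The simulated data, the choice of xmin, and the specific functional forms are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: maximum-likelihood comparison of a power-law (Levy-like) versus an
# exponential (Brownian-like) model for step lengths x >= xmin, compared by AIC.
import numpy as np

rng = np.random.default_rng(42)
xmin = 1.0

# simulate step lengths from a Pareto (power-law) distribution with exponent mu = 2
mu_true = 2.0
steps = xmin * (1.0 - rng.random(500)) ** (-1.0 / (mu_true - 1.0))
n = steps.size

# power-law model: p(x) = (mu - 1) * xmin**(mu - 1) * x**(-mu)
mu_hat = 1.0 + n / np.log(steps / xmin).sum()
loglik_pl = (n * np.log(mu_hat - 1.0)
             + n * (mu_hat - 1.0) * np.log(xmin)
             - mu_hat * np.log(steps).sum())

# exponential model: p(x) = lam * exp(-lam * (x - xmin))
lam_hat = 1.0 / (steps.mean() - xmin)
loglik_exp = n * np.log(lam_hat) - lam_hat * (steps - xmin).sum()

aic_pl = 2 * 1 - 2 * loglik_pl      # each model has one free parameter
aic_exp = 2 * 1 - 2 * loglik_exp
best = "power law (Levy-like)" if aic_pl < aic_exp else "exponential (Brownian-like)"
print(f"mu_hat = {mu_hat:.2f}, AIC power law = {aic_pl:.1f}, AIC exponential = {aic_exp:.1f}")
print("preferred model:", best)
```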

Journal ArticleDOI
TL;DR: In this paper, a set of high-resolution regional climate simulations reveals consistent geographical patterns in these changes, with the most severe health impacts in southern European river basins and along the Mediterranean coasts.
Abstract: Climate-change projections suggest that European summer heatwaves will become more frequent and severe during this century. An analysis of a set of high-resolution regional climate simulations reveals consistent geographical patterns in these changes, with the most severe health impacts in southern European river basins and along the Mediterranean coasts. Climate-change projections suggest that European summer heatwaves will become more frequent and severe during this century1,2,3,4, consistent with the observed trend of the past decades5,6. The most severe impacts arise from multi-day heatwaves, associated with warm night-time temperatures and high relative humidity. Here we analyse a set of high-resolution regional climate simulations and show that there is a geographically consistent pattern among climate models: we project the most pronounced changes to occur in southernmost Europe for heatwave frequency and duration, further north for heatwave amplitude and in low-altitude southern European regions for health-related indicators. For the Iberian peninsula and the Mediterranean region, the frequency of heatwave days is projected to increase from an average of about two days per summer for the period 1961–1990 to around 13 days for 2021–2050 and 40 days for 2071–2100. In terms of health impacts, our projections are most severe for low-altitude river basins in southern Europe and for the Mediterranean coasts, affecting many densely populated urban centres. We find that in these locations, the frequency of dangerous heat conditions also increases significantly faster and more strongly, and that the associated geographical pattern is robust across different models and health indicators.