
Showing papers by "Karlsruhe Institute of Technology" published in 2016


Journal ArticleDOI
18 Oct 2016-PeerJ
TL;DR: VSEARCH is here shown to be more accurate than USEARCH when performing searching, clustering, chimera detection and subsampling, while on a par with USEARCH for paired-end read merging and dereplication.
Abstract: Background: VSEARCH is an open source and free of charge multithreaded 64-bit tool for processing and preparing metagenomics, genomics and population genomics nucleotide sequence data. It is designed as an alternative to the widely used USEARCH tool (Edgar, 2010) for which the source code is not publicly available, algorithm details are only rudimentarily described, and only a memory-confined 32-bit version is freely available for academic use. Methods: When searching nucleotide sequences, VSEARCH uses a fast heuristic based on words shared by the query and target sequences in order to quickly identify similar sequences, a similar strategy is probably used in USEARCH. VSEARCH then performs optimal global sequence alignment of the query against potential target sequences, using full dynamic programming instead of the seed-and-extend heuristic used by USEARCH. Pairwise alignments are computed in parallel using vectorisation and multiple threads. Results: VSEARCH includes most commands for analysing nucleotide sequences available in USEARCH version 7 and several of those available in USEARCH version 8, including searching (exact or based on global alignment), clustering by similarity (using length pre-sorting, abundance pre-sorting or a user-defined order), chimera detection (reference-based or de novo), dereplication (full length or prefix), pairwise alignment, reverse complementation, sorting, and subsampling. VSEARCH also includes commands for FASTQ file processing, i.e., format detection, filtering, read quality statistics, and merging of paired reads. Furthermore, VSEARCH extends functionality with several new commands and improvements, including shuffling, rereplication, masking of low-complexity sequences with the well-known DUST algorithm, a choice among different similarity definitions, and FASTQ file format conversion. 
VSEARCH is here shown to be more accurate than USEARCH when performing searching, clustering, chimera detection and subsampling, while on a par with USEARCH for paired-end read merging. VSEARCH is slower than USEARCH when performing clustering and chimera detection, but significantly faster when performing paired-end read merging and dereplication. VSEARCH is available at https://github.com/torognes/vsearch under either the BSD 2-clause license or the GNU General Public License version 3.0. Discussion: VSEARCH has been shown to be a fast, accurate and full-fledged alternative to USEARCH. A free and open-source versatile tool for sequence analysis is now available to the metagenomics community.
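The two-stage search strategy described above, a word-sharing heuristic to shortlist candidate targets followed by full dynamic-programming global alignment, can be sketched in a few lines of Python. This is an illustrative toy, not VSEARCH's implementation; the word length k and the scoring parameters are arbitrary choices, not the tool's defaults.

```python
def kmer_set(seq, k=8):
    # Set of all length-k words in a nucleotide sequence. k=8 is an
    # illustrative choice, not necessarily VSEARCH's default word length.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def rank_targets(query, targets, k=8):
    # Heuristic step: rank candidate targets by the number of words they
    # share with the query, so only the best few need a full alignment.
    q = kmer_set(query, k)
    return sorted(targets, key=lambda t: len(q & kmer_set(t, k)), reverse=True)

def global_align_score(a, b, match=2, mismatch=-4, gap=-2):
    # Verification step: optimal global alignment score by full dynamic
    # programming (Needleman-Wunsch), as opposed to a seed-and-extend
    # heuristic. Scoring parameters are illustrative.
    m, n = len(a), len(b)
    prev = [j * gap for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [i * gap] + [0] * n
        for j in range(1, n + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(diag, prev[j] + gap, cur[j - 1] + gap)
        prev = cur
    return prev[n]
```

Ranking by shared words confines the expensive O(mn) alignment to the most promising targets, which is the point of the heuristic.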

5,850 citations


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4, +2519 more (695 institutions)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macro-autophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: In this article, the authors note that solid-state batteries have recently attracted great interest as potentially safe and stable high-energy storage systems, but that key issues remain unsolved, hindering full-scale commercialization.
Abstract: Solid-state batteries have recently attracted great interest as potentially safe and stable high-energy storage systems. However, key issues remain unsolved, hindering full-scale commercialization.

2,071 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared the available electrolysis and methanation technologies with respect to the stringent requirements of the power-to-gas (PtG) chain such as low CAPEX, high efficiency, and high flexibility.

1,841 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used three long-term satellite leaf area index (LAI) records and ten global ecosystem models to investigate four key drivers of LAI trends during 1982-2009.
Abstract: Global environmental change is rapidly altering the dynamics of terrestrial vegetation, with consequences for the functioning of the Earth system and provision of ecosystem services(1,2). Yet how global vegetation is responding to the changing environment is not well established. Here we use three long-term satellite leaf area index (LAI) records and ten global ecosystem models to investigate four key drivers of LAI trends during 1982-2009. We show a persistent and widespread increase of growing season integrated LAI (greening) over 25% to 50% of the global vegetated area, whereas less than 4% of the globe shows decreasing LAI (browning). Factorial simulations with multiple global ecosystem models suggest that CO2 fertilization effects explain 70% of the observed greening trend, followed by nitrogen deposition (9%), climate change (8%) and land cover change (LCC) (4%). CO2 fertilization effects explain most of the greening trends in the tropics, whereas climate change resulted in greening of the high latitudes and the Tibetan Plateau. LCC contributed most to the regional greening observed in southeast China and the eastern United States. The regional effects of unexplained factors suggest that the next generation of ecosystem models will need to explore the impacts of forest demography, differences in regional management intensities for cropland and pastures, and other emerging productivity constraints such as phosphorus availability.

1,534 citations


Journal ArticleDOI
University of East Anglia1, University of Oslo2, Commonwealth Scientific and Industrial Research Organisation3, University of Exeter4, Oak Ridge National Laboratory5, National Oceanic and Atmospheric Administration6, Woods Hole Research Center7, University of California, San Diego8, Karlsruhe Institute of Technology9, Cooperative Institute for Marine and Atmospheric Studies10, Centre national de la recherche scientifique11, University of Maryland, College Park12, National Institute of Water and Atmospheric Research13, Woods Hole Oceanographic Institution14, Flanders Marine Institute15, Alfred Wegener Institute for Polar and Marine Research16, Netherlands Environmental Assessment Agency17, University of Illinois at Urbana–Champaign18, Leibniz Institute of Marine Sciences19, Max Planck Society20, University of Paris21, Hobart Corporation22, University of Bern23, Oeschger Centre for Climate Change Research24, National Center for Atmospheric Research25, University of Miami26, Council of Scientific and Industrial Research27, University of Colorado Boulder28, National Institute for Environmental Studies29, Joint Institute for the Study of the Atmosphere and Ocean30, Geophysical Institute, University of Bergen31, Goddard Space Flight Center32, Montana State University33, University of New Hampshire34, Bjerknes Centre for Climate Research35, Imperial College London36, Lamont–Doherty Earth Observatory37, Auburn University38, Wageningen University and Research Centre39, VU University Amsterdam40, Met Office41
TL;DR: In this article, the authors quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community.
Abstract: . Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the “global carbon budget” – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates and consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models. We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. 
For the last decade available (2006–2015), EFF was 9.3 ± 0.5 GtC yr−1, ELUC 1.0 ± 0.5 GtC yr−1, GATM 4.5 ± 0.1 GtC yr−1, SOCEAN 2.6 ± 0.5 GtC yr−1, and SLAND 3.1 ± 0.9 GtC yr−1. For year 2015 alone, the growth in EFF was approximately zero and emissions remained at 9.9 ± 0.5 GtC yr−1, showing a slowdown in growth of these emissions compared to the average growth of 1.8 % yr−1 that took place during 2006–2015. Also, for 2015, ELUC was 1.3 ± 0.5 GtC yr−1, GATM was 6.3 ± 0.2 GtC yr−1, SOCEAN was 3.0 ± 0.5 GtC yr−1, and SLAND was 1.9 ± 0.9 GtC yr−1. GATM was higher in 2015 compared to the past decade (2006–2015), reflecting a smaller SLAND for that year. The global atmospheric CO2 concentration reached 399.4 ± 0.1 ppm averaged over 2015. For 2016, preliminary data indicate the continuation of low growth in EFF with +0.2 % (range of −1.0 to +1.8 %) based on national emissions projections for China and USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. In spite of the low growth of EFF in 2016, the growth rate in atmospheric CO2 concentration is expected to be relatively high because of the persistence of the smaller residual terrestrial sink (SLAND) in response to El Niño conditions of 2015–2016. From this projection of EFF and assumed constant ELUC for 2016, cumulative emissions of CO2 will reach 565 ± 55 GtC (2075 ± 205 GtCO2) for 1870–2016, about 75 % from EFF and 25 % from ELUC. This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015b, a, 2014, 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2016).
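The bookkeeping behind these numbers is simple arithmetic: sources (EFF + ELUC) must balance sinks (GATM + SOCEAN + SLAND), so the residual land sink follows by difference, as the methods section above describes. A minimal sketch using the 2015 values quoted above (the function name is ours):

```python
def residual_land_sink(e_ff, e_luc, g_atm, s_ocean):
    # S_LAND by difference: E_FF + E_LUC = G_ATM + S_OCEAN + S_LAND
    return e_ff + e_luc - g_atm - s_ocean

# 2015 values from the text, all in GtC per year
s_land_2015 = residual_land_sink(e_ff=9.9, e_luc=1.3, g_atm=6.3, s_ocean=3.0)
# consistent with the reported SLAND of 1.9 GtC per year for 2015
```

Note that with the rounded decadal means (9.3, 1.0, 4.5, 2.6) the same difference gives 3.2 rather than the reported 3.1 GtC yr−1, a 0.1 GtC yr−1 gap attributable to rounding of the published terms.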

1,224 citations


Journal ArticleDOI
15 Feb 2016-Fuel
TL;DR: A comprehensive overview of methanation research conducted during the last century is presented in this paper, where application-oriented research focusing on reactor developments, reactor modeling, and pilot plant investigation is reviewed.

973 citations


Book ChapterDOI
01 Jan 2016
TL;DR: In this article, the authors survey recent advances in algorithms for route planning in transportation networks and show that one can compute driving directions in milliseconds or less even at continental scale for road networks; some algorithms can answer queries in a fraction of a microsecond, while others can deal efficiently with real-time traffic.
Abstract: We survey recent advances in algorithms for route planning in transportation networks. For road networks, we show that one can compute driving directions in milliseconds or less even at continental scale. A variety of techniques provide different trade-offs between preprocessing effort, space requirements, and query time. Some algorithms can answer queries in a fraction of a microsecond, while others can deal efficiently with real-time traffic. Journey planning on public transportation systems, although conceptually similar, is a significantly harder problem due to its inherent time-dependent and multicriteria nature. Although exact algorithms are fast enough for interactive queries on metropolitan transit systems, dealing with continent-sized instances requires simplifications or heavy preprocessing. The multimodal route planning problem, which seeks journeys combining schedule-based transportation (buses, trains) with unrestricted modes (walking, driving), is even harder, relying on approximate solutions even for metropolitan inputs.
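For context, the baseline that the surveyed speed-up techniques build on is Dijkstra's algorithm; preprocessing-based methods such as contraction hierarchies and hub labels answer the same shortest-path queries orders of magnitude faster. A minimal Python sketch of that baseline (the adjacency-list representation is our choice, not taken from the survey):

```python
import heapq

def dijkstra(graph, source):
    # Textbook Dijkstra with a binary heap. graph maps each node to a
    # list of (neighbor, edge_weight) pairs; returns shortest distances
    # from source to every reachable node.
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was found already
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

On continental road networks this runs in seconds, which is exactly why the preprocessing techniques discussed in the survey matter.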

618 citations


Journal ArticleDOI
TL;DR: A major new release of the Monte Carlo event generator Herwig++ (version 3.0) is presented in this paper; it marks the end of separate Herwig++ and HERWIG development and constitutes the first major release of version 7 of the Herwig event generator family.
Abstract: A major new release of the Monte Carlo event generator Herwig++ (version 3.0) is now available. This release marks the end of distinguishing Herwig++ and HERWIG development and therefore constitutes the first major release of version 7 of the Herwig event generator family. The new version features a number of significant improvements to the event simulation, including: built-in NLO hard process calculation for virtually all Standard Model processes, with matching to both angular-ordered and dipole shower modules via both subtractive (MC@NLO-type) and multiplicative (Powheg-type) algorithms; QED radiation and spin correlations in the angular-ordered shower; a consistent treatment of perturbative uncertainties within the hard process and parton showering. Several of the new features will be covered in detail in accompanying publications, and an update of the manual will follow in due course.

599 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown through a combination of surface spectroscopy and cyclic voltammetry studies that only materials with redox potentials in a targeted window react with polysulfides to form active surface bound polythionate species.
Abstract: The lithium-sulfur battery is a compelling energy storage system because its high theoretical energy density exceeds Li-ion batteries at much lower cost, but applications are thwarted by capacity decay caused by the polysulfide shuttle. Here, proof of concept and the critical metrics of a strategy to entrap polysulfides within the sulfur cathode by their reaction to form a surface-bound active redox mediator are demonstrated. It is shown through a combination of surface spectroscopy and cyclic voltammetry studies that only materials with redox potentials in a targeted window react with polysulfides to form active surface-bound polythionate species. These species are directly correlated to superior Li-S cell performance by electrochemical studies of high surface area oxide cathodes with redox potentials below, above, and within this window. Optimized Li-S cells yield a very low fade rate of 0.048% per cycle. The insight gained into the fundamental surface mechanism and its correlation to the stability of the electrochemical cell provides a bridge between mechanistic understanding and battery performance essential for the design of high performance Li-S cells.
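To put the quoted fade rate in perspective: if the 0.048% per-cycle fade is assumed to compound geometrically (our assumption; fade rates are often reported as a linear average over the cycling window instead), capacity retention after N cycles is (1 - 0.00048)^N:

```python
def retention_after(cycles, fade_per_cycle=0.00048):
    # Capacity retention if the quoted 0.048% per-cycle fade compounds
    # geometrically, an assumption on our part; the paper may report a
    # linear average fade instead.
    return (1.0 - fade_per_cycle) ** cycles

r500 = retention_after(500)  # roughly 0.79 after 500 cycles
```

Under that assumption roughly 79% of the initial capacity would remain after 500 cycles.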

590 citations


Journal ArticleDOI
TL;DR: The evolution of the various aluminum systems is described, starting from those based on aqueous electrolytes to, in more detail, those based on non-aqueous electrolytes, attempting to forecast their chances of reaching the status of practical energy storage systems.
Abstract: A critical overview of the latest developments in the aluminum battery technologies is reported. The substitution of lithium with alternative metal anodes characterized by lower cost and higher abundance is nowadays one of the most widely explored paths to reduce the cost of electrochemical storage systems and enable long-term sustainability. Aluminum-based secondary batteries could be a viable alternative to the present Li-ion technology because of their high volumetric capacity (8040 mAh cm−3 for Al vs 2046 mAh cm−3 for Li). Additionally, the low cost of aluminum makes these batteries appealing for large-scale electrical energy storage. Here, we describe the evolution of the various aluminum systems, starting from those based on aqueous electrolytes to, in more detail, those based on non-aqueous electrolytes. Particular attention has been dedicated to the latest development of electrolytic media characterized by low reactivity towards other cell components. The attention is then focused on electrode materials enabling the reversible aluminum intercalation-deintercalation process. Finally, we touch on the topic of high-capacity aluminum-sulfur batteries, attempting to forecast their chances of reaching the status of practical energy storage systems.

Journal ArticleDOI
TL;DR: The state-of-the-art understanding of these global change pressures on soils is reported, knowledge gaps and research challenges are identified, and actions and policies to minimize adverse environmental impacts arising from these global change drivers are highlighted.
Abstract: Soils are subject to varying degrees of direct or indirect human disturbance, constituting a major global change driver. Factoring out natural from direct and indirect human influence is not always straightforward, but some human activities have clear impacts. These include land-use change, land management and land degradation (erosion, compaction, sealing and salinization). The intensity of land use also exerts a great impact on soils, and soils are also subject to indirect impacts arising from human activity, such as acid deposition (sulphur and nitrogen) and heavy metal pollution. In this critical review, we report the state-of-the-art understanding of these global change pressures on soils, identify knowledge gaps and research challenges and highlight actions and policies to minimize adverse environmental impacts arising from these global change drivers. Soils are central to considerations of what constitutes sustainable intensification. Therefore, ensuring that vulnerable and high environmental value soils are considered when protecting important habitats and ecosystems, will help to reduce the pressure on land from global change drivers. To ensure that soils are protected as part of wider environmental efforts, a global soil resilience programme should be considered, to monitor, recover or sustain soil fertility and function, and to enhance the ecosystem services provided by soils. Soils cannot, and should not, be considered in isolation of the ecosystems that they underpin and vice versa. The role of soils in supporting ecosystems and natural capital needs greater recognition. The lasting legacy of the International Year of Soils in 2015 should be to put soils at the centre of policy supporting environmental protection and sustainable development.

Journal ArticleDOI
TL;DR: It is recommended that future research efforts focus more strongly on the causal understanding of why tree species classification approaches work under certain conditions or, perhaps even more importantly, why they do not work in other cases, as this might require more complex field acquisitions than those typically used in the reviewed studies.

Journal ArticleDOI
TL;DR: In this article, the authors estimated that between 1995 and 2005, the livestock sector was responsible for greenhouse gas emissions of 5.6–7.5 Gt CO2e yr−1.
Abstract: The livestock sector supports about 1.3 billion producers and retailers, and contributes 40–50% of agricultural GDP. We estimated that between 1995 and 2005, the livestock sector was responsible for greenhouse gas emissions of 5.6–7.5 Gt CO2e yr−1. Livestock accounts for up to half of the technical mitigation potential of the agriculture, forestry and land-use sectors, through management options that sustainably intensify livestock production, promote carbon sequestration in rangelands and reduce emissions from manures, and through reductions in the demand for livestock products. The economic potential of these management alternatives is less than 10% of what is technically possible because of adoption constraints, costs and numerous trade-offs. The mitigation potential of reductions in livestock product consumption is large, but their economic potential is unknown at present. More research and investment are needed to increase the affordability and adoption of mitigation practices, to moderate consumption of livestock products where appropriate, and to avoid negative impacts on livelihoods, economic activities and the environment.

Journal ArticleDOI
Jelle Aalbers1, F. Agostini2, M. Alfonsi3, F. D. Amaro4, Claude Amsler5, Elena Aprile6, Lior Arazi7, F. Arneodo8, P. Barrow9, Laura Baudis1, Laura Baudis9, M. L. Benabderrahmane8, T. Berger10, B. Beskers3, Amos Breskin7, P. A. Breur1, April S. Brown1, Ethan Brown10, S. Bruenner11, Giacomo Bruno, Ran Budnik7, Lukas Bütikofer5, J. Calvén12, João Cardoso4, D. Cichon11, D. Coderre5, Auke-Pieter Colijn1, Jan Conrad12, Jean-Pierre Cussonneau13, M. P. Decowski1, Sara Diglio13, Guido Drexlin14, Ehud Duchovni7, E. Erdal7, G. Eurin11, A. D. Ferella12, A. Fieguth15, W. Fulgione, A. Gallo Rosso, P. Di Gangi2, A. Di Giovanni8, Michelle Galloway9, M. Garbini2, C. Geis3, F. Glueck14, L. Grandi16, Z. Greene6, C. Grignon3, C. Hasterok11, Volker Hannen15, E. Hogenbirk1, J. Howlett6, D. Hilk14, C. Hils3, A. James9, B. Kaminsky5, Shingo Kazama9, Benjamin Kilminster9, A. Kish9, Lawrence M. Krauss17, H. Landsman7, R. F. Lang18, Qing Lin6, F. L. Linde1, Sebastian Lindemann11, Manfred Lindner11, J. A. M. Lopes4, Marrodan T. Undagoitia11, Julien Masbou13, F. V. Massoli2, D. Mayani9, M. Messina6, K. Micheneau13, A. Molinario, K. Morå12, E. Morteau13, M. Murra15, J. Naganoma19, Jayden L. Newstead17, Kaixuan Ni20, Uwe Oberlack3, P. Pakarha9, Bart Pelssers12, P. de Perio6, R. Persiani13, F. Piastra9, M.-C. Piro10, G. Plante6, L. Rauch11, S. Reichard18, A. Rizzo6, N. Rupp11, J.M.F. dos Santos4, G. Sartorelli2, M. Scheibelhut3, S. Schindler3, Marc Schumann5, Marc Schumann21, Jochen Schreiner11, L. Scotto Lavina13, M. Selvi2, P. Shagin19, Miguel Silva4, Hardy Simgen11, P. Sissol3, M. von Sivers5, D. Thers13, J. Thurn22, A. Tiseni1, Roberto Trotta23, C. Tunnell1, Kathrin Valerius14, M. Vargas15, Hongwei Wang24, Yuehuan Wei9, Ch. Weinheimer15, T. Wester22, J. Wulf9, Yanxi Zhang6, T. Zhu9, Kai Zuber22 
TL;DR: DARk matter WImp search with liquid xenoN (DARWIN) as mentioned in this paper is an experiment for the direct detection of dark matter using a multi-ton liquid xenon time projection chamber at its core.
Abstract: DARk matter WImp search with liquid xenoN (DARWIN) will be an experiment for the direct detection of dark matter using a multi-ton liquid xenon time projection chamber at its core. Its primary g ...

Journal ArticleDOI
TL;DR: This Review focuses on ternary polymer electrolytes, that is, ion-conducting systems consisting of a polymer incorporating two salts, one bearing the lithium cation and the other introducing additional anions capable of plasticizing the polymer chains.
Abstract: The advent of solid-state polymer electrolytes for application in lithium batteries took place more than four decades ago when the ability of polyethylene oxide (PEO) to dissolve suitable lithium salts was demonstrated. Since then, many modifications of this basic system have been proposed and tested, involving the addition of conventional, carbonate-based electrolytes, low molecular weight polymers, ceramic fillers, and others. This Review focuses on ternary polymer electrolytes, that is, ion-conducting systems consisting of a polymer incorporating two salts, one bearing the lithium cation and the other introducing additional anions capable of plasticizing the polymer chains. Assessing the state of the research field of solid-state, ternary polymer electrolytes, while giving background on the whole field of polymer electrolytes, this Review is expected to stimulate new thoughts and ideas on the challenges and opportunities of lithium-metal batteries.

Journal ArticleDOI
TL;DR: In this article, the authors developed and analyzed a global soil visible-near infrared (vis-NIR) spectral library, which is currently the largest and most diverse database of its kind, and showed that the information encoded in the spectra can describe soil composition and be associated to land cover and its global geographic distribution, which acts as a surrogate for global climate variability.

Proceedings ArticleDOI
01 Jun 2016
TL;DR: The MovieQA dataset as discussed by the authors consists of 14,944 questions about 408 movies with high semantic diversity, ranging from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events occurred.
Abstract: We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events occurred. Each question comes with a set of five possible answers, a correct one and four deceiving answers provided by human annotators. Our dataset is unique in that it contains multiple sources of information – video clips, plots, subtitles, scripts, and DVS [32]. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain.
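Since each MovieQA question comes with exactly five candidate answers, the natural evaluation metric is plain accuracy, with a random-guess baseline of about 20%. A minimal scorer (names are ours, not the benchmark's official evaluation code):

```python
def accuracy(predictions, answers):
    # Fraction of questions answered correctly. predictions and answers
    # are parallel lists of chosen answer indices (0-4 here, since each
    # question has five candidates); random guessing scores about 0.20.
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)
```

A model is only interesting on this benchmark insofar as it clears the 0.20 chance level by a solid margin.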

Book ChapterDOI
01 Nov 2016
TL;DR: In this article, the authors survey recent trends in practical algorithms for balanced graph partitioning, point to applications, and discuss future research directions.
Abstract: We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.

Journal ArticleDOI
Jasmin Tröstl1, Wayne Chuang2, Hamish Gordon3, Martin Heinritzi4, Chao Yan5, Ugo Molteni1, Lars Ahlm6, Carla Frege1, F. Bianchi5, F. Bianchi1, F. Bianchi7, Robert Wagner5, Mario Simon4, Katrianne Lehtipalo5, Katrianne Lehtipalo1, Christina Williamson4, Christina Williamson8, Christina Williamson9, J. S. Craven10, Jonathan Duplissy11, Jonathan Duplissy5, Alexey Adamov5, Joao Almeida3, Anne-Kathrin Bernhammer12, Martin Breitenlechner12, Sophia Brilke4, Antonio Dias3, Sebastian Ehrhart3, Richard C. Flagan10, Alessandro Franchin5, Claudia Fuchs1, Roberto Guida3, Martin Gysel1, Armin Hansel12, Christopher R. Hoyle1, Tuija Jokinen5, Heikki Junninen5, Juha Kangasluoma5, Helmi Keskinen8, Helmi Keskinen13, Helmi Keskinen5, Jaeseok Kim13, Jaeseok Kim8, Manuel Krapf1, Andreas Kürten4, Ari Laaksonen13, Ari Laaksonen14, Michael J. Lawler13, Michael J. Lawler15, Markus Leiminger4, Serge Mathot3, Ottmar Möhler16, Tuomo Nieminen11, Tuomo Nieminen5, Antti Onnela3, Tuukka Petäjä5, Felix Piel4, Pasi Miettinen13, Matti P. Rissanen5, Linda Rondo4, Nina Sarnela5, Siegfried Schobesberger5, Siegfried Schobesberger8, Kamalika Sengupta17, Mikko Sipilä5, James N. Smith18, James N. Smith13, Gerhard Steiner12, Gerhard Steiner19, Gerhard Steiner5, António Tomé20, Annele Virtanen13, Andrea Christine Wagner4, Ernest Weingartner8, Ernest Weingartner1, Daniela Wimmer4, Daniela Wimmer5, Paul M. Winkler19, Penglin Ye2, Kenneth S. Carslaw17, Joachim Curtius4, Josef Dommen1, Jasper Kirkby3, Jasper Kirkby4, Markku Kulmala5, Ilona Riipinen6, Douglas R. Worsnop11, Douglas R. Worsnop5, Neil M. Donahue5, Neil M. Donahue2, Urs Baltensperger1 
26 May 2016-Nature
TL;DR: It is shown that organic vapours alone can drive nucleation; a particle growth model is presented that quantitatively reproduces the measurements, and a parameterization of the first steps of growth is implemented in a global aerosol model, in which concentrations of atmospheric cloud condensation nuclei can change substantially in response.
Abstract: About half of present-day cloud condensation nuclei originate from atmospheric nucleation, frequently appearing as a burst of new particles near midday. Atmospheric observations show that the growth rate of new particles often accelerates when the diameter of the particles is between one and ten nanometres. In this critical size range, new particles are most likely to be lost by coagulation with pre-existing particles, thereby failing to form new cloud condensation nuclei, which are typically 50 to 100 nanometres across. Sulfuric acid vapour is often involved in nucleation but is too scarce to explain most subsequent growth, leaving organic vapours as the most plausible alternative, at least in the planetary boundary layer. Although recent studies predict that low-volatility organic vapours contribute during initial growth, direct evidence has been lacking. The accelerating growth may result from increased photolytic production of condensable organic species in the afternoon, and the role of a possible Kelvin (curvature) effect, which inhibits organic vapour condensation on the smallest particles (the nano-Köhler theory), has so far remained ambiguous. Here we present experiments performed in a large chamber under atmospheric conditions that investigate the role of organic vapours in the initial growth of nucleated organic particles in the absence of inorganic acids and bases such as sulfuric acid or ammonia and amines, respectively. Using data from the same set of experiments, it has been shown that organic vapours alone can drive nucleation. We focus on the growth of nucleated particles and find that the organic vapours that drive initial growth have extremely low volatilities (saturation concentration less than 10^(-4.5) micrograms per cubic metre). As the particles increase in size and the Kelvin barrier falls, subsequent growth is primarily due to more abundant organic vapours of slightly higher volatility (saturation concentrations of 10^(-4.5) to 10^(-0.5) micrograms per cubic metre). We present a particle growth model that quantitatively reproduces our measurements. Furthermore, we implement a parameterization of the first steps of growth in a global aerosol model and find that concentrations of atmospheric cloud condensation nuclei can change substantially in response, that is, by up to 50 per cent in comparison with previously assumed growth rate parameterizations.
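The Kelvin (curvature) barrier invoked above can be illustrated numerically. The sketch below evaluates the standard Kelvin factor exp(4σM/(ρRTd)), by which curvature raises the equilibrium vapour concentration over a particle of diameter d; the surface tension, molar mass, and density values are illustrative assumptions, not parameters from the study.

```python
import math


def kelvin_ratio(d_nm, sigma=0.03, molar_mass=0.2, rho=1400.0, temp=278.0):
    """Factor by which curvature raises the equilibrium vapour
    concentration over a particle of diameter d_nm (nanometres).

    sigma: surface tension (N/m), molar_mass: (kg/mol),
    rho: particle density (kg/m^3), temp: temperature (K).
    All default parameter values are illustrative assumptions.
    """
    gas_constant = 8.314  # J/(mol K)
    d_m = d_nm * 1e-9
    return math.exp(4.0 * sigma * molar_mass / (gas_constant * temp * rho * d_m))


# The curvature enhancement is large for ~2 nm particles and decays
# towards 1 as the particles grow past ~10 nm:
for d in (2.0, 5.0, 10.0, 50.0):
    print(f"{d:5.1f} nm -> Kelvin factor {kelvin_ratio(d):.2f}")
```

For particles of a few nanometres the factor is large, so only vapours of extremely low volatility can condense on them; as the factor decays towards 1 with growing size, more abundant vapours of slightly higher volatility can take over, consistent with the growth sequence described in the abstract.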

Journal ArticleDOI
25 Aug 2016-Nature
TL;DR: It is demonstrated that primary producers, herbivorous insects and microbial decomposers seem to be particularly important drivers of ecosystem functioning, as shown by the strong and frequent positive associations of their richness or abundance with multiple ecosystem services.
Abstract: Many experiments have shown that loss of biodiversity reduces the capacity of ecosystems to provide the multiple services on which humans depend. However, experiments necessarily simplify the complexity of natural ecosystems and will normally control for other important drivers of ecosystem functioning, such as the environment or land use. In addition, existing studies typically focus on the diversity of single trophic groups, neglecting the fact that biodiversity loss occurs across many taxa and that the functional effects of any trophic group may depend on the abundance and diversity of others. Here we report analysis of the relationships between the species richness and abundance of nine trophic groups, including 4,600 above- and below-ground taxa, and 14 ecosystem services and functions and with their simultaneous provision (or multifunctionality) in 150 grasslands. We show that high species richness in multiple trophic groups (multitrophic richness) had stronger positive effects on ecosystem services than richness in any individual trophic group; this includes plant species richness, the most widely used measure of biodiversity. On average, three trophic groups influenced each ecosystem service, with each trophic group influencing at least one service. Multitrophic richness was particularly beneficial for 'regulating' and 'cultural' services, and for multifunctionality, whereas a change in the total abundance of species or biomass in multiple trophic groups (the multitrophic abundance) positively affected supporting services. Multitrophic richness and abundance drove ecosystem functioning as strongly as abiotic conditions and land-use intensity, extending previous experimental results to real-world ecosystems. 
Primary producers, herbivorous insects and microbial decomposers seem to be particularly important drivers of ecosystem functioning, as shown by the strong and frequent positive associations of their richness or abundance with multiple ecosystem services. Our results show that multitrophic richness and abundance support ecosystem functioning, and demonstrate that a focus on single groups has led researchers to greatly underestimate the functional importance of biodiversity.

Journal ArticleDOI
TL;DR: Ionic liquids and their solid-state analogues, organic ionic plastic crystals, have recently emerged as important materials for renewable energy applications as discussed by the authors, and their application as electrolytes for batteries, capacitors, photovoltaics, fuel cells and CO2 reduction.
Abstract: Ionic liquids and their solid-state analogues, organic ionic plastic crystals, have recently emerged as important materials for renewable energy applications. This Review highlights recent advances in the synthesis of these materials and their application as electrolytes for batteries, capacitors, photovoltaics, fuel cells and CO2 reduction.

Journal ArticleDOI
M. Aguilar, L. Ali Cavasonza, Behcet Alpat, G. Ambrosi, and 265 more authors (39 institutions)
TL;DR: In the absolute rigidity range ∼60 to ∼500 GV, the antiproton p̄, proton p, and positron e⁺ fluxes are found to have nearly identical rigidity dependence, while the electron e⁻ flux exhibits a different rigidity dependence.
Abstract: A precision measurement by AMS of the antiproton flux and the antiproton-to-proton flux ratio in primary cosmic rays in the absolute rigidity range from 1 to 450 GV is presented based on 3.49 × 10^5 antiproton events and 2.42 × 10^9 proton events. The fluxes and flux ratios of charged elementary particles in cosmic rays are also presented. In the absolute rigidity range ∼60 to ∼500 GV, the antiproton p̄, proton p, and positron e⁺ fluxes are found to have nearly identical rigidity dependence and the electron e⁻ flux exhibits a different rigidity dependence. Below 60 GV, the (p̄/p), (p̄/e⁺), and (p/e⁺) flux ratios each reach a maximum. From ∼60 to ∼500 GV, the (p̄/p), (p̄/e⁺), and (p/e⁺) flux ratios show no rigidity dependence. These are new observations of the properties of elementary particles in the cosmos.
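The comparisons above are made as functions of magnetic rigidity R = pc/(Ze) rather than energy, because particles of equal rigidity follow the same trajectory in a given magnetic field. A minimal sketch of this standard definition (the function name is illustrative, not from the paper):

```python
def rigidity_GV(p_GeV_per_c, Z):
    """Magnetic rigidity R = pc/(Ze), expressed in gigavolts (GV),
    for a particle of momentum p (in GeV/c) and charge number Z."""
    if Z == 0:
        raise ValueError("neutral particles have undefined rigidity")
    return p_GeV_per_c / abs(Z)


# Singly charged species (p, p-bar, e+, e-) at equal momentum share the
# same rigidity, so their fluxes can be compared bin by bin in rigidity:
print(rigidity_GV(450.0, +1))  # proton at 450 GeV/c  -> 450.0 GV
print(rigidity_GV(450.0, -1))  # antiproton, same momentum -> 450.0 GV
print(rigidity_GV(100.0, +2))  # helium nucleus: half the rigidity -> 50.0 GV
```

This is why the abstract quotes all fluxes and flux ratios against an absolute rigidity range (1 to 450 GV) rather than kinetic energy.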

Journal ArticleDOI
TL;DR: It is demonstrated that pyrolysis of polymeric microlattices can overcome limitations and create ultra-strong glassy carbon nanolattices with single struts shorter than 1 μm and diameters as small as 200 nm, which represent the smallest lattice structures yet produced.
Abstract: The strength of lightweight mechanical metamaterials, which aim to exploit material-strengthening size effects by their microscale lattice structure, has been limited by the resolution of three-dimensional lithography technologies and their restriction to mainly polymer resins. Here, we demonstrate that pyrolysis of polymeric microlattices can overcome these limitations and create ultra-strong glassy carbon nanolattices with single struts shorter than 1 μm and diameters as small as 200 nm. They represent the smallest lattice structures yet produced (achieved by an 80% shrinkage of the polymer during pyrolysis) and exhibit material strengths of up to 3 GPa, corresponding approximately to the theoretical strength of glassy carbon. The strength-to-density ratios of the nanolattices are six times higher than those of reported microlattices. With a honeycomb topology, effective strengths of 1.2 GPa at 0.6 g cm^(-3) are achieved. Diamond is the only bulk material with a notably higher strength-to-density ratio.
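The strength-to-density figures quoted in the abstract reduce to simple arithmetic. The sketch below computes the specific strength of the honeycomb nanolattice from the abstract's numbers, alongside an assumed handbook-style comparison value for a conventional alloy (the alloy numbers are illustrative, not from the paper):

```python
def specific_strength(strength_GPa, density_g_cm3):
    """Strength-to-density ratio in MPa / (g/cm^3)."""
    return strength_GPa * 1000.0 / density_g_cm3


# Honeycomb glassy-carbon nanolattice, values quoted in the abstract:
honeycomb = specific_strength(1.2, 0.6)
print(honeycomb)  # 2000.0 MPa/(g/cm^3)

# Illustrative comparison with an assumed high-strength aluminium alloy
# (~0.5 GPa yield strength at ~2.7 g/cm^3; handbook-style numbers, not
# taken from the paper):
alloy = specific_strength(0.5, 2.7)
print(round(alloy, 1))
```

Even against such an optimistic bulk-alloy baseline, the honeycomb nanolattice's specific strength is roughly an order of magnitude higher, which is the sense in which only diamond notably exceeds it.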

Journal ArticleDOI
TL;DR: Here the complex supramolecular landscape of a platinum(II) compound is characterized fully and controlled successfully through a combination of supramolecular and photochemical approaches.
Abstract: The self-assembly of chemical entities represents a very attractive way to create a large variety of ordered functional structures and complex matter. Although much effort has been devoted to the preparation of supramolecular nanostructures based on different chemical building blocks, an understanding of the mechanisms at play and the ability to monitor assembly processes and, in turn, control them are often elusive, which precludes a deep and comprehensive control of the final structures. Here the complex supramolecular landscape of a platinum(II) compound is characterized fully and controlled successfully through a combination of supramolecular and photochemical approaches. The supramolecular assemblies comprise two kinetic assemblies and their thermodynamic counterpart. The monitoring of the different emission properties of the aggregates, used as a fingerprint for each species, allows the real-time visualization of the evolving self-assemblies. The control of multiple supramolecular pathways will help the design of complex systems in and out of their thermodynamic equilibrium.

Journal ArticleDOI
24 Nov 2016-Nature
TL;DR: Atomistic simulations reproduce the experimental observations of layer-dependent friction and transient frictional strengthening on graphene and reveal that the evolution of static friction is a manifestation of the natural tendency for thinner and less-constrained graphene to re-adjust its configuration as a direct consequence of its greater flexibility.
Abstract: Graphite and other lamellar materials are used as dry lubricants for macroscale metallic sliding components and high-pressure contacts. It has been shown experimentally that monolayer graphene exhibits higher friction than multilayer graphene and graphite, and that this friction increases with continued sliding, but the mechanism behind this remains subject to debate. It has long been conjectured that the true contact area between two rough bodies controls interfacial friction. The true contact area, defined for example by the number of atoms within the range of interatomic forces, is difficult to visualize directly but characterizes the quantity of contact. However, there is emerging evidence that, for a given pair of materials, the quality of the contact can change, and that this can also strongly affect interfacial friction. Recently, it has been found that the frictional behaviour of two-dimensional materials exhibits traits unlike those of conventional bulk materials. This includes the abovementioned finding that for few-layer two-dimensional materials the static friction force gradually strengthens for a few initial atomic periods before reaching a constant value. Such transient behaviour, and the associated enhancement of steady-state friction, diminishes as the number of two-dimensional layers increases, and was observed only when the two-dimensional material was loosely adhering to a substrate. This layer-dependent transient phenomenon has not been captured by any simulations. Here, using atomistic simulations, we reproduce the experimental observations of layer-dependent friction and transient frictional strengthening on graphene. Atomic force analysis reveals that the evolution of static friction is a manifestation of the natural tendency for thinner and less-constrained graphene to re-adjust its configuration as a direct consequence of its greater flexibility. 
That is, the tip atoms become more strongly pinned, and show greater synchrony in their stick-slip behaviour. While the quantity of atomic-scale contacts (true contact area) evolves, the quality (in this case, the local pinning state of individual atoms and the overall commensurability) also evolves in frictional sliding on graphene. Moreover, the effects can be tuned by pre-wrinkling. The evolving contact quality is critical for explaining the time-dependent friction of configurationally flexible interfaces.

Journal ArticleDOI
TL;DR: A critical overview is given on essential noncovalent interactions in synthetic supramolecular complexes, accompanied by analyses with selected proteins, and promises and limitations of these strategies are discussed.
Abstract: On the basis of many literature measurements, a critical overview is given on essential noncovalent interactions in synthetic supramolecular complexes, accompanied by analyses with selected proteins. The methods, which can be applied to derive binding increments for single noncovalent interactions, start with the evaluation of consistency and additivity with a sufficiently large number of different host–guest complexes by applying linear free energy relations. Other strategies involve the use of double mutant cycles, of molecular balances, of dynamic combinatorial libraries, and of crystal structures. Promises and limitations of these strategies are discussed. Most of the analyses stem from solution studies, but a few also from gas phase. The empirically derived interactions are then presented on the basis of selected complexes with respect to ion pairing, hydrogen bonding, electrostatic contributions, halogen bonding, π–π-stacking, dispersive forces, cation−π and anion−π interactions, and contributions f...
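The additivity analysis described above treats the total binding free energy of a host–guest complex as a sum of single-interaction increments, ΔG ≈ Σ nᵢ·ΔGᵢ. A minimal sketch of that bookkeeping, with illustrative increment values that are assumptions for demonstration (not values derived in the review):

```python
# Illustrative free-energy increments per interaction (kJ/mol);
# these numbers are assumed for demonstration, not from the review.
INCREMENTS = {
    "hydrogen_bond": -5.0,
    "pi_stacking": -2.5,
    "ion_pair": -6.0,
}


def binding_energy(counts):
    """Additive estimate of binding free energy (kJ/mol): the sum of
    per-interaction increments weighted by how often each occurs."""
    return sum(INCREMENTS[kind] * n for kind, n in counts.items())


# A hypothetical complex with two hydrogen bonds and one pi-stack:
dG = binding_energy({"hydrogen_bond": 2, "pi_stacking": 1})
print(dG)  # -12.5 kJ/mol
```

The review's point is precisely that such increments are only meaningful when checked for consistency and additivity across a sufficiently large set of different host–guest complexes.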

Journal ArticleDOI
TL;DR: This tutorial review summarizes recent progress in the newly emerging field of using metal–organic frameworks as scaffolds that constrain the local coordination geometries of lanthanide centers and organize magnetic building blocks into well-defined arrays, two key factors governing the magnetic properties of molecular nanomagnets.
Abstract: Single-molecule magnets (SMMs) and single-chain magnets (SCMs), also known as molecular nanomagnets, are molecular species of nanoscale proportions with the potential for high information storage density and spintronics applications. Metal–organic frameworks (MOFs) are three-dimensional ordered assemblies of inorganic nodes and organic linkers, featuring structural diversity and multiple chemical and physical properties. The concept of using these frameworks as scaffolds in the study of molecular nanomagnets provides an opportunity to constrain the local coordination geometries of lanthanide centers and organize the individual magnetic building blocks (MBBs, including both transition-metal and lanthanide MBBs) into topologically well-defined arrays that represent two key factors governing the magnetic properties of molecular nanomagnets. In this tutorial review, we summarize recent progress in this newly emerging field.

Journal ArticleDOI
TL;DR: The GEISA database (Gestion et Etude des Informations Spectroscopiques Atmospheriques: Management and Study of Atmospheric Spectroscopic Information) has been developed and maintained by the ARA/ABC(t) group at LMD since 1974.

Journal ArticleDOI
TL;DR: In this paper, the authors observed the atmospheric mixing layer height (MLH) in Beijing from July 2009 to December 2012 using a ceilometer and found that the estimated MLH is low in autumn and winter and high in spring and summer.
Abstract: The mixing layer is an important meteorological factor that affects air pollution. In this study, the atmospheric mixing layer height (MLH) was observed in Beijing from July 2009 to December 2012 using a ceilometer. By comparison with radiosonde data, we found that the ceilometer underestimates the MLH under conditions of neutral stratification caused by strong winds, whereas it overestimates the MLH when sand-dust is passing over. Using meteorological, PM2.5, and PM10 observational data, we screened the observed MLH automatically; the ceilometer observations were fairly consistent with the radiosondes, with a correlation coefficient greater than 0.9. Further analysis indicated that the MLH is low in autumn and winter and high in spring and summer in Beijing. There is a significant correlation between the sensible heat flux and the MLH, and the diurnal cycle of the MLH in summer is also affected by the mountain-plain wind circulation. Using visibility as an index to classify the degree of air pollution, we found that the variation in the sensible heat and buoyancy term of the turbulent kinetic energy (TKE) is insignificant when visibility decreases from 10 to 5 km, but the reduction of the shear term in the TKE is nearly 70 %. When visibility decreases from 5 to 1 km, the variation of the shear term in the TKE is insignificant, but the decrease in the sensible heat and buoyancy term is approximately 60 %. Although the correlation between the daily variation of the MLH and visibility is very poor, the correlation between them is significantly enhanced when the relative humidity increases beyond 80 %. This indicates that humidity-related physicochemical processes are the primary source of atmospheric particles under heavy pollution and that the dissipation of atmospheric particles mainly depends on the MLH. 
The presented results of the atmospheric mixing layer provide useful empirical information for improving meteorological and atmospheric chemistry models and the forecasting and warning of air pollution.
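The shear and buoyancy terms of the TKE budget discussed above follow standard surface-layer forms: shear production −u′w′·∂U/∂z and buoyancy production (g/θ)·w′θ′. The sketch below uses illustrative flux values (assumed for demonstration, not observations from the study) to show how a ~70 % drop in wind shear cuts the shear term by the same fraction while leaving the buoyancy term untouched:

```python
def shear_production(uw_flux, dU_dz):
    """Shear production term of TKE, -u'w' * dU/dz (m^2 s^-3).
    uw_flux: vertical momentum flux u'w' (m^2/s^2),
    dU_dz: vertical gradient of mean wind speed (1/s)."""
    return -uw_flux * dU_dz


def buoyancy_production(wtheta_flux, theta=300.0, g=9.81):
    """Buoyancy production term of TKE, (g/theta) * w'theta' (m^2 s^-3).
    theta is a reference potential temperature (K); the default is an
    illustrative assumption."""
    return g / theta * wtheta_flux


# Illustrative (assumed) values: the same momentum flux under strong
# and weakened shear, plus an unchanged sensible-heat flux:
s_strong = shear_production(-0.1, 0.010)   # well-ventilated case
s_weak = shear_production(-0.1, 0.003)     # shear reduced by ~70 %
b = buoyancy_production(0.05)
print(s_strong, s_weak, b)
```

Separating the budget this way mirrors the abstract's finding: the 10→5 km visibility transition is dominated by a collapse of the shear term, while the 5→1 km transition is dominated by a loss of the buoyancy (sensible heat) term.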