Institution

University of Bremen

Education · Bremen, Germany
About: University of Bremen is an educational institution based in Bremen, Germany. It is known for its research contributions in the topics Population & Glacial period. The organization has 14,563 authors who have published 37,279 publications receiving 970,381 citations. The organization is also known as: Universität Bremen.


Papers
Journal ArticleDOI
TL;DR: In this article, a 33P radiotracer was used to label organic-rich sediments from the Benguela upwelling system, Namibia, and to track the fate of the phosphorus.
Abstract: Organic phosphorus is removed from the ocean by its conversion to phosphorite. Laboratory incubations suggest that bacteria catalyse phosphorite formation, and that the rate of conversion is greatest under anoxic conditions. Phosphorus is an essential nutrient for life. In the ocean, phosphorus burial regulates marine primary production1,2. Phosphorus is removed from the ocean by sedimentation of organic matter, and the subsequent conversion of organic phosphorus to phosphate minerals such as apatite, and ultimately phosphorite deposits3,4. Bacteria are thought to mediate these processes5, but the mechanism of sequestration has remained unclear. Here, we present results from laboratory incubations in which we labelled organic-rich sediments from the Benguela upwelling system, Namibia, with a 33P-radiotracer, and tracked the fate of the phosphorus. We show that under both anoxic and oxic conditions, large sulphide-oxidizing bacteria accumulate 33P in their cells, and catalyse the nearly instantaneous conversion of phosphate to apatite. Apatite formation was greatest under anoxic conditions. Nutrient analyses of Namibian upwelling waters and sediments suggest that the rate of phosphate-to-apatite conversion beneath anoxic bottom waters exceeds the rate of phosphorus release during organic matter mineralization in the upper sediment layers. We suggest that bacterial apatite formation is a significant phosphorus sink under anoxic bottom-water conditions. Expanding oxygen minimum zones are projected in simulations of future climate change6, potentially increasing sequestration of marine phosphate, and restricting marine productivity.

189 citations

Journal ArticleDOI
TL;DR: Computational methods for analyzing MALDI-imaging data are outlined, with an emphasis on multivariate statistical methods; their pros and cons are discussed, and recommendations on their application are given.
Abstract: Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) imaging mass spectrometry, also called MALDI-imaging, is a label-free bioanalytical technique used for spatially-resolved chemical analysis of a sample. Usually, MALDI-imaging is applied to a specially prepared tissue section thaw-mounted onto a glass slide. The MALDI-imaging technique has developed tremendously over the last decade. Currently, it is one of the most promising innovative measurement techniques in biochemistry and a powerful and versatile tool for spatially-resolved chemical analysis of diverse sample types ranging from biological and plant tissues to bio- and polymer thin films. In this paper, we outline computational methods for analyzing MALDI-imaging data with an emphasis on multivariate statistical methods, discuss their pros and cons, and give recommendations on their application. The methods of unsupervised data mining as well as supervised classification methods for biomarker discovery are elucidated. We also present a high-throughput computational pipeline for interpretation of MALDI-imaging data using spatial segmentation. Finally, we discuss current challenges associated with the statistical analysis of MALDI-imaging data.
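As a toy illustration of the unsupervised spatial-segmentation idea described in the abstract, the sketch below clusters per-pixel spectra with a plain k-means loop so that pixels with similar chemical profiles form spatial segments. The synthetic data, image size, and peak positions are illustrative assumptions, not part of the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a MALDI-imaging dataset: a 20x20 pixel image where
# each pixel carries a 50-bin "spectrum"; the left and right halves of the
# image have different dominant peaks.
h, w, bins = 20, 20, 50
spectra = rng.normal(0.0, 0.1, size=(h, w, bins))
spectra[:, :10, 5] += 2.0   # left region: strong peak at bin 5
spectra[:, 10:, 30] += 2.0  # right region: strong peak at bin 30

# k-means (k=2) on the flattened pixel spectra; the second centre is
# initialised as the point farthest from the first to avoid a degenerate split.
X = spectra.reshape(-1, bins)
c0 = X[0]
c1 = X[((X - c0) ** 2).sum(axis=1).argmax()]
centers = np.stack([c0, c1])
for _ in range(20):
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    centers = np.stack([X[labels == k].mean(axis=0) for k in range(2)])

segmentation = labels.reshape(h, w)  # one cluster label per pixel
```

With clearly separated peaks, the two clusters recover the two spatial regions; real MALDI-imaging data would need peak picking, normalization, and denoising before such a step.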

189 citations

Journal ArticleDOI
19 Jul 2018, Nature
TL;DR: A 26-million-year record of equatorial sea surface temperatures reveals synchronous changes of tropical and polar temperatures during the Eocene epoch forced by variations in concentrations of atmospheric carbon dioxide, with a constant degree of polar amplification.
Abstract: Palaeoclimate reconstructions of periods with warm climates and high atmospheric CO2 concentrations are crucial for developing better projections of future climate change. Deep-ocean1,2 and high-latitude3 palaeotemperature proxies demonstrate that the Eocene epoch (56 to 34 million years ago) encompasses the warmest interval of the past 66 million years, followed by cooling towards the eventual establishment of ice caps on Antarctica. Eocene polar warmth is well established, so the main obstacle in quantifying the evolution of key climate parameters, such as global average temperature change and its polar amplification, is the lack of continuous high-quality tropical temperature reconstructions. Here we present a continuous Eocene equatorial sea surface temperature record, based on biomarker palaeothermometry applied on Atlantic Ocean sediments. We combine this record with the sparse existing data4-6 to construct a 26-million-year multi-proxy, multi-site stack of Eocene tropical climate evolution. We find that tropical and deep-ocean temperatures changed in parallel, under the influence of both long-term climate trends and short-lived events. This is consistent with the hypothesis that greenhouse gas forcing7,8, rather than changes in ocean circulation9,10, was the main driver of Eocene climate. Moreover, we observe a strong linear relationship between tropical and deep-ocean temperatures, which implies a constant polar amplification factor throughout the generally ice-free Eocene. Quantitative comparison with fully coupled climate model simulations indicates that global average temperatures were about 29, 26, 23 and 19 degrees Celsius in the early, early middle, late middle and late Eocene, respectively, compared to the preindustrial temperature of 14.4 degrees Celsius. 
Finally, combining proxy- and model-based temperature estimates with available CO2 reconstructions8 yields estimates of an Eocene Earth system sensitivity of 0.9 to 2.3 kelvin per watt per square metre at 68 per cent probability, consistent with the high end of previous estimates11.
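The quoted figures can be sanity-checked with simple arithmetic. In the sketch below, the warming values follow directly from the abstract; the implied-forcing range is derived here from those figures and is not a number stated in the paper.

```python
# Figures quoted in the abstract above.
preindustrial = 14.4  # global average temperature, degrees Celsius

eocene_gmst = {  # degrees Celsius
    "early": 29.0,
    "early middle": 26.0,
    "late middle": 23.0,
    "late": 19.0,
}
warming = {k: round(t - preindustrial, 1) for k, t in eocene_gmst.items()}
# early Eocene: about 14.6 K above preindustrial

# Earth system sensitivity is quoted as 0.9 to 2.3 K per W/m^2 (68%
# probability), so the radiative forcing implied by early-Eocene warming
# spans roughly dT / 2.3 to dT / 0.9 W/m^2 (a derived range, not from the paper).
s_low, s_high = 0.9, 2.3
dT = warming["early"]
implied_forcing = (round(dT / s_high, 1), round(dT / s_low, 1))  # W/m^2
```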

188 citations

Journal ArticleDOI
TL;DR: This review presents a wide-ranging view of the synthesis of various MoS2-based nanocomposites for sensor and biosensor applications, highlighting methods such as self-assembly, hydrothermal reaction, chemical vapour deposition, electrospinning, and microwave and laser-beam treatments.
Abstract: Molybdenum disulfide (MoS2) is a typical layered transition-metal dichalcogenide material, which has aroused a great deal of interest in the past few years. Recently, increasing attention has focused on the synthesis and applications of MoS2-based nanocomposites. In this review, we aimed to present a wider view of the synthesis of various MoS2-based nanocomposites for sensor and biosensor applications. We highlighted methods such as self-assembly, hydrothermal reaction, chemical vapour deposition, electrospinning, as well as microwave and laser-beam treatments for the successful preparation of MoS2-based nanocomposites. In addition, three representative types of detection devices fabricated from the MoS2-based nanocomposites (field-effect transistor, optical, and electrochemical sensors) were introduced and discussed in detail. The relationships between sensing performance and the special nanostructures within the MoS2-based nanocomposites were presented and discussed.

188 citations

Book ChapterDOI
01 Jan 1994
TL;DR: The purpose of this paper is to draw the reader's attention to the problems of the expected value criterion in Markov decision processes and to give Dynamic Programming algorithms for an alternative criterion, namely the minimax criterion.
Abstract: Most Reinforcement Learning (RL) work considers a policy for a sequential decision task optimal if it minimizes the expected total discounted cost (e.g. Q-Learning; the AHC architecture). On the other hand, it is well known that using the expected value as a decision criterion is not always reliable and can be treacherous. Many alternative decision criteria have been suggested in decision theory to give a more sophisticated treatment of risk, but most RL researchers have not concerned themselves with this subject until now. The purpose of this paper is to draw the reader's attention to the problems of the expected value criterion in Markov decision processes and to give Dynamic Programming algorithms for an alternative criterion, namely the minimax criterion. A counterpart to Watkins' Q-Learning with regard to the minimax criterion is presented. The new algorithm, called Q̂-learning, finds policies that minimize the worst-case total discounted cost.
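A minimal sketch of the worst-case Q-update described above, in the max/min form Q̂(s,a) ← max(Q̂(s,a), c + γ·min_b Q̂(s',b)): the value of a state-action pair only grows as worse outcomes are observed, while the inner min assumes greedy (cost-minimising) play afterwards. The toy MDP, costs, and sampling schedule below are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)
gamma = 0.9
n_actions = 2
terminal = 2

# Toy MDP (illustrative): from states 0 and 1, action 0 is "safe"
# (cost 1, always reaches the terminal state), while action 1 is "risky"
# (either cost 0 and done, or cost 10 and back to a non-terminal state).
# transitions[s][a] = list of possible (cost, next_state) outcomes.
transitions = {
    0: {0: [(1.0, terminal)], 1: [(0.0, terminal), (10.0, 1)]},
    1: {0: [(1.0, terminal)], 1: [(0.0, terminal), (10.0, 1)]},
}

# Q-hat is initialised at a lower bound (0 for non-negative costs); the
# max(...) update can then only grow toward the worst case seen.
Q = [[0.0] * n_actions for _ in range(3)]  # Q[terminal] stays 0

for _ in range(2000):
    s = random.choice([0, 1])
    a = random.randrange(n_actions)
    cost, s2 = random.choice(transitions[s][a])  # one sampled outcome
    # Minimax update: keep the worst outcome seen, assume greedy play after.
    Q[s][a] = max(Q[s][a], cost + gamma * min(Q[s2]))

# Greedy (cost-minimising) policy with respect to the worst-case values.
policy = [min(range(n_actions), key=lambda a, s=s: Q[s][a]) for s in (0, 1)]
```

Here the risky action's worst case is 10 + 0.9 × 1 = 10.9 versus the safe action's 1, so the minimax policy picks the safe action in both states, even though the risky action is better in expectation if the bad outcome is rare.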

188 citations


Authors

Showing all 14961 results

Name                                    H-index  Papers  Citations
Roger Y. Tsien                          163      441     138,267
Klaus-Robert Müller                     129      764     79,391
Ron Kikinis                             126      684     63,398
Ulrich S. Schubert                      122      2,229   85,604
Andreas Richter                         110      769     48,262
Michael Böhm                            108      755     66,103
Juan Bisquert                           107      450     46,267
John P. Sumpter                         101      266     46,184
Jos Lelieveld                           100      570     37,657
Michael Schulz                          100      759     50,719
Peter Singer                            94       702     37,128
Charles R. Tyler                        92       325     31,724
John P. Burrows                         90       815     36,169
Hans-Peter Kriegel                      89       444     73,932
Harald Haas                             85       750     34,927
Network Information
Related Institutions (5)
ETH Zurich: 122.4K papers, 5.1M citations (93% related)
University of Hamburg: 89.2K papers, 2.8M citations (92% related)
Centre national de la recherche scientifique: 382.4K papers, 13.6M citations (92% related)
Technische Universität München: 123.4K papers, 4M citations (91% related)
École Polytechnique Fédérale de Lausanne: 98.2K papers, 4.3M citations (91% related)

Performance Metrics
No. of papers from the Institution in previous years
Year   Papers
2023   343
2022   709
2021   2,106
2020   2,309
2019   2,191
2018   1,965