
Showing papers by "Technische Universität Darmstadt" published in 2016


Proceedings ArticleDOI
01 Jun 2016
TL;DR: This work introduces Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling, and exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity.
Abstract: Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations; 20,000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.
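
The headline task, pixel-level semantic labeling, is conventionally scored by per-class intersection-over-union between predicted and ground-truth label maps. A minimal NumPy sketch of that metric, for illustration only: the function name and the ignore-label convention (255) are assumptions here, not the official Cityscapes evaluation code (which scores 19 classes).

import numpy as np

def per_class_iou(pred, gt, num_classes=19, ignore_id=255):
    """IoU per class from two integer label maps of equal shape."""
    valid = gt != ignore_id
    ious = []
    for c in range(num_classes):
        p = (pred == c) & valid
        g = (gt == c) & valid
        union = np.logical_or(p, g).sum()
        ious.append(np.logical_and(p, g).sum() / union if union else np.nan)
    return np.array(ious)

# Mean IoU, ignoring classes absent from this image:
# miou = np.nanmean(per_class_iou(pred, gt))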

7,547 citations


Posted Content
TL;DR: Cityscapes as discussed by the authors is a large-scale dataset for semantic urban scene understanding, consisting of 5000 images with high quality pixel-level annotations and 20,000 additional images with coarse annotations.
Abstract: Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations; 20000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.

3,503 citations


Book ChapterDOI
08 Oct 2016
TL;DR: In this paper, the authors present an approach to rapidly create pixel-accurate semantic label maps for images extracted from modern computer games, which enables rapid propagation of semantic labels within and across images synthesized by the game, without access to the source code or the content.
Abstract: Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just \(\tfrac{1}{3}\) of the CamVid training set outperform models trained on the complete CamVid training set.
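
The propagation idea can be made concrete: every patch is drawn with some combination of mesh, texture, and shader, and patches sharing a combination can share a semantic label, so one manual annotation covers all frames in which that combination reappears. A hypothetical sketch of the bookkeeping (the actual pipeline intercepts the game-to-GPU communication; these data structures are stand-ins):

# label store: (mesh_id, texture_id, shader_id) -> semantic class
labels = {}

def annotate(resource_key, semantic_class):
    """Record one manual annotation for a resource combination."""
    labels[resource_key] = semantic_class

def label_frame(patches):
    """patches: iterable of (pixel_mask, resource_key) for one frame.
    Returns the masks whose class is already known from any frame."""
    return [(mask, labels[key]) for mask, key in patches if key in labels]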

1,319 citations


Journal ArticleDOI
TL;DR: The theoretical calculations demonstrate that the large anomalous Hall effect in Mn3Ge originates from a nonvanishing Berry curvature that arises from the chiral spin structure, and that also results in a large spin Hall effect comparable to that of platinum.
Abstract: It is well established that the anomalous Hall effect displayed by a ferromagnet scales with its magnetization. Therefore, an antiferromagnet that has no net magnetization should exhibit no anomalous Hall effect. We show that the noncollinear triangular antiferromagnet Mn3Ge exhibits a large anomalous Hall effect comparable to that of ferromagnetic metals; the magnitude of the anomalous conductivity is ~500 (ohm·cm)^{-1} at 2 K and ~50 (ohm·cm)^{-1} at room temperature. The angular dependence of the anomalous Hall effect measurements confirms that the small residual in-plane magnetic moment has no role in the observed effect except to control the chirality of the spin triangular structure. Our theoretical calculations demonstrate that the large anomalous Hall effect in Mn3Ge originates from a nonvanishing Berry curvature that arises from the chiral spin structure, and that also results in a large spin Hall effect of 1100 (ħ/e) (ohm·cm)^{-1}, comparable to that of platinum. The present results pave the way toward the realization of room temperature antiferromagnetic spintronics and spin Hall effect-based data storage devices.

638 citations


Posted Content
TL;DR: It is shown that associations between image patches can be reconstructed from the communication between the game and the graphics hardware, which enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content.
Abstract: Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just 1/3 of the CamVid training set outperform models trained on the complete CamVid training set.

591 citations


Journal ArticleDOI
25 Aug 2016-Nature
TL;DR: It is demonstrated that primary producers, herbivorous insects and microbial decomposers seem to be particularly important drivers of ecosystem functioning, as shown by the strong and frequent positive associations of their richness or abundance with multiple ecosystem services.
Abstract: Many experiments have shown that loss of biodiversity reduces the capacity of ecosystems to provide the multiple services on which humans depend. However, experiments necessarily simplify the complexity of natural ecosystems and will normally control for other important drivers of ecosystem functioning, such as the environment or land use. In addition, existing studies typically focus on the diversity of single trophic groups, neglecting the fact that biodiversity loss occurs across many taxa and that the functional effects of any trophic group may depend on the abundance and diversity of others. Here we report analysis of the relationships between the species richness and abundance of nine trophic groups, including 4,600 above- and below-ground taxa, and 14 ecosystem services and functions, as well as their simultaneous provision (or multifunctionality), in 150 grasslands. We show that high species richness in multiple trophic groups (multitrophic richness) had stronger positive effects on ecosystem services than richness in any individual trophic group; this includes plant species richness, the most widely used measure of biodiversity. On average, three trophic groups influenced each ecosystem service, with each trophic group influencing at least one service. Multitrophic richness was particularly beneficial for 'regulating' and 'cultural' services, and for multifunctionality, whereas a change in the total abundance of species or biomass in multiple trophic groups (the multitrophic abundance) positively affected supporting services. Multitrophic richness and abundance drove ecosystem functioning as strongly as abiotic conditions and land-use intensity, extending previous experimental results to real-world ecosystems. Primary producers, herbivorous insects and microbial decomposers seem to be particularly important drivers of ecosystem functioning, as shown by the strong and frequent positive associations of their richness or abundance with multiple ecosystem services. Our results show that multitrophic richness and abundance support ecosystem functioning, and demonstrate that a focus on single groups has led researchers to greatly underestimate the functional importance of biodiversity.

486 citations


Journal ArticleDOI
TL;DR: This work presents a simple and feasible way to reduce the contribution of inorganic metal species, in some cases even down to zero; the resulting iron-based catalyst comprises the highest density of FeN4 sites ever reported without interference from inorganic metal sites.
Abstract: Today, most metal and nitrogen doped carbon catalysts for the oxygen reduction reaction (ORR) reveal a heterogeneous composition. This can be attributed to a nonoptimized precursor composition and the various steps in the preparation process needed to obtain the required active material. The significant presence of inorganic metal species interferes with the assignment of descriptors related to ORR activity and stability. In this work we present a simple and feasible way to reduce the contribution of inorganic metal species, in some cases even down to zero. Such catalysts reveal the desired homogeneous composition of MeN4 (Me = metal) sites in the carbon, which is accompanied by a significant enhancement in ORR activity. Compared with the work of other international groups, our iron-based catalyst comprises the highest density of FeN4 sites ever reported without interference from inorganic metal sites.

392 citations


Journal ArticleDOI
TL;DR: In this article, the spectral kurtosis (SK) is extended to a function of frequency that indicates how the impulsiveness of a signal varies with frequency, so that impulsive components can be detected and analyzed.
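
For context, spectral kurtosis is commonly estimated from a short-time Fourier transform as the normalized fourth moment at each frequency, SK(f) = <|X(t,f)|^4> / <|X(t,f)|^2>^2 - 2, which is near zero for stationary Gaussian noise and grows in bands where the signal is impulsive. A sketch of that common estimator (not necessarily the exact estimator used in the paper):

import numpy as np
from scipy.signal import stft

def spectral_kurtosis(x, fs, nperseg=256):
    """Estimate SK(f) over STFT frames; ~0 for stationary Gaussian noise."""
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    p2 = np.mean(np.abs(Z) ** 2, axis=-1)
    p4 = np.mean(np.abs(Z) ** 4, axis=-1)
    return f, p4 / p2**2 - 2.0

# Example: impulses buried in noise raise SK in the bands they excite.
# fs = 10000; x = np.random.randn(fs); x[::1000] += 20.0
# freqs, sk = spectral_kurtosis(x, fs)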

378 citations


Journal ArticleDOI
30 Nov 2016-Nature
TL;DR: It is shown that even moderate increases in local land-use intensity (LUI) cause biotic homogenization across microbial, plant and animal groups, both above- and belowground, and that this is largely independent of changes in α-diversity.
Abstract: Land-use intensification is a major driver of biodiversity loss. Alongside reductions in local species diversity, biotic homogenization at larger spatial scales is of great concern for conservation. Biotic homogenization means a decrease in β-diversity (the compositional dissimilarity between sites). Most studies have investigated losses in local (α)-diversity and neglected biodiversity loss at larger spatial scales. Studies addressing β-diversity have focused on single or a few organism groups (for example, ref. 4), and it is thus unknown whether land-use intensification homogenizes communities at different trophic levels, above- and belowground. Here we show that even moderate increases in local land-use intensity (LUI) cause biotic homogenization across microbial, plant and animal groups, both above- and belowground, and that this is largely independent of changes in α-diversity. We analysed a unique grassland biodiversity dataset, with abundances of more than 4,000 species belonging to 12 trophic groups. LUI, and, in particular, high mowing intensity, had consistent effects on β-diversity across groups, causing a homogenization of soil microbial, fungal pathogen, plant and arthropod communities. These effects were nonlinear and the strongest declines in β-diversity occurred in the transition from extensively managed to intermediate intensity grassland. LUI tended to reduce local α-diversity in aboveground groups, whereas the α-diversity increased in belowground groups. Correlations between the β-diversity of different groups, particularly between plants and their consumers, became weaker at high LUI. This suggests a loss of specialist species and is further evidence for biotic homogenization. The consistently negative effects of LUI on landscape-scale biodiversity underscore the high value of extensively managed grasslands for conserving multitrophic biodiversity and ecosystem service provision. Indeed, biotic homogenization rather than local diversity loss could prove to be the most substantial consequence of land-use intensification.
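
To make β-diversity concrete: one common estimator (an assumption here; the study's exact dissimilarity measure may differ) is the mean pairwise Bray-Curtis dissimilarity over a site-by-species abundance matrix, and biotic homogenization then appears as a decline of this value with increasing LUI.

import numpy as np
from scipy.spatial.distance import pdist

def beta_diversity(abundance):
    """abundance: sites x species matrix of counts or biomass.
    Mean pairwise Bray-Curtis dissimilarity between sites; a decrease
    under land-use intensification indicates biotic homogenization."""
    return pdist(abundance, metric="braycurtis").mean()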

345 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify two parameters computed from the pre-collapse structure of the progenitor, which in combination allow for a clear separation of exploding and non-exploding cases with only a few exceptions (~1-2.5%) in a set of 621 investigated stellar models.
Abstract: Thus far, judging the fate of a massive star (either a neutron star (NS) or a black hole) solely by its structure prior to core collapse has been ambiguous. Our work and previous attempts find a non-monotonic variation of successful and failed supernovae with zero-age main-sequence mass, for which no single structural parameter can serve as a good predictive measure. However, we identify two parameters computed from the pre-collapse structure of the progenitor, which in combination allow for a clear separation of exploding and non-exploding cases with only a few exceptions (~1-2.5%) in our set of 621 investigated stellar models. One parameter is $M_4$, defining the enclosed mass for a dimensionless entropy per nucleon of $s = 4$, and the other is $\mu_4 \equiv dm/dr|_{s=4}$, the mass derivative at this location. The two parameters $\mu_4$ and $M_4\mu_4$ can be directly linked to the mass-infall rate, $\dot{M}$, of the collapsing star and the electron-type neutrino luminosity of the accreting proto-NS, $L_{\nu_e} \propto M_{\mathrm{ns}}\dot{M}$, which play a crucial role in the "critical luminosity" concept for the theoretical description of neutrino-driven explosions as a runaway phenomenon of the stalled accretion shock. All models were evolved employing the approach of Ugliano et al. for simulating neutrino-driven explosions in spherical symmetry. The neutrino emission of the accretion layer is approximated by a gray transport solver, while the uncertain neutrino emission of the $1.1\,M_\odot$ proto-NS core is parametrized by an analytic model. The free parameters connected to the core-boundary prescription are calibrated to reproduce the observables of Supernova 1987A for five different progenitor models.
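
For illustration, the two parameters can be read off a pre-collapse profile roughly as follows. This is a sketch under stated assumptions, entropy increasing outward near the $s = 4$ point and a fixed 0.3 $M_\odot$ window for the mass derivative, not the paper's exact prescription:

import numpy as np

def explodability_params(m, r, s, dm=0.3):
    """m: enclosed mass [Msun], r: radius [1000 km], s: entropy per
    nucleon [kB], sampled center-outward (s assumed increasing here).
    Returns (M4, mu4); mu4 approximates dm/dr at s = 4 over a window
    of dm Msun -- an assumed prescription for this sketch."""
    i = np.searchsorted(s, 4.0)                       # first zone with s >= 4
    j = min(np.searchsorted(m, m[i] + dm), len(m) - 1)  # window's outer edge
    return m[i], (m[j] - m[i]) / (r[j] - r[i])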

343 citations


Journal ArticleDOI
TL;DR: Bayesian optimization, a model-based approach to black-box optimization under uncertainty, is evaluated on both simulated problems and real robots, demonstrating that Bayesian optimization is particularly suited for robotic applications, where it is crucial to find a good set of gait parameters in a small number of experiments.
Abstract: Designing gaits and corresponding control policies is a key challenge in robot locomotion. Even with a viable controller parametrization, finding near-optimal parameters can be daunting. Typically, this kind of parameter optimization requires specific expert knowledge and extensive robot experiments. Automatic black-box gait optimization methods greatly reduce the need for human expertise and time-consuming design processes. Many different approaches for automatic gait optimization have been suggested to date. However, no extensive comparison among them has yet been performed. In this article, we thoroughly discuss multiple automatic optimization methods in the context of gait optimization. We extensively evaluate Bayesian optimization, a model-based approach to black-box optimization under uncertainty, on both simulated problems and real robots. This evaluation demonstrates that Bayesian optimization is particularly suited for robotic applications, where it is crucial to find a good set of gait parameters in a small number of experiments.
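
A minimal sketch of the Bayesian-optimization loop evaluated in the article: fit a Gaussian process to all gait-parameter/return pairs observed so far, then choose the next trial by maximizing an acquisition function (expected improvement here) over random candidates. scikit-learn supplies the GP; `evaluate_gait` is a hypothetical stand-in for a robot or simulator rollout, and the candidate-sampling scheme is an assumption, not the paper's exact setup.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best):
    """EI acquisition for maximization."""
    z = (mu - best) / np.maximum(sigma, 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt_gait(evaluate_gait, bounds, n_init=5, n_iter=30, n_cand=2000):
    """Fit a GP to observed (parameters, return) pairs and pick each
    next trial by maximizing EI over random candidate parameters."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = np.random.uniform(lo, hi, size=(n_init, dim))
    y = np.array([evaluate_gait(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = np.random.uniform(lo, hi, size=(n_cand, dim))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate_gait(x_next))
    return X[np.argmax(y)], y.max()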

Journal ArticleDOI
TL;DR: Several mechanisms for the formation of a 90° misfit dislocation at the Ge–Si interface are identified when the common neighbor analysis method is used to construct transition paths connecting a homogeneously strained Ge film and a film containing a misfit dislocation.

Journal ArticleDOI
16 Mar 2016
TL;DR: The ICS cybersecurity landscape is explored, including the key principles and unique aspects of ICS operation, a brief history of cyberattacks on ICS, an overview of ICS security assessment, and a survey of "uniquely-ICS" testbeds that capture the interactions between the various layers of an ICS.
Abstract: Industrial control systems (ICSs) are transitioning from legacy-electromechanical-based systems to modern information and communication technology (ICT)-based systems creating a close coupling between cyber and physical components. In this paper, we explore the ICS cybersecurity landscape including: 1) the key principles and unique aspects of ICS operation; 2) a brief history of cyberattacks on ICS; 3) an overview of ICS security assessment; 4) a survey of “uniquely-ICS” testbeds that capture the interactions between the various layers of an ICS; and 5) current trends in ICS attacks and defenses.

Journal ArticleDOI
TL;DR: In this article, an ab initio calculation of the neutron distribution of the neutron-rich nucleus Ca-48 is presented, showing that the difference between the radii of the neutron and proton distributions is significantly smaller than previously thought.
Abstract: What is the size of the atomic nucleus? This deceptively simple question is difficult to answer. Although the electric charge distributions in atomic nuclei were measured accurately already half a century ago, our knowledge of the distribution of neutrons is still deficient. In addition to constraining the size of atomic nuclei, the neutron distribution also impacts the number of nuclei that can exist and the size of neutron stars. We present an ab initio calculation of the neutron distribution of the neutron-rich nucleus Ca-48. We show that the neutron skin (difference between the radii of the neutron and proton distributions) is significantly smaller than previously thought. We also make predictions for the electric dipole polarizability and the weak form factor; both quantities that are at present targeted by precision measurements. Based on ab initio results for Ca-48, we provide a constraint on the size of a neutron star.

Journal ArticleDOI
TL;DR: In this article, the authors present the first measurements of the charge radii of 49,51,52Ca, obtained from laser spectroscopy experiments at ISOLDE, CERN.
Abstract: Despite being a complex many-body system, the atomic nucleus exhibits simple structures for certain 'magic' numbers of protons and neutrons. The calcium chain in particular is both unique and puzzling: evidence of doubly magic features is known in 40,48Ca, and has recently been suggested in two radioactive isotopes, 52,54Ca. Although many properties of experimentally known calcium isotopes have been successfully described by nuclear theory, it is still a challenge to predict the evolution of their charge radii. Here we present the first measurements of the charge radii of 49,51,52Ca, obtained from laser spectroscopy experiments at ISOLDE, CERN. The experimental results are complemented by state-of-the-art theoretical calculations. The large and unexpected increase of the size of the neutron-rich calcium isotopes beyond N = 28 challenges the doubly magic nature of 52Ca and opens new intriguing questions on the evolution of nuclear sizes away from stability, which are of importance for our understanding of neutron-rich atomic nuclei. Doubly magic atomic nuclei — having a magic number of both protons and neutrons — are very stable. Now, experiments revealing unexpectedly large charge radii for a series of Ca isotopes put the doubly magic nature of the 52Ca nucleus into question.

Journal ArticleDOI
03 Jun 2016-ACS Nano
TL;DR: The results show that the self-assembly of the LPK layer on top of an intact MAPI layer is accompanied by a reorganization of the perovskite interface, which leads to an enhancement of the open-circuit voltage and power conversion efficiency due to reduced recombination losses, as well as improved moisture stability in the resulting photovoltaic devices.
Abstract: Recently developed organic–inorganic hybrid perovskite solar cells combine low-cost fabrication and high power conversion efficiency. Advances in perovskite film optimization have led to an outstanding power conversion efficiency of more than 20%. Looking forward, shifting the focus toward new device architectures holds great potential to induce the next leap in device performance. Here, we demonstrate a perovskite/perovskite heterojunction solar cell. We developed a facile solution-based cation infiltration process to deposit layered perovskite (LPK) structures onto methylammonium lead iodide (MAPI) films. Grazing-incidence wide-angle X-ray scattering experiments were performed to gain insights into the crystallite orientation and the formation process of the perovskite bilayer. Our results show that the self-assembly of the LPK layer on top of an intact MAPI layer is accompanied by a reorganization of the perovskite interface. This leads to an enhancement of the open-circuit voltage and power conversion efficiency due to reduced recombination losses, as well as improved moisture stability in the resulting photovoltaic devices.

Journal ArticleDOI
TL;DR: In this paper, the authors used waveform modeling to determine the equation of state at supranuclear densities inside neutron stars by measuring the radius of neutron stars with different masses to accuracies of a few percent.
Abstract: One of the primary science goals of the next generation of hard x-ray timing instruments is to determine the equation of state of matter at supranuclear densities inside neutron stars by measuring the radius of neutron stars with different masses to accuracies of a few percent. Three main techniques can be used to achieve this goal. The first involves waveform modeling. The flux observed from a hotspot on the neutron star surface offset from the rotational pole will be modulated by the star's rotation, and this periodic modulation at the spin frequency is called a pulsation. As the photons propagate through the curved spacetime of the star, information about mass and radius is encoded into the shape of the waveform (pulse profile) via special and general-relativistic effects. Using pulsations from known sources (which have hotspots that develop either during thermonuclear bursts or due to channeled accretion) it is possible to obtain tight constraints on mass and radius. The second technique involves characterizing the spin distribution of accreting neutron stars. A large collecting area enables highly sensitive searches for weak or intermittent pulsations (which yield spin) from the many accreting neutron stars whose spin rates are not yet known. The most rapidly rotating stars provide a clean constraint, since the limiting spin rate where the equatorial surface velocity is comparable to the local orbital velocity, at which mass shedding occurs, is a function of mass and radius. However, the overall spin distribution also provides a guide to the torque mechanisms in operation and the moment of inertia, both of which can depend sensitively on dense matter physics. The third technique is to search for quasiperiodic oscillations in x-ray flux associated with global seismic vibrations of magnetars (the most highly magnetized neutron stars), triggered by magnetic explosions. The vibrational frequencies depend on stellar parameters including the dense matter equation of state, and large-area x-ray timing instruments would provide much improved detection capability. An illustration is given of how these complementary x-ray timing techniques can be used to constrain the dense matter equation of state, and the results that might be expected from a 10 m^2 instrument are discussed. Also discussed is how the results from such a facility would compare to other astronomical investigations of neutron star properties.
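
The mass-shedding constraint can be illustrated with the Newtonian Keplerian limit, at which the equatorial surface velocity equals the local orbital velocity; relativistic corrections lower the true limit by roughly a factor of two, so this sketch is an order-of-magnitude illustration only:

import numpy as np

G, M_SUN = 6.674e-11, 1.989e30  # SI units

def keplerian_spin_hz(m_msun, r_km):
    """Newtonian mass-shedding spin limit for mass m and radius r."""
    r = r_km * 1e3
    return np.sqrt(G * m_msun * M_SUN / r**3) / (2.0 * np.pi)

# keplerian_spin_hz(1.4, 10.0) -> about 2.2 kHz; the fastest known
# pulsars (~716 Hz) therefore constrain the radius only weakly.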

Journal ArticleDOI
TL;DR: None of the proposed scenarios is able to predict all the observations for supercooled and glassy bulk water, indicating that either the structural and dynamical alterations of confined water are too severe to make predictions for bulk water or the differences in how the studied water has been prepared are too large for direct and quantitative comparisons.
Abstract: Water in confined geometries has obvious relevance in biology, geology, and other areas where the material properties are strongly dependent on the amount and behavior of water in these types of materials. Another reason to restrict the size of water domains by different types of geometrical confinements has been the possibility to study the structural and dynamical behavior of water in the deeply supercooled regime (e.g., 150-230 K at ambient pressure), where bulk water immediately crystallizes to ice. In this paper we give a short review of studies with this particular goal. However, from these studies it is also clear that the interpretations of the experimental data are far from evident. Therefore, we present three main interpretations to explain the experimental data, and we discuss their advantages and disadvantages. Unfortunately, none of the proposed scenarios is able to predict all the observations for supercooled and glassy bulk water, indicating that either the structural and dynamical alterations of confined water are too severe to make predictions for bulk water or the differences in how the studied water has been prepared (applied cooling rate, resulting density of the water, etc.) are too large for direct and quantitative comparisons.

Journal ArticleDOI
TL;DR: It is shown that applying the target normal sheath acceleration mechanism with submicrometer thick targets is a very robust way to achieve such high ion energies and particle fluxes.
Abstract: We present a study of laser-driven ion acceleration with micrometer and submicrometer thick plastic targets. Using laser pulses with high temporal contrast and an intensity of the order of 10^{20} W/cm^{2} we observe proton beams with cutoff energies in excess of 85 MeV and particle numbers of 10^{9} in an energy bin of 1 MeV around this maximum. We show that applying the target normal sheath acceleration mechanism with submicrometer thick targets is a very robust way to achieve such high ion energies and particle fluxes. Our results are backed with 2D particle in cell simulations furthermore predicting cutoff energies above 200 MeV for acceleration based on relativistic transparency. This predicted regime can be probed after a few technically feasible adjustments of the laser and target parameters.

Journal ArticleDOI
TL;DR: In this article, the authors report the observation of antibunching in the light emitted from an electrically driven carbon nanotube embedded within a photonic quantum circuit, establishing carbon nanotubes as promising nanoscale single-photon emitters for hybrid quantum photonic devices.
Abstract: Photonic quantum technologies allow quantum phenomena to be exploited in applications such as quantum cryptography, quantum simulation and quantum computation. A key requirement for practical devices is the scalable integration of single-photon sources, detectors and linear optical elements on a common platform. Nanophotonic circuits enable the realization of complex linear optical systems, while non-classical light can be measured with waveguide-integrated detectors. However, reproducible single-photon sources with high brightness and compatibility with photonic devices remain elusive for fully integrated systems. Here, we report the observation of antibunching in the light emitted from an electrically driven carbon nanotube embedded within a photonic quantum circuit. Non-classical light generated on chip is recorded under cryogenic conditions with waveguide-integrated superconducting single-photon detectors, without requiring optical filtering. Because exclusively scalable fabrication and deposition methods are used, our results establish carbon nanotubes as promising nanoscale single-photon emitters for hybrid quantum photonic devices. Single photons are generated from electrically driven semiconducting single-walled carbon nanotubes embedded in a photonic circuit. Pronounced antibunching is observed when photon correlation is measured at cryogenic temperatures.

Proceedings ArticleDOI
10 Mar 2016
TL;DR: Both commercial and industrial IoT devices are used as examples for which the security of hardware, software, and networks is analyzed and backdoors are identified, in order to better understand the security vulnerabilities of existing IoT devices.
Abstract: The fast development of Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand of smart devices which are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the pressure of time to market of device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and promote the development of low-cost IoT security methods, in this paper, we use both commercial and industrial IoT devices as examples from which the security of hardware, software, and networks are analyzed and backdoors are identified. A detailed security analysis procedure will be elaborated on a home automation system and a smart meter proving that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods will also be discussed to help IoT manufacturers secure their products.

Journal ArticleDOI
TL;DR: In this article, a unified general analysis of both the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER) is provided, and it is shown that control over at least two independent binding energies is required to obtain a reversible, perfect catalyst for the ORR.

Journal ArticleDOI
TL;DR: The In-Medium Similarity Renormalization Group (IM-SRG) as mentioned in this paper employs a continuous unitary transformation of the many-body Hamiltonian to decouple the ground state from all excitations, thereby solving the many-body problem.

Journal ArticleDOI
TL;DR: A novel taxonomy is introduced to study Named Data Networking features in depth, and a set of open challenges is identified that researchers should address in due course.

Journal ArticleDOI
TL;DR: In this article, the authors outline uncertainties in the radioactive decay products powering kilonovae and calculate time-dependent thermalization efficiencies for each particle type, which can be used to improve light curve models.
Abstract: One promising electromagnetic signature of compact object mergers is the kilonova: an approximately isotropic, radioactively powered transient that peaks days to weeks post-merger. Key uncertainties in kilonova modeling include the emission profiles of the radioactive decay products—non-thermal $\beta$-particles, $\alpha$-particles, fission fragments, and $\gamma$-rays—and the efficiency with which their kinetic energy is absorbed by the ejecta. The radioactive energy emitted, along with its thermalization efficiency, sets the luminosity budget and is therefore crucial for predicting kilonova light curves. We outline uncertainties in the radioactivity, describe the processes by which the decay products transfer energy to the ejecta, and calculate time-dependent thermalization efficiencies for each particle type. We determine the net thermalization efficiency and explore its dependence on r-process yields—in particular, the production of $\alpha$-decaying translead nuclei—and on ejecta mass, velocity, and magnetic fields. We incorporate our results into detailed radiation transport simulations, and calculate updated kilonova light curve predictions. Thermalization effects reduce kilonova luminosities by a factor of roughly 2 at peak, and by an order of magnitude at later times (15 days or more after explosion). We present analytic fits to time-dependent thermalization efficiencies, which can be used to improve light curve models. We revisit the putative kilonova that accompanied gamma-ray burst 130603B, and estimate the mass ejected in that event. We find that later-time kilonova light curves can be significantly impacted by $\alpha$-decay from translead isotopes; data at these times may therefore be diagnostic of ejecta abundances.
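
The analytic fits mentioned above take a simple closed form. The sketch below assumes the functional form given in the published version of the paper and uses placeholder coefficients; the actual (a, b, d) values are tabulated there as functions of ejecta mass and velocity:

import numpy as np

def thermalization_efficiency(t_days, a, b, d):
    """eps(t) = 0.36 [exp(-a t) + ln(1 + 2 b t^d) / (2 b t^d)].
    (a, b, d) depend on ejecta mass and velocity; the values passed
    in below are placeholders, not the paper's tabulated fits."""
    x = 2.0 * b * t_days**d
    return 0.36 * (np.exp(-a * t_days) + np.log1p(x) / x)

# t = np.linspace(0.5, 30.0, 60)   # days post-merger
# eps = thermalization_efficiency(t, a=0.5, b=0.2, d=0.7)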

Journal ArticleDOI
TL;DR: This work discusses the fundamental aspects that can contribute to thermal hysteresis and the strategies being developed to at least partially overcome the hysteresis problem in some selected classes of magnetocaloric materials with large application potential.
Abstract: Hysteresis is more than just an interesting oddity that occurs in materials with a first-order transition. It is a real obstacle on the path from existing laboratory-scale prototypes of magnetic refrigerators towards commercialization of this potentially disruptive cooling technology. Indeed, the reversibility of the magnetocaloric effect, being essential for magnetic heat pumps, strongly depends on the width of the thermal hysteresis and, therefore, it is necessary to understand the mechanisms causing hysteresis and to find solutions to minimize losses associated with thermal hysteresis in order to maximize the efficiency of magnetic cooling devices. In this work, we discuss the fundamental aspects that can contribute to thermal hysteresis and the strategies that we are developing to at least partially overcome the hysteresis problem in some selected classes of magnetocaloric materials with large application potential. In doing so, we refer to the most relevant classes of magnetic refrigerants: La–Fe–Si-, Heusler- and Fe2P-type compounds. This article is part of the themed issue 'Taking the temperature of phase transitions in cool materials'.

Proceedings ArticleDOI
01 Aug 2016
TL;DR: This work annotates a large dataset of 16k pairs of arguments over 32 topics and investigates whether the relation "A is more convincing than B" exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data.
Abstract: We propose a new task in the field of computational argumentation in which we investigate qualitative properties of Web arguments, namely their convincingness. We cast the problem as relation classification, where a pair of arguments having the same stance to the same prompt is judged. We annotate a large dataset of 16k pairs of arguments over 32 topics and investigate whether the relation "A is more convincing than B" exhibits properties of total ordering; these findings are used as global constraints for cleaning the crowdsourced data. We propose two tasks: (1) predicting which argument from an argument pair is more convincing and (2) ranking all arguments to the topic based on their convincingness. We experiment with feature-rich SVM and bidirectional LSTM and obtain 0.76-0.78 accuracy and 0.35-0.40 Spearman's correlation in a cross-topic evaluation. We release the newly created corpus UKPConvArg1 and the experimental software under open licenses.
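
Task (1) reduces to binary classification over ordered argument pairs. A hedged sketch with TF-IDF features and a linear SVM standing in for the paper's feature-rich SVM; the toy pairs below are placeholders, not UKPConvArg1 data:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Each pair (a1, a2) becomes one input; label 1 means "a1 is more
# convincing than a2". Placeholder examples, not the released corpus.
pairs = [("We should ban X because ...", "Banning X is wrong since ..."),
         ("Schools need Y, as studies show ...", "Y wastes money ...")]
labels = [1, 0]

texts = [a + " [SEP] " + b for a, b in pairs]
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
# clf.predict(["new argument A [SEP] new argument B"])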

Proceedings ArticleDOI
24 Jul 2016
TL;DR: Manifold Gaussian Processes is a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space, allowing it to learn data representations that are useful for the overall regression task.
Abstract: Off-the-shelf Gaussian Process (GP) covariance functions encode smoothness assumptions on the structure of the function to be modeled. To model complex and non-differentiable functions, these smoothness assumptions are often too restrictive. One way to alleviate this limitation is to find a different representation of the data by introducing a feature space. This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task. In this paper, we propose Manifold Gaussian Processes, a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space. The Manifold GP is a full GP and allows learning data representations that are useful for the overall regression task. As a proof-of-concept, we evaluate our approach on complex non-smooth functions where standard GPs perform poorly, such as step functions and robotics tasks with contacts.
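
A minimal NumPy sketch of the joint objective: the negative log marginal likelihood of a GP whose squared-exponential kernel acts on features from a one-layer tanh network, so the network weights and kernel hyperparameters are optimized together. This captures the idea of the objective, not the authors' exact architecture or training procedure:

import numpy as np
from scipy.optimize import minimize

def manifold_gp_nll(params, X, y, h=8):
    """Negative log marginal likelihood of a GP on features M(X).
    M is a one-layer tanh net whose weights sit in `params` together
    with the log kernel hyperparameters (signal, lengthscale, noise)."""
    d = X.shape[1]
    W = params[:d * h].reshape(d, h)
    b = params[d * h:d * h + h]
    log_s2, log_l2, log_n2 = params[-3:]
    F = np.tanh(X @ W + b)                        # learned feature space
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    K = np.exp(log_s2) * np.exp(-0.5 * sq / np.exp(log_l2))
    K += np.exp(log_n2) * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

# Joint optimization of net weights and GP hyperparameters, e.g.:
# p0 = 0.1 * np.random.randn(X.shape[1] * 8 + 8 + 3)
# res = minimize(manifold_gp_nll, p0, args=(X, y), method="L-BFGS-B")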

Journal ArticleDOI
TL;DR: Different choices of local 3N-operator structures are investigated and it is found that chiral interactions at N(2)LO are able to simultaneously reproduce the properties of A=3,4,5 systems and of neutron matter, in contrast to commonly used phenomenological 3N interactions.
Abstract: We present quantum Monte Carlo calculations of light nuclei, neutron-$\ensuremath{\alpha}$ scattering, and neutron matter using local two- and three-nucleon ($3N$) interactions derived from chiral effective field theory up to next-to-next-to-leading order (${\mathrm{N}}^{2}\mathrm{LO}$). The two undetermined $3N$ low-energy couplings are fit to the $^{4}\mathrm{He}$ binding energy and, for the first time, to the spin-orbit splitting in the neutron-$\ensuremath{\alpha}$ $P$-wave phase shifts. Furthermore, we investigate different choices of local $3N$-operator structures and find that chiral interactions at ${\mathrm{N}}^{2}\mathrm{LO}$ are able to simultaneously reproduce the properties of $A=3,4,5$ systems and of neutron matter, in contrast to commonly used phenomenological $3N$ interactions.

Proceedings ArticleDOI
24 Oct 2016
TL;DR: Control-FLow ATtestation (C-FLAT), as presented in this paper, enables remote attestation of an application's control-flow path without requiring the source code; remote attestation is a crucial security service particularly relevant to increasingly popular IoT (and other embedded) devices.
Abstract: Remote attestation is a crucial security service particularly relevant to increasingly popular IoT (and other embedded) devices. It allows a trusted party (verifier) to learn the state of a remote, and potentially malware-infected, device (prover). Most existing approaches are static in nature and only check whether benign software is initially loaded on the prover. However, they are vulnerable to runtime attacks that hijack the application's control or data flow, e.g., via return-oriented programming or data-oriented exploits. As a concrete step towards more comprehensive runtime remote attestation, we present the design and implementation of Control-FLow ATtestation (C-FLAT) that enables remote attestation of an application's control-flow path, without requiring the source code. We describe a full prototype implementation of C-FLAT on Raspberry Pi using its ARM TrustZone hardware security extensions. We evaluate C-FLAT's performance using a real-world embedded (cyber-physical) application, and demonstrate its efficacy against control-flow hijacking attacks.
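
At its core, such a scheme maintains a cumulative hash over the executed control-flow path and reports it to the verifier, who compares it against measurements precomputed from the legitimate paths of the known binary. A schematic sketch only (the real system instruments ARM binaries and keeps the measurement in TrustZone; the node IDs here are illustrative):

import hashlib

def extend(auth, node_id):
    """Fold one executed control-flow node into the running measurement,
    hash-chain style: auth' = H(auth || node)."""
    return hashlib.sha256(auth + node_id.to_bytes(8, "big")).digest()

# Prover side (conceptually inside the trusted environment): fold every
# executed branch target into the attestation value.
auth = b"\x00" * 32
for node in (0x1000, 0x1010, 0x1040):   # an example executed path
    auth = extend(auth, node)

# Verifier side: accept only if `auth` equals a measurement precomputed
# over a legitimate control-flow path of the known binary.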