
Showing papers by "University of Stuttgart published in 2017"


Journal ArticleDOI
TL;DR: The SSP narratives, as discussed by the authors, are a set of five qualitative descriptions of future changes in demographics, human development, economy and lifestyle, policies and institutions, technology, and environment and natural resources, which can serve as a basis for integrated scenarios of emissions and land use, as well as climate impact, adaptation and vulnerability analyses.
Abstract: Long-term scenarios play an important role in research on global environmental change. The climate change research community is developing new scenarios integrating future changes in climate and society to investigate climate impacts as well as options for mitigation and adaptation. One component of these new scenarios is a set of alternative futures of societal development known as the shared socioeconomic pathways (SSPs). The conceptual framework for the design and use of the SSPs calls for the development of global pathways describing the future evolution of key aspects of society that would together imply a range of challenges for mitigating and adapting to climate change. Here we present one component of these pathways: the SSP narratives, a set of five qualitative descriptions of future changes in demographics, human development, economy and lifestyle, policies and institutions, technology, and environment and natural resources. We describe the methods used to develop the narratives as well as how these pathways are hypothesized to produce particular combinations of challenges to mitigation and adaptation. Development of the narratives drew on expert opinion to (1) identify key determinants of these challenges that were essential to incorporate in the narratives and (2) combine these elements in the narratives in a manner consistent with scholarship on their inter-relationships. The narratives are intended as a description of plausible future conditions at the level of large world regions that can serve as a basis for integrated scenarios of emissions and land use, as well as climate impact, adaptation and vulnerability analyses.

1,606 citations


Journal ArticleDOI
10 Jan 2017-mAbs
TL;DR: There is no ‘one best format’ for generating bispecific antibodies, and no single format is suitable for all, or even most, of the desired applications; however, the bispecific formats collectively serve as a valuable source of diversity that can be applied to the development of therapeutics for various indications.
Abstract: During the past two decades we have seen a phenomenal evolution of bispecific antibodies for therapeutic applications. The ‘zoo’ of bispecific antibodies is populated by many different species, comprising around 100 different formats, including small molecules composed solely of the antigen-binding sites of two antibodies, molecules with an IgG structure, and large complex molecules composed of different antigen-binding moieties often combined with dimerization modules. The application of sophisticated molecular design and genetic engineering has solved many of the technical problems associated with the formation of bispecific antibodies such as stability, solubility and other parameters that confer drug properties. These parameters may be summarized under the term ‘developability’. In addition, different ‘target product profiles’, i.e., desired features of the bispecific antibody to be generated, mandate the need for access to a diverse panel of formats. These may vary in size, arrangement, vale...

591 citations


Journal ArticleDOI
TL;DR: The mechanism proposed by Jackeli and Khaliullin to identify Kitaev materials based on spin-orbital dependent bond interactions is analyzed and a comprehensive overview of its implications in real materials is provided.
Abstract: The exactly solvable Kitaev model on the honeycomb lattice has recently received enormous attention linked to the hope of achieving novel spin-liquid states with fractionalized Majorana-like excitations. In this review, we analyze the mechanism proposed by Jackeli and Khaliullin to identify Kitaev materials based on spin-orbital dependent bond interactions and provide a comprehensive overview of its implications in real materials. We set the focus on experimental results and current theoretical understanding of planar honeycomb systems (Na2IrO3, α-Li2IrO3, and α-RuCl3), three-dimensional Kitaev materials (β- and γ-Li2IrO3), and other potential candidates, completing the review with the list of open questions awaiting new insights.
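For background on the abstract above: the exactly solvable Kitaev model on the honeycomb lattice couples spins through bond-directional Ising-like interactions. In a common convention (signs and prefactors vary in the literature) it reads

H = -\sum_{\langle ij \rangle_\gamma} K_\gamma \, S_i^\gamma S_j^\gamma, \qquad \gamma \in \{x, y, z\},

where \langle ij \rangle_\gamma denotes a nearest-neighbour bond of type \gamma on the honeycomb lattice and K_\gamma is the coupling on that bond type; the Jackeli-Khaliullin mechanism discussed in the review explains how such bond-dependent couplings can arise in spin-orbit-coupled Mott insulators.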

479 citations


Journal ArticleDOI
27 Oct 2017-Science
TL;DR: Inspired by cuticles of marine mussel byssi, sacrificial, reversible iron-catechol cross-links are incorporated into a dry, loosely cross-linked epoxy network, which then exhibits two to three orders of magnitude increases in stiffness, tensile strength, and tensile toughness compared to its iron-free precursor while gaining recoverable hysteretic energy dissipation and maintaining its original extensibility.
Abstract: Materials often exhibit a trade-off between stiffness and extensibility; for example, strengthening elastomers by increasing their cross-link density leads to embrittlement and decreased toughness. Inspired by cuticles of marine mussel byssi, we circumvent this inherent trade-off by incorporating sacrificial, reversible iron-catechol cross-links into a dry, loosely cross-linked epoxy network. The iron-containing network exhibits two to three orders of magnitude increases in stiffness, tensile strength, and tensile toughness compared to its iron-free precursor while gaining recoverable hysteretic energy dissipation and maintaining its original extensibility. Compared to previous realizations of this chemistry in hydrogels, the dry nature of the network enables larger property enhancement owing to the cooperative effects of both the increased cross-link density given by the reversible iron-catecholate complexes and the chain-restricting ionomeric nanodomains that they form.

442 citations


Journal ArticleDOI
TL;DR: First applications such as the detection of proteins, the monitoring of dynamic processes, and hyperspectral infrared chemical imaging are discussed, demonstrating the sensitivity and broad applicability of resonant SEIRA.
Abstract: Infrared spectroscopy is a powerful tool widely used in research and industry for a label-free and unambiguous identification of molecular species. Inconveniently, its application to spectroscopic analysis of minute amounts of materials, for example, in sensing applications, is hampered by the low infrared absorption cross-sections. Surface-enhanced infrared spectroscopy using resonant metal nanoantennas, or short “resonant SEIRA”, overcomes this limitation. Resonantly excited, such metal nanostructures feature collective oscillations of electrons (plasmons), providing huge electromagnetic fields on the nanometer scale. Infrared vibrations of molecules located in these fields are enhanced by orders of magnitude enabling a spectroscopic characterization with unprecedented sensitivity. In this Review, we introduce the concept of resonant SEIRA and discuss the underlying physics, particularly, the resonant coupling between molecular and antenna excitations as well as the spatial extent of the enhancement and...

431 citations


Journal ArticleDOI
TL;DR: In this article, a Monte Carlo approach is proposed to improve the accuracy of SfM-based DEMs and minimise the associated field effort by robust determination of suitable lower-density deployments of ground control.

421 citations


Journal ArticleDOI
TL;DR: This work presents a novel design concept for highly integrated active optical components that employs a combination of resonant plasmonic metasurfaces and the phase-change material Ge3Sb2Te6, and demonstrates beam switching and bifocal lensing.
Abstract: Compact nanophotonic elements exhibiting adaptable properties are essential components for the miniaturization of powerful optical technologies such as adaptive optics and spatial light modulators. While the larger counterparts typically rely on mechanical actuation, this can be undesirable in some cases on a microscopic scale due to inherent space restrictions. Here, we present a novel design concept for highly integrated active optical components that employs a combination of resonant plasmonic metasurfaces and the phase-change material Ge3Sb2Te6. In particular, we demonstrate beam switching and bifocal lensing, thus, paving the way for a plethora of active optical elements employing plasmonic metasurfaces, which follow the same design principles.

313 citations


Journal ArticleDOI
07 Jul 2017-Science
TL;DR: This work combines the use of a quantum memory and high magnetic fields with a dedicated quantum sensor based on nitrogen vacancy centers in diamond to achieve chemical shift resolution in 1H and 19F NMR spectroscopy of 20-zeptoliter sample volumes.
Abstract: Nuclear magnetic resonance (NMR) spectroscopy is a key analytical technique in chemistry, biology, and medicine. However, conventional NMR spectroscopy requires an at least nanoliter-sized sample volume to achieve sufficient signal. We combined the use of a quantum memory and high magnetic fields with a dedicated quantum sensor based on nitrogen vacancy centers in diamond to achieve chemical shift resolution in 1H and 19F NMR spectroscopy of 20-zeptoliter sample volumes. We demonstrate the application of NMR pulse sequences to achieve homonuclear decoupling and spin diffusion measurements. The best measured NMR linewidth of a liquid sample was ~1 part per million, mainly limited by molecular diffusion. To mitigate the influence of diffusion, we performed high-resolution solid-state NMR by applying homonuclear decoupling and achieved a 20-fold narrowing of the NMR linewidth.

292 citations


Journal ArticleDOI
TL;DR: This work shows relationships between low- and high-frequency components of the NAO and masting in two European tree species across multiple decades, and supports the connection between proximate and ultimate causes of masting.
Abstract: Climate teleconnections drive highly variable and synchronous seed production (masting) over large scales. Disentangling the effect of high-frequency (inter-annual variation) from low-frequency (decadal trends) components of climate oscillations will improve our understanding of masting as an ecosystem process. Using century-long observations on masting (the MASTREE database) and data on the Northern Atlantic Oscillation (NAO), we show that in the last 60 years both high-frequency summer and spring NAO, and low-frequency winter NAO components are highly correlated to continent-wide masting in European beech and Norway spruce. Relationships are weaker (non-stationary) in the early twentieth century. This finding improves our understanding on how climate variation affects large-scale synchronization of tree masting. Moreover, it supports the connection between proximate and ultimate causes of masting: indeed, large-scale features of atmospheric circulation coherently drive cues and resources for masting, as well as its evolutionary drivers, such as pollination efficiency, abundance of seed dispersers, and natural disturbance regimes.

278 citations


Journal ArticleDOI
TL;DR: A hierarchical taxonomy of techniques is derived by systematically categorizing and tagging publications and identifying the representation of time as the major distinguishing feature for dynamic graph visualizations.
Abstract: Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between entities in readable, scalable and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publications. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. A bibliographic analysis provides insights into the organization and development of the field and its community. Finally, we identify and discuss challenges for future research. We also provide feedback from experts, collected with a questionnaire, which gives a broad perspective of these challenges and the current state of the field.

276 citations


Proceedings ArticleDOI
22 Mar 2017
TL;DR: In this paper, the authors revisited the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes, and showed that neural networks can learn a form of decoding algorithm, rather than only a simple classifier.
Abstract: We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.
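The normalized validation error (NVE) is only named in the abstract above. The sketch below is a minimal, hedged illustration under the assumption that NVE is the average, over a set of validation SNR points, of the ratio between the neural-network decoder's bit error rate and the MAP decoder's bit error rate; the function name and the numbers are hypothetical, not the paper's code.

import numpy as np

def normalized_validation_error(ber_nnd, ber_map):
    """Hypothetical sketch of a normalized validation error (NVE).

    Assumes the metric is the mean, over S validation SNR points, of
    the ratio between the neural-network decoder's BER and the MAP
    decoder's BER at the same SNR; a value close to 1.0 would then
    indicate near-MAP performance across the validation set.
    """
    ber_nnd = np.asarray(ber_nnd, dtype=float)
    ber_map = np.asarray(ber_map, dtype=float)
    return float(np.mean(ber_nnd / ber_map))

# Hypothetical BER values measured at S = 4 validation SNR points.
ber_nnd = [1.2e-2, 3.5e-3, 8.0e-4, 1.5e-4]  # neural network decoder
ber_map = [1.0e-2, 3.0e-3, 7.0e-4, 1.3e-4]  # MAP decoder
print(normalized_validation_error(ber_nnd, ber_map))  # approx. 1.17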

Journal ArticleDOI
TL;DR: This work systematically studies the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques, and proposes a “human in the loop” process model that provides a general lens for the evaluation of visual interactive DR systems.
Abstract: Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities.
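As a purely illustrative sketch of two of the interaction scenarios named above (selecting relevant features and choosing among several DR algorithms), not the process model proposed in this work: the helper below is hypothetical and uses scikit-learn's PCA and t-SNE only as example backends.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def project(data, feature_indices=None, method="pca", **params):
    """Hypothetical helper: project high-dimensional data to 2D.

    feature_indices -- user-selected subset of columns (interaction
                       scenario: selecting relevant features); None keeps all.
    method          -- 'pca' or 'tsne' (interaction scenario: choosing
                       among several DR algorithms).
    params          -- algorithmic constraints forwarded to the DR
                       technique (e.g. perplexity for t-SNE).
    """
    X = np.asarray(data, dtype=float)
    if feature_indices is not None:
        X = X[:, feature_indices]
    if method == "pca":
        return PCA(n_components=2, **params).fit_transform(X)
    if method == "tsne":
        return TSNE(n_components=2, **params).fit_transform(X)
    raise ValueError(f"unknown DR method: {method}")

# Example: 200 samples with 10 features, projected in two different ways.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 10))
xy_pca = project(data, feature_indices=[0, 1, 2, 5], method="pca")
xy_tsne = project(data, method="tsne", perplexity=30.0)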

Journal ArticleDOI
17 Mar 2017-Science
TL;DR: The detailed spatiotemporal evolution of nanovortices is shown using time-resolved two-photon photoemission electron microscopy and the angular velocity of the vortex is measured to directly extract the OAM magnitude of light.
Abstract: The ability of light to carry and deliver orbital angular momentum (OAM) in the form of optical vortices has attracted much interest. The physical properties of light with a helical wavefront can be confined onto two-dimensional surfaces with subwavelength dimensions in the form of plasmonic vortices, opening avenues for thus far unknown light-matter interactions. Because of their extreme rotational velocity, the ultrafast dynamics of such vortices remained unexplored. Here we show the detailed spatiotemporal evolution of nanovortices using time-resolved two-photon photoemission electron microscopy. We observe both long- and short-range plasmonic vortices confined to deep subwavelength dimensions on the scale of 100 nanometers with nanometer spatial resolution and subfemtosecond time-step resolution. Finally, by measuring the angular velocity of the vortex, we directly extract the OAM magnitude of light.

Journal ArticleDOI
01 Dec 2017
TL;DR: This survey provides an overview of the research field of provenance, focusing on what provenance is used for, what types of provenance have been defined and captured for the different applications, and which resources and system requirements impact the choice of deploying a particular provenance solution.
Abstract: Provenance refers to any information describing the production process of an end product, which can be anything from a piece of digital data to a physical object. While this survey focuses on the former type of end product, this definition still leaves room for many different interpretations of and approaches to provenance. These are typically motivated by different application domains for provenance (e.g., accountability, reproducibility, process debugging) and varying technical requirements such as runtime, scalability, or privacy. As a result, we observe a wide variety of provenance types and provenance-generating methods. This survey provides an overview of the research field of provenance, focusing on what provenance is used for (what for?), what types of provenance have been defined and captured for the different applications (what form?), and which resources and system requirements impact the choice of deploying a particular provenance solution (what from?). For each of these three key questions, we provide a classification and review the state of the art for each class. We conclude with a summary and possible future research challenges.

Journal ArticleDOI
TL;DR: The reasons for concern framework as mentioned in this paper has been widely used in the literature to assess risks in relation to varying levels of climate change, and is a cornerstone of the most recent IPCC assessments.
Abstract: The reasons for concern framework communicates scientific understanding about risks in relation to varying levels of climate change. The framework, now a cornerstone of the IPCC assessments, aggregates global risks into five categories as a function of global mean temperature change. We review the framework's conceptual basis and the risk judgments made in the most recent IPCC report, confirming those judgments in most cases in the light of more recent literature and identifying their limitations. We point to extensions of the framework that offer complementary climate change metrics to global mean temperature change and better account for possible changes in social and ecological system vulnerability. Further research should systematically evaluate risks under alternative scenarios of future climatic and societal conditions.

Journal ArticleDOI
TL;DR: A broad consensus has been reached in the astrochemistry community on how to suitably treat gas-phase processes in models, and also how to present the necessary reaction data in databases; however, no such consensus has yet been reached for grain-surface processes.
Abstract: The cross-disciplinary field of astrochemistry exists to understand the formation, destruction, and survival of molecules in astrophysical environments. Molecules in space are synthesized via a large variety of gas-phase reactions, and reactions on dust-grain surfaces, where the surface acts as a catalyst. A broad consensus has been reached in the astrochemistry community on how to suitably treat gas-phase processes in models, and also on how to present the necessary reaction data in databases; however, no such consensus has yet been reached for grain-surface processes. A team of ~25 experts covering observational, laboratory and theoretical (astro)chemistry met in summer of 2014 at the Lorentz Center in Leiden with the aim to provide solutions for this problem and to review the current state-of-the-art of grain surface models, both in terms of technical implementation into models as well as the most up-to-date information available from experiments and chemical computations. This review builds on the results of this workshop and gives an outlook for future directions.

Journal ArticleDOI
18 May 2017-Nature
TL;DR: This work studies a pair of tunnel-coupled one-dimensional atomic superfluids and characterize the corresponding quantum many-body problem by measuring correlation functions and concludes that in thermal equilibrium this system can be seen as a quantum simulator of the sine-Gordon model, relevant for diverse disciplines ranging from particle physics to condensed matter.
Abstract: Quantum systems can be characterized by their correlations. Higher-order (larger than second order) correlations, and the ways in which they can be decomposed into correlations of lower order, provide important information about the system, its structure, its interactions and its complexity. The measurement of such correlation functions is therefore an essential tool for reading, verifying and characterizing quantum simulations. Although higher-order correlation functions are frequently used in theoretical calculations, so far mainly correlations up to second order have been studied experimentally. Here we study a pair of tunnel-coupled one-dimensional atomic superfluids and characterize the corresponding quantum many-body problem by measuring correlation functions. We extract phase correlation functions up to tenth order from interference patterns and analyse whether, and under what conditions, these functions factorize into correlations of lower order. This analysis characterizes the essential features of our system, the relevant quasiparticles, their interactions and topologically distinct vacua. From our data we conclude that in thermal equilibrium our system can be seen as a quantum simulator of the sine-Gordon model, relevant for diverse disciplines ranging from particle physics to condensed matter. The measurement and evaluation of higher-order correlation functions can easily be generalized to other systems and to study correlations of any other observable such as density, spin and magnetization. It therefore represents a general method for analysing quantum many-body systems from experimental data.
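For context on the abstract above (conventions and prefactors vary, so this is only the generic form of the model, not the paper's exact parametrization), the sine-Gordon Hamiltonian for a field \phi with conjugate momentum \Pi can be written as

H = \int dz \left[ \tfrac{1}{2}\Pi^2 + \tfrac{1}{2}(\partial_z \phi)^2 - \lambda \cos(\beta \phi) \right],

where, for two tunnel-coupled one-dimensional superfluids, \phi plays the role of the relative phase between the condensates and the cosine term originates from the tunnel coupling.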

Journal ArticleDOI
TL;DR: In this paper, the authors performed three variations of decomposition analyses: on the driving forces of carbon emissions from industrial energy consumption between 2003 and 2014, on the driving forces of the carbon intensity of electricity generation, and on the key drivers of CO2 emissions from total fossil fuel combustion.
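As a hedged illustration of the kind of identity on which such index decomposition analyses are typically built (not necessarily the exact formulation used in this article), carbon emissions C can be factorized into multiplicative driving forces, e.g.

C = \sum_i Q \cdot \frac{Q_i}{Q} \cdot \frac{E_i}{Q_i} \cdot \frac{C_i}{E_i},

with total activity Q, the structural share Q_i/Q of sector i, its energy intensity E_i/Q_i, and the carbon intensity C_i/E_i of its energy use; index methods such as LMDI then attribute the change in C between two years to changes in these factors.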

Journal ArticleDOI
TL;DR: This work presents a highly miniaturized camera, mimicking the natural vision of predators, by 3D-printing different multilens objectives directly onto a complementary metal-oxide semiconductor (CMOS) image sensor.
Abstract: We present a highly miniaturized camera, mimicking the natural vision of predators, by 3D-printing different multilens objectives directly onto a complementary metal-oxide semiconductor (CMOS) image sensor. Our system combines four printed doublet lenses with different focal lengths (equivalent to f = 31 to 123 mm for a 35-mm film) in a 2 × 2 arrangement to achieve a full field of view of 70° with an increasing angular resolution of up to 2 cycles/deg field of view in the center of the image. The footprint of the optics on the chip is below 300 μm × 300 μm, whereas their height is <200 μm. Because the four lenses are printed in one single step without the necessity for any further assembling or alignment, this approach allows for fast design iterations and can lead to a plethora of different miniaturized multiaperture imaging systems with applications in fields such as endoscopy, optical metrology, optical sensing, surveillance drones, or security.

Journal ArticleDOI
TL;DR: These new programmable epigenetic editors allow unprecedented control of the DNA methylation status in cells and will lead to further advances in the understanding of epigenetic signaling.
Abstract: DNA methylation plays a critical role in the regulation and maintenance of cell-type specific transcriptional programs. Targeted epigenome editing is an emerging technology to specifically regulate cellular gene expression in order to modulate cell phenotypes or dissect the epigenetic mechanisms involved in their control. In this work, we employed a DNA methyltransferase Dnmt3a–Dnmt3L construct fused to the nuclease-inactivated dCas9 programmable targeting domain to introduce DNA methylation into the human genome specifically at the EpCAM, CXCR4 and TFRC gene promoters. We show that targeting of these loci with single gRNAs leads to efficient and widespread methylation of the promoters. Multiplexing of several guide RNAs does not increase the efficiency of methylation. Peaks of targeted methylation were observed around 25 bp upstream and 40 bp downstream of the PAM site, while 20–30 bp of the binding site itself are protected against methylation. Potent methylation is dependent on the multimerization of Dnmt3a/Dnmt3L complexes on the DNA. Furthermore, the introduced methylation causes transcriptional repression of the targeted genes. These new programmable epigenetic editors allow unprecedented control of the DNA methylation status in cells and will lead to further advances in the understanding of epigenetic signaling.

Journal ArticleDOI
TL;DR: It is demonstrated that the chemotactic behavior of these nanoswimmers, in combination with LRP-1 (low-density lipoprotein receptor–related protein 1) targeting, enables a fourfold increase in penetration to the brain compared to nonchemotactic systems.
Abstract: In recent years, scientists have created artificial microscopic and nanoscopic self-propelling particles, often referred to as nano- or microswimmers, capable of mimicking biological locomotion and taxis. This active diffusion enables the engineering of complex operations that so far have not been possible at the micro- and nanoscale. One of the most promising tasks is the ability to engineer nanocarriers that can autonomously navigate within tissues and organs, accessing nearly every site of the human body guided by endogenous chemical gradients. We report a fully synthetic, organic, nanoscopic system that exhibits attractive chemotaxis driven by enzymatic conversion of glucose. We achieve this by encapsulating glucose oxidase alone or in combination with catalase into nanoscopic and biocompatible asymmetric polymer vesicles (known as polymersomes). We show that these vesicles self-propel in response to an external gradient of glucose by inducing a slip velocity on their surface, which makes them move in an extremely sensitive way toward higher-concentration regions. We finally demonstrate that the chemotactic behavior of these nanoswimmers, in combination with LRP-1 (low-density lipoprotein receptor-related protein 1) targeting, enables a fourfold increase in penetration to the brain compared to nonchemotactic systems.

Journal ArticleDOI
TL;DR: The crystallization of passive silica colloids into well-controlled 2D assemblies, directed by a small number of self-propelled active colloids, is shown, offering the possibility of obtaining structures and assemblies that cannot be found in equilibrium systems.
Abstract: The collective phenomena exhibited by artificial active matter systems present novel routes to fabricating out-of-equilibrium microscale assemblies. Here, the crystallization of passive silica colloids into well-controlled 2D assemblies is shown, which is directed by a small number of self-propelled active colloids. The active colloids are titania-silica Janus particles that are propelled when illuminated by UV light. The strength of the attractive interaction and thus the extent of the assembled clusters can be regulated by the light intensity. A remarkably small number of the active colloids is sufficient to induce the assembly of the dynamic crystals. The approach produces rationally designed colloidal clusters and crystals with controllable sizes, shapes, and symmetries. This multicomponent active matter system offers the possibility of obtaining structures and assemblies that cannot be found in equilibrium systems.

Journal ArticleDOI
TL;DR: In this paper, structural tensors are employed to describe transverse isotropy, orthotropy and cubic anisotropy in fracture phase field models, and the authors demonstrate the performance of the proposed anisotropic fracture model by means of representative numerical examples at small and large deformations.
Abstract: A phase field model of fracture that accounts for anisotropic material behavior at small and large deformations is outlined within this work. Most existing fracture phase field models assume crack evolution within isotropic solids, which is not a meaningful assumption for many natural as well as engineered materials that exhibit orientation-dependent behavior. The incorporation of anisotropy into fracture phase field models is for example necessary to properly describe the typical sawtooth crack patterns in strongly anisotropic materials. In the present contribution, anisotropy is incorporated in fracture phase field models in several ways: (i) Within a pure geometrical approach, the crack surface density function is adopted by a rigorous application of the theory of tensor invariants leading to the definition of structural tensors of second and fourth order. In this work we employ structural tensors to describe transverse isotropy, orthotropy and cubic anisotropy. The latter makes the incorporation of second gradients of the crack phase field necessary, which is treated within the finite element context by a nonconforming Morley triangle. Practically, such a geometric approach manifests itself in the definition of anisotropic effective fracture length scales. (ii) By use of structural tensors, energetic and stress-like failure criteria are modified to account for inherent anisotropies. These failure criteria influence the crack driving force, which enters the crack phase field evolution equation and allows a modular structure to be set up. We demonstrate the performance of the proposed anisotropic fracture phase field model by means of representative numerical examples at small and large deformations.
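To make the geometric approach in (i) more concrete, a common way to anisotropize the crack surface density with a second-order structural tensor is sketched here for the transversely isotropic case (the paper's exact notation and normalization may differ):

\gamma_\ell(d, \nabla d) = \frac{1}{2\ell}\, d^2 + \frac{\ell}{2}\, \nabla d \cdot \boldsymbol{\omega}\, \nabla d, \qquad \boldsymbol{\omega} = \mathbf{1} + \alpha\, \mathbf{a} \otimes \mathbf{a},

where d is the crack phase field, \ell the fracture length scale, \mathbf{a} the preferred material direction and \alpha a penalty parameter that makes cracks crossing \mathbf{a} energetically more expensive; this is one realization of the anisotropic effective fracture length scales mentioned above, while cubic anisotropy additionally requires fourth-order structural tensors acting on second gradients of d.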

Journal ArticleDOI
TL;DR: The challenges and strategies for the widespread adoption of service computing are mapped out, showing clear trends in adoption, and a clear road map for future directions is proposed.
Abstract: Mapping out the challenges and strategies for the widespread adoption of service computing.

Journal ArticleDOI
21 Dec 2017-Entropy
TL;DR: An abstract model is considered in which organisms are decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations.
Abstract: Living organisms from single cells to humans need to adapt continuously to respond to changes in their environment. The process of behavioural adaptation can be thought of as improving decision-making performance according to some utility function. Here, we consider an abstract model of organisms as decision-makers with limited information-processing resources that trade off between maximization of utility and computational costs measured by a relative entropy, in a similar fashion to thermodynamic systems undergoing isothermal transformations. Such systems minimize the free energy to reach equilibrium states that balance internal energy and entropic cost. When there is a fast change in the environment, these systems evolve in a non-equilibrium fashion because they are unable to follow the path of equilibrium distributions. Here, we apply concepts from non-equilibrium thermodynamics to characterize decision-makers that adapt to changing environments under the assumption that the temporal evolution of the utility function is externally driven and does not depend on the decision-maker's action. This allows one to quantify performance loss due to imperfect adaptation in a general manner and, additionally, to find relations for decision-making similar to Crooks' fluctuation theorem and Jarzynski's equality. We provide simulations of several exemplary decision and inference problems in the discrete and continuous domains to illustrate the new relations.
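As background for the utility-versus-information trade-off described above (a standard formulation in information-theoretic bounded rationality; the paper's own notation and sign conventions may differ): a decision-maker with prior behaviour p_0 over actions a chooses a distribution p by maximizing a free-energy-like objective, whose optimum is a softmax of the utility,

F[p] = \sum_a p(a)\, U(a) - \frac{1}{\beta}\, D_{\mathrm{KL}}(p \,\|\, p_0), \qquad p^*(a) = \frac{p_0(a)\, e^{\beta U(a)}}{\sum_{a'} p_0(a')\, e^{\beta U(a')}},

where the Kullback-Leibler term is the computational cost and the inverse temperature \beta encodes the resource constraint; the non-equilibrium analysis in the abstract concerns what happens when U changes too quickly for the decision-maker to track p^*.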

Journal ArticleDOI
TL;DR: The dome-like behaviour around E_G ~ 0, combined with the transport, thermodynamic and optical results, is fully consistent with an excitonic insulator phase in Ta2NiSe5, with the entropy associated with the transition being consistent with a primarily electronic origin.

Journal ArticleDOI
TL;DR: It is shown that this relation holds not only for the long-time limit of fluctuations, as described by large deviation theory, but also for fluctuations on arbitrary finite time scales, which facilitates applying the thermodynamic uncertainty relation to single molecule experiments, for which infinite time scales are not accessible.
Abstract: For fluctuating currents in nonequilibrium steady states, the recently discovered thermodynamic uncertainty relation expresses a fundamental relation between their variance and the overall entropic cost associated with the driving. We show that this relation holds not only for the long-time limit of fluctuations, as described by large deviation theory, but also for fluctuations on arbitrary finite time scales. This generalization facilitates applying the thermodynamic uncertainty relation to single molecule experiments, for which infinite time scales are not accessible. Importantly, often this finite-time variant of the relation allows inferring a bound on the entropy production that is even stronger than the one obtained from the long-time limit. We illustrate the relation for the fluctuating work that is performed by a stochastically switching laser tweezer on a trapped colloidal particle.
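In symbols (with Boltzmann's constant set to one; notation varies), the thermodynamic uncertainty relation referred to above bounds the relative fluctuations of an integrated current X_t by the total entropy production \Sigma_t accumulated up to time t:

\frac{\mathrm{Var}[X_t]}{\langle X_t \rangle^2} \;\geq\; \frac{2}{\Sigma_t},

and the contribution of this paper is that the bound holds at every finite time t, not only in the long-time limit described by large deviation theory.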

Proceedings ArticleDOI
22 Feb 2017
TL;DR: This work partitions the encoding graph into smaller sub-blocks and trains them individually, closely approaching maximum a posteriori (MAP) performance per sub-block; it examines the degradation through partitioning and compares the resulting decoder to state-of-the-art polar decoders such as successive cancellation list and belief propagation decoding.
Abstract: The training complexity of deep learning-based channel decoders scales exponentially with the codebook size and therefore with the number of information bits. Thus, neural network decoding (NND) is currently only feasible for very short block lengths. In this work, we show that the conventional iterative decoding algorithm for polar codes can be enhanced when sub-blocks of the decoder are replaced by neural network (NN) based components. Thus, we partition the encoding graph into smaller sub-blocks and train them individually, closely approaching maximum a posteriori (MAP) performance per sub-block. These blocks are then connected via the remaining conventional belief propagation decoding stage(s). The resulting decoding algorithm is non-iterative and inherently enables a high level of parallelization, while showing a competitive bit error rate (BER) performance. We examine the degradation through partitioning and compare the resulting decoder to state-of-the-art polar decoders such as successive cancellation list and belief propagation decoding.

Journal ArticleDOI
TL;DR: This survey provides an introduction to eye tracking visualization with an overview of existing techniques and identifies challenges that have to be tackled in the future so that visualizations will become even more widely applied in eye tracking research.
Abstract: This survey provides an introduction to eye tracking visualization with an overview of existing techniques. Eye tracking is important for evaluating user behaviour. Analysing eye tracking data is typically done quantitatively, applying statistical methods. However, in recent years, researchers have been increasingly using qualitative and exploratory analysis methods based on visualization techniques. For this state-of-the-art report, we investigated about 110 research papers presenting visualization techniques for eye tracking data. We classified these visualization techniques and identified two main categories: point-based methods and methods based on areas of interest. Additionally, we conducted an expert review asking leading eye tracking experts how they apply visualization techniques in their analysis of eye tracking data. Based on the experts' feedback, we identified challenges that have to be tackled in the future so that visualizations will become even more widely applied in eye tracking research.