
Showing papers from Eindhoven University of Technology published in 2017


Journal ArticleDOI
TL;DR: Evidence is provided for the buffering role of various job resources on the impact of various job demands on burnout, and the future of job demands-resources (JD-R) theory is considered.
Abstract: The job demands-resources (JD-R) model was introduced in the international literature 15 years ago (Demerouti, Bakker, Nachreiner, & Schaufeli, 2001). The model has been applied in thousands of organizations and has inspired hundreds of empirical articles, including 1 of the most downloaded articles of the Journal of Occupational Health Psychology (Bakker, Demerouti, & Euwema, 2005). This article provides evidence for the buffering role of various job resources on the impact of various job demands on burnout. In the present article, we look back on the first 10 years of the JD-R model (2001-2010), and discuss how the model matured into JD-R theory (2011-2016). Moreover, we look at the future of the theory and outline which new issues in JD-R theory are worth investigating. We also discuss practical applications. It is our hope that JD-R theory will continue to inspire researchers and practitioners who want to promote employee well-being and effective organizational functioning.

2,309 citations


Journal ArticleDOI
12 Dec 2017-JAMA
TL;DR: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints.
Abstract: Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Objective: Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin–stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Exposures: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. Results: The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
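The evaluation above centers on receiver operating characteristic (ROC) analysis. As a minimal sketch of how such a slide-level AUC is computed (using scikit-learn; the labels and scores below are invented, not CAMELYON16 data):

    # Slide-level ROC analysis sketch; labels and scores are invented,
    # not CAMELYON16 data.
    from sklearn.metrics import roc_auc_score

    y_true = [0, 0, 1, 1, 0, 1, 0, 1]                    # 1 = slide contains metastases
    y_score = [0.1, 0.4, 0.8, 0.9, 0.3, 0.6, 0.2, 0.7]   # algorithm's per-slide probability

    print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")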

2,116 citations


Journal ArticleDOI
TL;DR: This practical primer with accompanying spreadsheet and R package enables psychologists to easily perform equivalence tests (and power analyses) by setting equivalence bounds based on standardized effect sizes, and provides recommendations for prespecifying equivalence bounds.
Abstract: Scientists should be able to provide support for the absence of a meaningful effect. Currently, researchers often incorrectly conclude an effect is absent based on a nonsignificant result. A widely recommended approach within a frequentist framework is to test for equivalence. In equivalence tests, such as the two one-sided tests (TOST) procedure discussed in this article, an upper and lower equivalence bound is specified based on the smallest effect size of interest. The TOST procedure can be used to statistically reject the presence of effects large enough to be considered worthwhile. This practical primer with accompanying spreadsheet and R package enables psychologists to easily perform equivalence tests (and power analyses) by setting equivalence bounds based on standardized effect sizes and provides recommendations to prespecify equivalence bounds. Extending your statistical tool kit with equivalence tests is an easy way to improve your statistical and theoretical inferences.
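The authors distribute the procedure as a spreadsheet and an R package; the sketch below re-implements only the core TOST logic for two independent means in Python (equal variances assumed, bounds in raw units), so the function name and details are illustrative rather than the package's actual code.

    # Illustrative two one-sided tests (TOST) for two independent means,
    # assuming equal variances; mirrors the logic described in the article,
    # not the authors' R package itself.
    import numpy as np
    from scipy import stats

    def tost_two_sample(x, y, low, high):
        """Test whether the mean difference lies within (low, high)."""
        nx, ny = len(x), len(y)
        diff = np.mean(x) - np.mean(y)
        # pooled standard error (equal-variance assumption)
        sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                     / (nx + ny - 2))
        se = sp * np.sqrt(1 / nx + 1 / ny)
        df = nx + ny - 2
        t_low = (diff - low) / se              # H0: diff <= low
        t_high = (diff - high) / se            # H0: diff >= high
        p_low = 1 - stats.t.cdf(t_low, df)     # one-sided, upper tail
        p_high = stats.t.cdf(t_high, df)       # one-sided, lower tail
        # equivalence is supported only if BOTH one-sided tests reject
        return max(p_low, p_high)

    rng = np.random.default_rng(1)
    p = tost_two_sample(rng.normal(0, 1, 50), rng.normal(0.1, 1, 50), -0.5, 0.5)
    print(f"TOST p = {p:.3f}  (p < .05 supports equivalence within the bounds)")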

1,027 citations


MonographDOI
01 Jan 2017
TL;DR: This text explains why many real-world networks are small worlds and have large fluctuations in their degrees, and why probability theory offers a highly effective way to deal with the complexity of networks, leading us to consider random graphs.
Abstract: This rigorous introduction to network science presents random graphs as models for real-world networks. Such networks have distinctive empirical properties and a wealth of new models have emerged to capture them. Classroom tested for over ten years, this text places recent advances in a unified framework to enable systematic study. Designed for a master's-level course, where students may only have a basic background in probability, the text covers such important preliminaries as convergence of random variables, probabilistic bounds, coupling, martingales, and branching processes. Building on this base - and motivated by many examples of real-world networks, including the Internet, collaboration networks, and the World Wide Web - it focuses on several important models for complex networks and investigates key properties, such as the connectivity of nodes. Numerous exercises allow students to develop intuition and experience in working with the models.
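As a small taste of the models the book studies (assuming the networkx library; the parameters here are arbitrary), a preferential-attachment graph reproduces the two empirical properties named above: large degree fluctuations and small-world distances.

    # Illustrative random-graph experiment (arbitrary parameters; networkx assumed):
    # preferential attachment yields heavy-tailed degrees and short distances.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=2000, m=2, seed=42)
    degrees = [d for _, d in G.degree()]
    print("max degree:", max(degrees), "mean degree:", sum(degrees) / len(degrees))
    # average shortest-path length grows roughly logarithmically in n ("small world")
    print("avg distance:", nx.average_shortest_path_length(G))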

934 citations


Journal ArticleDOI
TL;DR: In this review, the authors discuss the next generation of smart windows based on organic materials that can change their properties by reflecting or transmitting excess solar energy (infrared radiation) in such a way that comfortable indoor temperatures can be maintained throughout the year.
Abstract: Windows are vital elements in the built environment that have a large impact on the energy consumption in indoor spaces, affecting heating and cooling and artificial lighting requirements. Moreover, they play an important role in sustaining human health and well-being. In this review, we discuss the next generation of smart windows based on organic materials which can change their properties by reflecting or transmitting excess solar energy (infrared radiation) in such a way that comfortable indoor temperatures can be maintained throughout the year. Moreover, we place emphasis on windows that maintain transparency in the visible region so that additional energy is not required to retain natural illumination. We discuss a number of ways to fabricate windows that remain as permanent infrared control elements throughout the year as well as windows that can alter transmission properties in the presence of external stimuli like electric fields, temperature and incident light intensity. We also show the potential impact of these windows on energy saving in different climate conditions.

877 citations


Journal ArticleDOI
TL;DR: This article reviews static and dynamic interfacial effects in magnetism, focusing on interfacially-driven magnetic effects and phenomena associated with spin-orbit coupling and intrinsic symmetry breaking at interfaces, identifying the most exciting new scientific results and pointing to promising future research directions.
Abstract: This article reviews static and dynamic interfacial effects in magnetism, focusing on interfacially-driven magnetic effects and phenomena associated with spin-orbit coupling and intrinsic symmetry breaking at interfaces. It provides a historical background and literature survey, but focuses on recent progress, identifying the most exciting new scientific results and pointing to promising future research directions. It starts with an introduction and overview of how basic magnetic properties are affected by interfaces, then turns to a discussion of charge and spin transport through and near interfaces and how these can be used to control the properties of the magnetic layer. Important concepts include spin accumulation, spin currents, spin transfer torque, and spin pumping. An overview is provided of the current state of knowledge and the existing review literature on interfacial effects such as exchange bias, exchange spring magnets, spin Hall effect, oxide heterostructures, and topological insulators. The article highlights recent discoveries of interface-induced magnetism and non-collinear spin textures, non-linear dynamics including spin transfer torque and magnetization reversal induced by interfaces, and interfacial effects in ultrafast magnetization processes.

758 citations


Journal ArticleDOI
TL;DR: The 2017 Plasma Roadmap is the first in a planned series of periodic updates of the Plasma Roadmap, first published by the Journal of Physics D: Applied Physics in 2012.
Abstract: Journal of Physics D: Applied Physics published the first Plasma Roadmap in 2012 consisting of the individual perspectives of 16 leading experts in the various sub-fields of low temperature plasma science and technology. The 2017 Plasma Roadmap is the first update of a planned series of periodic updates of the Plasma Roadmap. The continuously growing interdisciplinary nature of the low temperature plasma field and its equally broad range of applications are making it increasingly difficult to identify major challenges that encompass all of the many sub-fields and applications. This intellectual diversity is ultimately a strength of the field. The current state of the art for the 19 sub-fields addressed in this roadmap demonstrates the enviable track record of the low temperature plasma field in the development of plasmas as an enabling technology for a vast range of technologies that underpin our modern society. At the same time, the many important scientific and technological challenges shared in this roadmap show that the path forward is not only scientifically rich but has the potential to make wide and far reaching contributions to many societal challenges.

677 citations


Journal ArticleDOI
29 Jun 2017-Nature
TL;DR: By incorporating azobenzene derivatives with fast cis-to-trans thermal relaxation into liquid-crystal networks, photoactive polymer films are generated that exhibit continuous, directional, macroscopic mechanical waves under constant light illumination, with a feedback loop that is driven by self-shadowing.
Abstract: Oscillating materials that adapt their shapes in response to external stimuli are of interest for emerging applications in medicine and robotics. For example, liquid-crystal networks can be programmed to undergo stimulus-induced deformations in various geometries, including in response to light. Azobenzene molecules are often incorporated into liquid-crystal polymer films to make them photoresponsive; however, in most cases only the bending responses of these films have been studied, and relaxation after photo-isomerization is rather slow. Modifying the core or adding substituents to the azobenzene moiety can lead to marked changes in photophysical and photochemical properties, providing an opportunity to circumvent the use of a complex set-up that involves multiple light sources, lenses or mirrors. Here, by incorporating azobenzene derivatives with fast cis-to-trans thermal relaxation into liquid-crystal networks, we generate photoactive polymer films that exhibit continuous, directional, macroscopic mechanical waves under constant light illumination, with a feedback loop that is driven by self-shadowing. We explain the mechanism of wave generation using a theoretical model and numerical simulations, which show good qualitative agreement with our experiments. We also demonstrate the potential application of our photoactive films in light-driven locomotion and self-cleaning surfaces, and anticipate further applications in fields such as photomechanical energy harvesting and miniaturized transport.

648 citations


Journal ArticleDOI
TL;DR: A mathematical model is introduced that incorporates the pertinent optical and physiological properties of skin reflections, with the objective of increasing our understanding of the algorithmic principles behind remote photoplethysmography (rPPG).
Abstract: This paper introduces a mathematical model that incorporates the pertinent optical and physiological properties of skin reflections with the objective to increase our understanding of the algorithmic principles behind remote photoplethysmography (rPPG). The model is used to explain the different choices that were made in existing rPPG methods for pulse extraction. The understanding that comes from the model can be used to design robust or application-specific rPPG solutions. We illustrate this by designing an alternative rPPG method, where a projection plane orthogonal to the skin tone is used for pulse extraction. A large benchmark on the various discussed rPPG methods shows that their relative merits can indeed be understood from the proposed model.
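A minimal numpy sketch of the projection idea described above (a plane orthogonal to the skin tone, with temporal normalization and overlap-add); the window length, projection matrix convention, and synthetic input are simplified illustrations, not the paper's exact algorithm.

    # Sketch of pulse extraction via a projection plane orthogonal to the
    # skin tone; simplified relative to the paper, synthetic input.
    import numpy as np

    def pos_pulse(rgb, fs, win_sec=1.6):
        """rgb: (N, 3) mean skin-pixel color per frame; returns pulse signal (N,)."""
        n = rgb.shape[0]
        w = int(win_sec * fs)
        P = np.array([[0.0, 1.0, -1.0],
                      [-2.0, 1.0, 1.0]])       # plane orthogonal to the skin tone
        h = np.zeros(n)
        for t in range(n - w + 1):
            block = rgb[t:t + w]
            cn = block / block.mean(axis=0)     # temporal normalization
            s = cn @ P.T                        # project onto the plane
            p = s[:, 0] + (s[:, 0].std() / s[:, 1].std()) * s[:, 1]  # alpha tuning
            h[t:t + w] += p - p.mean()          # overlap-add
        return h

    fs = 30.0
    t = np.arange(0, 10, 1 / fs)
    # synthetic skin-colored trace with a small pulsatile component at 1.2 Hz
    rgb = np.outer(np.ones_like(t), [0.77, 0.51, 0.38])
    rgb += 0.003 * np.outer(np.sin(2 * np.pi * 1.2 * t), [0.33, 0.77, 0.53])
    pulse = pos_pulse(rgb, fs)
    f = np.fft.rfftfreq(len(pulse), 1 / fs)
    spec = np.abs(np.fft.rfft(pulse))
    print("dominant frequency:", f[1:][spec[1:].argmax()], "Hz")  # expect ~1.2 Hz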

639 citations


Journal ArticleDOI
TL;DR: An overview of the Gland Segmentation in Colon Histology Images Challenge Contest (GlaS), held at MICCAI 2015, is provided, along with method descriptions and evaluation results for the top-performing methods.

574 citations


Journal ArticleDOI
TL;DR: A comprehensive survey of the characteristics that define and differentiate the types of multiple instance learning (MIL) problems, providing insight into how these characteristics affect MIL algorithms, recommendations for future benchmarking, and promising avenues for research.

Journal ArticleDOI
29 Jun 2017
TL;DR: In this paper, the authors define a set of attributes through a literature review, use it to describe selected Mobility as a Service (MaaS) schemes and existing applications, and examine the potential implications of the identified core characteristics of the service for three areas of transport practice: travel demand modelling, supply-side analysis, and business model design.
Abstract: Mobility as a Service (MaaS) is a recent innovative transport concept, anticipated to induce significant changes in current transport practices. However, there is ambiguity surrounding the concept; it is uncertain what the core characteristics of MaaS are and in which way they can be addressed. Further, an assessment framework for classifying their unique characteristics in a systematic manner is lacking, even though several MaaS schemes have been implemented around the world. In this study, we define this set of attributes through a literature review, which is then used to describe selected MaaS schemes and existing applications. We also examine the potential implications of the identified core characteristics of the service for the following three areas of transport practice: travel demand modelling, supply-side analysis, and business model design. Finally, we propose the enhancements needed to deliver an innovative service such as MaaS, by establishing the state of the art in those fields.

Journal ArticleDOI
TL;DR: A new cluster-first route-second heuristic is proposed, in which a polynomial-size clustering problem simultaneously considers service-level feasibility and approximate routing costs; experiments show that it outperforms a pure mixed-integer programming formulation and a constraint programming approach.
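The paper's clustering formulation and service-level constraints are not reproduced here; the sketch below only illustrates the generic cluster-first route-second idea on hypothetical data (k-means clusters, then a nearest-neighbour route within each cluster).

    # Generic cluster-first route-second illustration (hypothetical data);
    # not the paper's polynomial-size clustering problem.
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(0)
    depot = np.array([0.5, 0.5])
    customers = rng.random((40, 2))

    # Cluster first: assign customers to vehicles.
    _, labels = kmeans2(customers, k=4, seed=0)

    # Route second: nearest-neighbour tour within each cluster.
    for v in range(4):
        pts = customers[labels == v]
        route, pos = [], depot
        remaining = list(range(len(pts)))
        while remaining:
            nxt = min(remaining, key=lambda i: np.linalg.norm(pts[i] - pos))
            route.append(nxt)
            pos = pts[nxt]
            remaining.remove(nxt)
        print(f"vehicle {v}: visits {len(route)} customers")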

Journal ArticleDOI
05 Apr 2017
TL;DR: The authors argue that the assumption of equal variances will seldom hold in psychological research, and that choosing between Student's t-test and Welch's t-test based on the outcome of a test of the equality of variances often fails to provide an appropriate answer.
Abstract: When comparing two independent groups, psychology researchers commonly use Student's t-tests. Assumptions of normality and homogeneity of variance underlie this test. When these assumptions are not met, Student's t-test can be severely biased and lead to invalid statistical inferences. Moreover, we argue that the assumption of equal variances will seldom hold in psychological research, and choosing between Student's t-test and Welch's t-test based on the outcomes of a test of the equality of variances often fails to provide an appropriate answer. We show that Welch's t-test provides better control of Type I error rates when the assumption of homogeneity of variance is not met, and it loses little robustness compared to Student's t-test when the assumptions are met. We argue that Welch's t-test should be used as a default strategy.
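In common statistical software the recommended default is one argument away; a minimal scipy example with illustrative data:

    # Welch's t-test (unequal variances) vs Student's t-test in scipy;
    # the data below are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    a = rng.normal(0.0, 1.0, 30)   # group 1
    b = rng.normal(0.3, 2.5, 90)   # group 2: different variance and size

    t_student, p_student = stats.ttest_ind(a, b, equal_var=True)
    t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)  # recommended default
    print(f"Student: p = {p_student:.3f}   Welch: p = {p_welch:.3f}")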

Journal ArticleDOI
TL;DR: In this paper, a desktop 3D printer was used to print carbon nanotube (CNT)- and graphene-based polybutylene terephthalate (PBT) structures, which were assessed for electrical conductivity and mechanical stability.

Journal ArticleDOI
TL;DR: In this article, a complete and up-to-date overview of demand response (DR) enabling technologies, programs and consumer response types is presented, as well as the benefits and the drivers that have motivated the adoption of DR programs and the barriers that may hinder their further development.
Abstract: The increasing penetration of renewable energy sources (RES) in power systems intensifies the need of enhancing the flexibility in grid operations in order to accommodate the uncertain power output of the leading RES such as wind and solar generation. Utilities have been recently showing increasing interest in developing Demand Response (DR) programs in order to match generation and demand in a more efficient way. Incentive- and price-based DR programs aim at enabling the demand side in order to achieve a range of operational and economic advantages, towards developing a more sustainable power system structure. The contribution of the presented study is twofold. First, a complete and up-to-date overview of DR enabling technologies, programs and consumer response types is presented. Furthermore, the benefits and the drivers that have motivated the adoption of DR programs, as well as the barriers that may hinder their further development, are thoroughly discussed. Second, the international DR status quo is identified by extensively reviewing existing programs in different regions.

Book ChapterDOI
12 Jun 2017
TL;DR: In this paper, Long Short-Term Memory (LSTM) neural networks are used to predict the next event of a running case, its timestamp, and the remaining time of the case.
Abstract: Predictive business process monitoring methods exploit logs of completed cases of a process in order to make predictions about running cases thereof. Existing methods in this space are tailor-made for specific prediction tasks. Moreover, their relative accuracy is highly sensitive to the dataset at hand, thus requiring users to engage in trial-and-error and tuning when applying them in a specific setting. This paper investigates Long Short-Term Memory (LSTM) neural networks as an approach to build consistently accurate models for a wide range of predictive process monitoring tasks. First, we show that LSTMs outperform existing techniques to predict the next event of a running case and its timestamp. Next, we show how to use models for predicting the next task in order to predict the full continuation of a running case. Finally, we apply the same approach to predict the remaining time, and show that this approach outperforms existing tailor-made methods.
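A minimal sketch of the kind of architecture investigated here (PyTorch assumed; the layer sizes and the joint next-activity/timestamp heads are illustrative choices, not the authors' exact configuration):

    # LSTM sketch for predictive process monitoring (PyTorch assumed);
    # sizes and heads are illustrative, not the authors' configuration.
    import torch
    import torch.nn as nn

    class NextEventLSTM(nn.Module):
        def __init__(self, n_activities, emb=32, hidden=64):
            super().__init__()
            self.emb = nn.Embedding(n_activities, emb)
            # +1 input feature: elapsed time since the previous event
            self.lstm = nn.LSTM(emb + 1, hidden, batch_first=True)
            self.next_act = nn.Linear(hidden, n_activities)  # next-activity head
            self.next_dt = nn.Linear(hidden, 1)              # timestamp head

        def forward(self, acts, dts):
            x = torch.cat([self.emb(acts), dts.unsqueeze(-1)], dim=-1)
            out, _ = self.lstm(x)
            h = out[:, -1]                     # last state of the running prefix
            return self.next_act(h), self.next_dt(h)

    model = NextEventLSTM(n_activities=10)
    acts = torch.randint(0, 10, (8, 5))        # batch of 8 prefixes, length 5
    dts = torch.rand(8, 5)                     # normalized inter-event times
    logits, dt_pred = model(acts, dts)
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,))) \
         + nn.L1Loss()(dt_pred.squeeze(-1), torch.rand(8))

Predicting the full case continuation, as described above, amounts to feeding each predicted next event back in as input until an end-of-case symbol is produced.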

Journal ArticleDOI
TL;DR: A novel event-triggered control (ETC) strategy for a class of nonlinear feedback systems is proposed that can simultaneously guarantee a finite Lp-gain and a strictly positive lower bound on the inter-event times.
Abstract: Networked control systems are often subject to limited communication resources. By only communicating output measurements when needed, event-triggered control is an adequate method to reduce the usage of communication resources while retaining desired closed-loop performance. In this work, a novel event-triggered control (ETC) strategy for a class of nonlinear feedback systems is proposed that can simultaneously guarantee a finite $\mathcal{L}_p$-gain and a strictly positive lower bound on the inter-event times. The new ETC scheme can be synthesized in an output-based and/or decentralized form, takes the specific medium access protocols into account, and is robust to (variable) transmission delays by design. Interestingly, in contrast with the majority of existing event-generators that only use static conditions, the newly proposed event-triggering conditions are based on dynamic elements, which has several advantages including larger average inter-event times. The developed theory leads to families of event-triggered controllers that correspond to different tradeoffs between (minimum and average) inter-event times, maximum allowable delays and $\mathcal{L}_p$-gains. A linear and a nonlinear numerical example will illustrate all the benefits of this new dynamic ETC scheme.
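To make the contrast between static and dynamic triggering concrete, the simulation sketch below uses a generic linear plant and a textbook-style dynamic triggering rule with an auxiliary variable eta; it is not the paper's nonlinear Lp-gain design, and all constants are invented.

    # Event-triggered control simulation sketch: generic linear plant,
    # textbook-style dynamic trigger; not the paper's design.
    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, 3.0]])
    B = np.array([[0.0], [1.0]])
    K = np.array([[1.0, -4.0]])                # stabilizing state feedback
    dt, sigma, lam, theta = 1e-3, 0.05, 1.0, 1.0

    x = np.array([1.0, 0.0])
    xk = x.copy()                              # last transmitted state
    eta = 0.1                                  # dynamic trigger variable
    events = 0
    for _ in range(int(10 / dt)):
        e = xk - x                             # measurement error since last event
        w = sigma * x @ x - e @ e
        if eta + theta * w < 0:                # dynamic triggering condition
            xk = x.copy()                      # transmit: reset the error
            events += 1
            w = sigma * x @ x
        u = K @ xk                             # controller uses the held state
        x = x + dt * (A @ x + B.flatten() * u) # Euler step of the plant
        eta = eta + dt * (-lam * eta + w)      # trigger dynamics
    print(f"events: {events}, final |x| = {np.linalg.norm(x):.4f}")

As the abstract notes, the dynamic variable lets the error budget accumulate between events, which tends to yield larger average inter-event times than a purely static condition.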

Journal ArticleDOI
TL;DR: The ultimate goal here is to assemble a fully man-made cell that displays functionality and adaptivity as advanced as that found in nature, which will not only provide insight into the fundamental processes in natural cells but also pave the way for new applications of such artificial cells.
Abstract: Conspectus: Cells are highly advanced microreactors that form the basis of all life. Their fascinating complexity has inspired scientists to create analogs from synthetic and natural components using a bottom-up approach. The ultimate goal here is to assemble a fully man-made cell that displays functionality and adaptivity as advanced as that found in nature, which will not only provide insight into the fundamental processes in natural cells but also pave the way for new applications of such artificial cells. In this Account, we highlight our recent work and that of others on the construction of artificial cells. First, we will introduce the key features that characterize a living system; next, we will discuss how these have been imitated in artificial cells. First, compartmentalization is crucial to separate the inner chemical milieu from the external environment. Current state-of-the-art artificial cells comprise subcompartments to mimic the hierarchical architecture of eukaryotic cells and tissue. Further...

Journal ArticleDOI
TL;DR: A review of journal publications on CFD studies of urban microclimate up to the end of 2015, which suggests that results from CFD simulations can be linked with different aspects and scales, so that CFD can play an important role in transferring urban climate knowledge into engineering and design practice.
Abstract: Urban microclimate studies are gaining popularity due to rapid urbanization. Many studies documented that urban microclimate can affect building energy performance, human morbidity and mortality and thermal comfort. Historically, urban microclimate studies were conducted with observational methods such as field measurements. In the last decades, with the advances in computational resources, numerical simulation approaches have become increasingly popular. Nowadays, especially simulations with Computational Fluid Dynamics (CFD) are frequently used to assess urban microclimate. CFD can resolve the transfer of heat and mass and their interaction with individual obstacles such as buildings. Considering the rapid increase in CFD studies of urban microclimate, this paper provides a review of research reported in journal publications on this topic until the end of 2015. The studies are categorized based on the following characteristics: morphology of the urban area (generic versus real) and methodology (with or without validation study). In addition, the studies are categorized by specifying the considered urban settings/locations, simulation equations and models, target parameters and keywords. This review documents the increasing popularity of the research area over the years. Based on the data obtained concerning the urban location, target parameters and keywords, the historical development of the studies is discussed and future perspectives are provided. According to the results, early CFD microclimate studies were conducted for model development and later studies considered the CFD approach as a predictive methodology. Later, with established simulation setups, research efforts shifted to case studies. Recently, an increasing number of studies focus on urban-scale adaptation measures. The review hints at a possible change in this trend as the results from CFD simulations can be linked up with different aspects (e.g. economy) and with different scales (e.g. buildings), and thus CFD can play an important role in transferring urban climate knowledge into engineering and design practice.

Journal ArticleDOI
TL;DR: A state-of-the-art platform for predictive image-based, multiscale modeling with co-designed simulations and experiments, executing on the world's largest supercomputers, is discussed; it can form the basis of Virtual Materials Testing standards and aid in the development of new material formulations.

Journal ArticleDOI
TL;DR: This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art.
Abstract: It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.

Journal ArticleDOI
TL;DR: Going beyond equilibrium polymerization is an exciting new direction in the field of supramolecular chemistry; the preparation protocol and mechanistic insights make it possible to identify each type of self-assembled state.
Abstract: Supramolecular polymerization has been traditionally focused on the thermodynamic equilibrium state, where one-dimensional assemblies reside at the global minimum of the Gibbs free energy. The pathway and rate to reach the equilibrium state are irrelevant, and the resulting assemblies remain unchanged over time. In the past decade, the focus has shifted to kinetically trapped (non-dissipative non-equilibrium) structures that heavily depend on the method of preparation (i.e., pathway complexity), and where the assembly rates are of key importance. Kinetic models have greatly improved our understanding of competing pathways, and shown how to steer supramolecular polymerization in the desired direction (i.e., pathway selection). The most recent innovation in the field relies on energy or mass input that is dissipated to keep the system away from the thermodynamic equilibrium (or from other non-dissipative states). This tutorial review aims to provide the reader with a set of tools to identify different types of self-assembled states that have been explored so far. In particular, we aim to clarify the often unclear use of the term “non-equilibrium self-assembly” by subdividing systems into dissipative, and non-dissipative non-equilibrium states. Examples are given for each of the states, with a focus on non-dissipative non-equilibrium states found in one-dimensional supramolecular polymerization.
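Kinetic models of competing pathways of the kind discussed above can be written as a few coupled rate equations. The sketch below integrates a deliberately oversimplified two-pathway scheme in which monomer M feeds both a kinetically trapped aggregate B (fast, reversible) and a thermodynamically favored polymer A (slow, nearly irreversible); all rate constants are invented for illustration.

    # Oversimplified competing-pathway kinetics; all rate constants invented.
    import numpy as np
    from scipy.integrate import solve_ivp

    kA, kA_off = 0.1, 0.001     # slow growth, very slow dissociation -> A wins at equilibrium
    kB, kB_off = 1.0, 0.05      # fast growth, faster dissociation -> B is the kinetic trap

    def rates(t, y):
        m, a, b = y
        da = kA * m - kA_off * a
        db = kB * m - kB_off * b
        return [-(da + db), da, db]

    sol = solve_ivp(rates, (0, 500), [1.0, 0.0, 0.0], dense_output=True)
    for t in (1, 10, 500):
        m, a, b = sol.sol(t)
        print(f"t={t:4}: trapped B={b:.2f}, equilibrium A={a:.2f}")

Early on the trapped state B dominates (pathway complexity); at long times mass flows through the free monomer into A, the global minimum, illustrating why preparation protocol determines which state is observed.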

Journal ArticleDOI
TL;DR: A nano-opto-electro-mechanical system where the functionalities of transduction, actuation and detection are fully integrated, resulting in an ultra-compact high-resolution spectrometer with a micrometer-scale footprint is presented.
Abstract: Spectrometry is widely used for the characterization of materials, tissues, and gases, and the need for size and cost scaling is driving the development of mini and microspectrometers. While nanophotonic devices provide narrowband filtering that can be used for spectrometry, their practical application has been hampered by the difficulty of integrating tuning and read-out structures. Here, a nano-opto-electro-mechanical system is presented where the three functionalities of transduction, actuation, and detection are integrated, resulting in a high-resolution spectrometer with a micrometer-scale footprint. The system consists of an electromechanically tunable double-membrane photonic crystal cavity with an integrated quantum dot photodiode. Using this structure, we demonstrate a resonance modulation spectroscopy technique that provides subpicometer wavelength resolution. We show its application in the measurement of narrow gas absorption lines and in the interrogation of fiber Bragg gratings. We also explore its operation as displacement-to-photocurrent transducer, demonstrating optomechanical displacement sensing with integrated photocurrent read-out.

Journal ArticleDOI
TL;DR: As described in this paper, the International Energy Agency (IEA) Energy in Buildings and Communities (EBC) Programme Annex 66 has established a scientific methodological framework for occupant behavior research, including data collection, behavior model representation, modeling and evaluation approaches, and the integration of behavior modeling tools with building performance simulation programs.

Journal ArticleDOI
TL;DR: It is shown that the proposed ETC scheme, if well designed, can tolerate a class of DoS signals characterized by frequency and duration properties without jeopardizing the stability, performance and Zeno-freeness of the ETC system.
Abstract: In this paper, we propose a systematic design framework for output-based dynamic event-triggered control (ETC) systems under denial-of-service (DoS) attacks. These malicious DoS attacks are intended to interfere with the communication channel causing periods in time at which transmission of measurement data is impossible. We show that the proposed ETC scheme, if well designed, can tolerate a class of DoS signals characterized by frequency and duration properties without jeopardizing the stability, performance and Zeno-freeness of the ETC system. In fact, the design procedure of the ETC condition allows tradeoffs between performance, robustness to DoS attacks, and utilization of communication resources. The main results will be illustrated by means of a numerical example.

Journal ArticleDOI
24 Aug 2017-Nature
TL;DR: A technique is demonstrated for generic bottom-up synthesis of complex quantum devices, with a special focus on nanowire networks with a predefined number of superconducting islands, opening up new avenues for the realization of epitaxial three-dimensional quantum architectures that have the potential to become key components of various quantum devices.
Abstract: Semiconductor nanowires are ideal for realizing various low-dimensional quantum devices. In particular, topological phases of matter hosting non-Abelian quasiparticles (such as anyons) can emerge when a semiconductor nanowire with strong spin-orbit coupling is brought into contact with a superconductor. To exploit the potential of non-Abelian anyons - which are key elements of topological quantum computing - fully, they need to be exchanged in a well-controlled braiding operation. Essential hardware for braiding is a network of crystalline nanowires coupled to superconducting islands. Here we demonstrate a technique for generic bottom-up synthesis of complex quantum devices with a special focus on nanowire networks with a predefined number of superconducting islands. Structural analysis confirms the high crystalline quality of the nanowire junctions, as well as an epitaxial superconductor-semiconductor interface. Quantum transport measurements of nanowire 'hashtags' reveal Aharonov-Bohm and weak-antilocalization effects, indicating a phase-coherent system with strong spin-orbit coupling. In addition, a proximity-induced hard superconducting gap (with vanishing sub-gap conductance) is demonstrated in these hybrid superconductor-semiconductor nanowires, highlighting the successful materials development necessary for a first braiding experiment. Our approach opens up new avenues for the realization of epitaxial three-dimensional quantum architectures which have the potential to become key components of various quantum devices.

Journal ArticleDOI
31 Dec 2017
TL;DR: This paper formulates neuronal processing as belief propagation under deep generative models that can entertain both discrete and continuous states, leading to distinct schemes for belief updating that play out on the same (neuronal) architecture.
Abstract: This paper considers functional integration in the brain from a computational perspective. We ask what sort of neuronal message passing is mandated by active inference—and what implications this has for context-sensitive connectivity at microscopic and macroscopic levels. In particular, we formulate neuronal processing as belief propagation under deep generative models. Crucially, these models can entertain both discrete and continuous states, leading to distinct schemes for belief updating that play out on the same (neuronal) architecture. Technically, we use Forney (normal) factor graphs to elucidate the requisite message passing in terms of its form and scheduling. To accommodate mixed generative models (of discrete and continuous states), one also has to consider link nodes or factors that enable discrete and continuous representations to talk to each other. When mapping the implicit computational architecture onto neuronal connectivity, several interesting features emerge. For example, Bayesian model averaging and comparison, which link discrete and continuous states, may be implemented in thalamocortical loops. These and other considerations speak to a computational connectome that is inherently state dependent and self-organizing in ways that yield to a principled (variational) account. We conclude with simulations of reading that illustrate the implicit neuronal message passing, with a special focus on how discrete (semantic) representations inform, and are informed by, continuous (visual) sampling of the sensorium.
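At the discrete-state level, the belief updating described above reduces to combining a log prior with a log likelihood and renormalizing; the toy example below uses invented numbers and is not the paper's reading simulation.

    # Toy discrete belief update of the kind used in active inference:
    # posterior ∝ softmax(log prior + log likelihood). Invented numbers.
    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    prior = np.array([0.5, 0.3, 0.2])          # beliefs over 3 hidden (semantic) states
    likelihood = np.array([0.1, 0.7, 0.2])     # P(observed visual sample | state)

    posterior = softmax(np.log(prior) + np.log(likelihood))
    print(posterior)   # belief shifts toward state 2 after the observation

    # For Bayesian model comparison/averaging, the log evidence log P(o)
    # of each candidate model is computed and compared; here for one model:
    log_evidence = np.log(prior @ likelihood)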

Journal ArticleDOI
TL;DR: The main focus is on the implications of dependencies for the structure of the optimal condition-based maintenance (CBM) policy; the review of advances in CBM is linked to practice through real-life examples, stressing current gaps in the literature.

Journal ArticleDOI
TL;DR: This review shows the translation of one-dimensional supramolecular polymers into multi-component functional biomaterials for regenerative medicine applications.
Abstract: The most striking and general property of the biological fibrous architectures in the extracellular matrix (ECM) is the strong and directional interaction between biologically active protein subunits. These fibers display rich dynamic behavior without losing their architectural integrity. The complexity of the ECM taking care of many essential properties has inspired synthetic chemists to mimic these properties in artificial one-dimensional fibrous structures with the aim to arrive at multi-component biomaterials. Due to the dynamic character required for interaction with natural tissue, supramolecular biomaterials are promising candidates for regenerative medicine. Depending on the application area, and thereby the design criteria of these multi-component fibrous biomaterials, they are used as elastomeric materials or hydrogel systems. Elastomeric materials are designed to have load bearing properties whereas hydrogels are proposed to support in vitro cell culture. Although the chemical structures and systems designed and studied today are rather simple compared to the complexity of the ECM, the first examples of these functional supramolecular biomaterials reaching the clinic have been reported. The basic concept of many of these supramolecular biomaterials is based on their ability to adapt to cell behavior as a result of dynamic non-covalent interactions. In this review, we show the translation of one-dimensional supramolecular polymers into multi-component functional biomaterials for regenerative medicine applications.