Showing papers from the University of Paderborn, published in 2019
••
TL;DR: This paper provides an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
Abstract: The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
421 citations
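The aleatoric/epistemic distinction mentioned above has a common operational form for ensembles: the entropy of the averaged prediction (total uncertainty) splits into the average per-member entropy (aleatoric) plus the members' disagreement (epistemic, a mutual information). A minimal numerical sketch with made-up probabilities; this is one standard formalization, not necessarily the specific one surveyed in the paper:

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats along `axis`; p must sum to 1 there."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

# Hypothetical class probabilities from a 3-member ensemble for one input.
member_probs = np.array([
    [0.9, 0.1],
    [0.5, 0.5],
    [0.1, 0.9],
])

total = entropy(member_probs.mean(axis=0))   # entropy of the mean prediction
aleatoric = entropy(member_probs).mean()     # mean per-member entropy
epistemic = total - aleatoric                # disagreement between members
print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```

Here the members disagree strongly, so a large share of the total uncertainty is epistemic; if all members predicted the same distribution, the epistemic term would vanish.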
••
TL;DR: A hybrid integrated quantum photonic system that is capable of entangling and disentangling two-photon spin states at a dielectric metasurface, providing a promising way to develop hybrid-integrated quantum technology operating in the high-dimensional mode space for various applications, such as imaging, sensing, and computing.
Abstract: Optical metasurfaces open new avenues for the precise wavefront control of light for integrated quantum technology. Here, we demonstrate a hybrid integrated quantum photonic system that is capable of entangling and disentangling two-photon spin states at a dielectric metasurface. Via the interference of single-photon pairs at a nanostructured dielectric metasurface, a path-entangled two-photon NOON state with circular polarization that exhibits a quantum HOM interference visibility of 86 ± 4% is generated. Furthermore, we demonstrate nonclassicality and phase sensitivity in a metasurface-based interferometer with a fringe visibility of 86.8 ± 1.1% in the coincidence counts. This high visibility proves the metasurface-induced path entanglement inside the interferometer. Our findings provide a promising way to develop hybrid-integrated quantum technology operating in the high-dimensional mode space in various applications, such as imaging, sensing, and computing. Scientists have developed an optical metasurface capable of entangling and disentangling photon-pairs, providing a path for the development of quantum technologies for applications in computing, imaging, and sensing. Optical metasurfaces are sub-wavelength layers of nanostructures capable of precisely controlling the properties of light. They offer the promise of new miniaturized quantum systems, yet they remain largely unexplored in this context. Now Thomas Zentgraf and colleagues from the University of Paderborn in Germany, working with researchers from the University of Stuttgart and the Southern University of Science and Technology in China, have developed a nanostructured dielectric metasurface capable of entangling and disentangling the spin states of a photon-pair. Quantum interference of the photons on the metasurface produces a circularly polarized entangled photon-pair, which can be disentangled by passing it through the metasurface a second time.
294 citations
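The visibility figures quoted above follow the usual fringe-contrast definition. A toy calculation with invented coincidence counts (not the paper's data), chosen so the result lands near the reported 86.8%:

```python
# Fringe visibility from coincidence counts: V = (C_max - C_min) / (C_max + C_min).
# The counts below are made-up round numbers, not measured data.
c_max, c_min = 1000, 70
visibility = (c_max - c_min) / (c_max + c_min)
print(f"V = {visibility:.3f}")
```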
••
TL;DR: This second release of i-PI not only includes several new advanced path integral methods but also offers other classes of algorithms, moving towards a universal force engine that is both modular and tightly coupled to the driver codes that evaluate the potential energy surface and its derivatives.
238 citations
••
TL;DR: It is concluded that smart service systems are characterized by technology-mediated, continuous, and routinized interactions.
Abstract: Recent years have seen the emergence of physical products that are digitally networked with other products and with information systems to enable complex business scenarios in manufacturing, mobility, or healthcare. These “smart products”, which enable the co-creation of “smart service” that is based on monitoring, optimization, remote control, and autonomous adaptation of products, profoundly transform service systems into what we call “smart service systems”. In a multi-method study that includes conceptual research and qualitative data from in-depth interviews, we conceptualize “smart service” and “smart service systems” based on using smart products as boundary objects that integrate service consumers’ and service providers’ resources and activities. Smart products allow both actors to retrieve and to analyze aggregated field evidence and to adapt service systems based on contextual data. We discuss the implications that the introduction of smart service systems has for foundational concepts of service science and conclude that smart service systems are characterized by technology-mediated, continuous, and routinized interactions.
223 citations
••
TL;DR: In this article, a review of recent developments in the field of metasurface holography is presented, from the classification of metasurfaces to the design strategies for both free-space and surface waves.
Abstract: Holography has emerged as a vital approach to fully engineer the wavefronts of light since its invention dating back to the last century. However, the typically large pixel size, small field of view and limited space-bandwidth impose limitations in the on-demand high-performance applications, especially for three-dimensional displays and large-capacity data storage. Meanwhile, metasurfaces have shown great potential in controlling the propagation of light through the well-tailored scattering behavior of the constituent ultrathin planar elements with a high spatial resolution, making them suitable for holographic beam-shaping elements. Here, we review recent developments in the field of metasurface holography, from the classification of metasurfaces to the design strategies for both free-space and surface waves. By employing the concepts of holographic multiplexing, multiple information channels, such as wavelength, polarization state, spatial position and nonlinear frequency conversion, can be employed using metasurfaces. Meanwhile, the switchable metasurface holography by the integration of functional materials stimulates a gradual transition from passive to active elements. Importantly, the holography principle has become a universal and simple approach to solving inverse engineering problems for electromagnetic waves, thus allowing various related techniques to be achieved.
207 citations
••
01 Jan 2019
TL;DR: This chapter clarifies which research questions are appropriate for a grounded theory study and gives an overview of the main techniques and procedures, such as the coding procedures, theoretical sensitivity, theoretical sampling, and theoretical saturation.
Abstract: In this chapter we introduce grounded theory methodology and methods. In particular we clarify which research questions are appropriate for a grounded theory study and give an overview of the main techniques and procedures, such as the coding procedures, theoretical sensitivity, theoretical sampling, and theoretical saturation. We further discuss the role of theory within grounded theory and provide examples of studies in which the coding paradigm of grounded theory has been altered so that it is better suited to applications in mathematics education. In our exposition we mainly refer to grounded theory techniques and procedures according to Strauss and Corbin (Basics of qualitative research: Grounded theory procedures and techniques, Sage Publications, Thousand Oaks, 1990), but also include other approaches in the discussion in order to point out the particularities of the approach by Strauss and Corbin.
165 citations
••
University of Paderborn, St. Michael's Hospital, Brown University, University of Toronto, Harvard University, University of Parma, Rovira i Virgili University, Carlos III Health Institute, Lund University, Institute of Chartered Accountants of Nigeria, University of Naples Federico II, University of Milan, University of Copenhagen, University of Saskatchewan, San Antonio River Authority, University of Sydney
TL;DR: Evidence indicates that GI and GL are substantial food markers predicting the development of T2D worldwide, for persons of European ancestry and of East Asian ancestry and that consideration should be given to these dietary risk factors in nutrition advice.
Abstract: Published meta-analyses indicate significant but inconsistent incident type-2 diabetes (T2D)-dietary glycemic index (GI) and glycemic load (GL) risk ratios or risk relations (RR). It is now over a decade ago that a published meta-analysis used a predefined standard to identify valid studies. Considering valid studies only, and using random effects dose-response meta-analysis (DRM) while withdrawing spurious results (p 1.20 with a lower 95% confidence limit >1.10 across typical intakes (approximately 10th to 90th percentiles of population intakes). The combined T2D-GI RR was 1.27 (1.15-1.40) (p < 0.001, n = 10 studies) per 10 units GI, while that for the T2D-GL RR was 1.26 (1.15-1.37) (p < 0.001, n = 15) per 80 g/d GL in a 2000 kcal (8400 kJ) diet. The corresponding global DRM using restricted cubic splines were 1.87 (1.56-2.25) (p < 0.001, n = 10) and 1.89 (1.66-2.16) (p < 0.001, n = 15) from 47.6 to 76.1 units GI and 73 to 257 g/d GL in a 2000 kcal diet, respectively. In conclusion, among adults initially in good health, diets higher in GI or GL were robustly associated with incident T2D. Together with mechanistic and other data, this supports that consideration should be given to these dietary risk factors in nutrition advice. Concerning the public health relevance at the global level, our evidence indicates that GI and GL are substantial food markers predicting the development of T2D worldwide, for persons of European ancestry and of East Asian ancestry.
143 citations
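To make the dose units concrete: under a log-linear dose-response assumption, a pooled RR of 1.27 per 10 GI units can be rescaled to other exposure ranges. The sketch below is illustrative only; the paper's global estimate over the 47.6-76.1 GI span (1.87) used restricted cubic splines, so it is not expected to match a naive log-linear extrapolation.

```python
import math

def scale_rr(rr, ref_dose, new_dose):
    """Rescale a relative risk assuming a log-linear dose-response."""
    beta = math.log(rr) / ref_dose      # log-RR per unit of exposure
    return math.exp(beta * new_dose)

rr_per_10_gi = 1.27                     # reported pooled RR per 10 GI units
print(scale_rr(rr_per_10_gi, 10, 20))   # 1.27**2, ~1.61 per 20 GI units
print(scale_rr(rr_per_10_gi, 10, 76.1 - 47.6))  # log-linear guess over the span
```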
••
01 Jan 2019
TL;DR: Veins is an open-source model library for (and a toolbox around) OMNeT++ that supports researchers conducting simulations involving communicating road vehicles—either as the main focus of a study or as a component.
Abstract: We describe Veins, an open-source model library for (and a toolbox around) OMNeT++, which supports researchers conducting simulations involving communicating road vehicles—either as the main focus of a study or as a component. Veins already includes a full stack of simulation models for investigating cars and infrastructure communicating via IEEE 802.11 based technologies in simulations of Vehicular Ad Hoc Networks (VANETs) and Intelligent Transportation Systems (ITS). Thanks to its modularity, though, it can equally well be used as the basis for modeling other mobile nodes (like bikes or pedestrians) and communication technologies (from mobile broadband to visible light). Serving as the basis for hundreds of publications and university courses since its beginnings in the year 2006, today Veins is both one of the most mature and established tools in this domain.
143 citations
••
TL;DR: In this article, it was shown that for any sufficiently regular initial data (n0, c0, u0) satisfying n0 ≥ 0 and c 0 ≥ 0, the initial value problem for (⋆) under no-flux boundary conditions for n and c and homogeneous Dirichlet boundary condition for u possesses at least one globally defined solution in an appropriate generalized sense.
138 citations
••
TL;DR: In this paper, the authors derive an expression that relates the probability to measure a specific photon output pattern from a Gaussian state to the Hafnian matrix function and use it to design a new Gaussian boson sampling protocol.
Abstract: Since the development of boson sampling, there has been a quest to construct more efficient and experimentally feasible protocols to test the computational complexity of sampling from photonic states. In this paper, we interpret and extend the results presented previously [Phys. Rev. Lett. 119, 170501 (2017)]. We derive an expression that relates the probability to measure a specific photon output pattern from a Gaussian state to the Hafnian matrix function and use it to design a Gaussian boson sampling protocol. Then, we discuss the advantages that this protocol has relative to other photonic protocols and the experimental requirements for Gaussian boson sampling. Finally, we relate it to the previously most general protocol, scattershot boson sampling [Phys. Rev. Lett. 113, 100502 (2014)].
129 citations
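The Hafnian mentioned in the abstract counts weighted perfect matchings: for a symmetric 2n x 2n matrix B, Haf(B) sums, over all perfect matchings M of the index set, the products of the matched entries B_ij, and in Gaussian boson sampling the probability of an output pattern is proportional to the squared Hafnian of a pattern-dependent submatrix. A brute-force sketch (hypothetical matrix values, exponential time, tiny cases only):

```python
import numpy as np

def hafnian(B):
    """Hafnian of a symmetric matrix via the perfect-matching recursion.
    Exponential time; intended only for tiny illustrative matrices."""
    n = B.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0          # odd-sized matrices have no perfect matching
    total = 0.0
    for j in range(1, n):   # match index 0 with each possible partner j
        rest = [k for k in range(1, n) if k != j]
        total += B[0, j] * hafnian(B[np.ix_(rest, rest)])
    return total

# Hypothetical symmetric 4x4 matrix: Haf = B01*B23 + B02*B13 + B03*B12.
B = np.array([[0., 1., 2., 3.],
              [1., 0., 4., 5.],
              [2., 4., 0., 6.],
              [3., 5., 6., 0.]])
print(hafnian(B))  # 1*6 + 2*5 + 3*4 = 28.0
```

Unlike the permanent (which governs standard boson sampling), the Hafnian is defined on a single symmetric matrix, which is what ties Gaussian states to this protocol.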
••
TL;DR: A new framework for optimal and feedback control of PDEs using Koopman operator-based reduced order models (K-ROMs) is presented and it is shown that the value of the K-ROM based objective function converges in measure to the value of the full objective function.
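Koopman-based reduced-order models of this kind are typically identified from snapshot data. The simplest related construction, plain dynamic mode decomposition (shown here as a sketch, not necessarily the authors' exact K-ROM procedure), fits a linear operator to snapshot pairs by least squares, K ≈ Y X⁺:

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])   # hypothetical linear dynamics x_{k+1} = A x_k

# Snapshot matrices: columns of X are states x_k, columns of Y their successors.
X = rng.standard_normal((2, 50))
Y = A_true @ X

K = Y @ np.linalg.pinv(X)         # least-squares Koopman/DMD approximation
print(np.round(K, 6))             # recovers A_true for this linear toy system
```

For nonlinear PDE dynamics one would apply the same fit to observables of the state (lifted coordinates) rather than the raw state itself.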
••
TL;DR: The purpose of this article is to describe, in a way that is amenable to the nonspecialist, the key speech processing algorithms that enable reliable, fully hands-free speech interaction with digital home assistants.
Abstract: Once a popular theme of futuristic science fiction or far-fetched technology forecasts, digital home assistants with a spoken language interface have become a ubiquitous commodity today. This success has been made possible by major advancements in signal processing and machine learning for so-called far-field speech recognition, where the commands are spoken at a distance from the sound-capturing device. The challenges encountered are quite unique and different from many other use cases of automatic speech recognition (ASR). The purpose of this article is to describe, in a way that is amenable to the nonspecialist, the key speech processing algorithms that enable reliable, fully hands-free speech interaction with digital home assistants. These technologies include multichannel acoustic echo cancellation (MAEC), microphone array processing and dereverberation techniques for signal enhancement, reliable wake-up word and end-of-interaction detection, and high-quality speech synthesis as well as sophisticated statistical models for speech and language, learned from large amounts of heterogeneous training data. In all of these fields, deep learning (DL) has played a critical role.
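Of the components listed, microphone array processing admits the shortest illustration. Below is a minimal delay-and-sum beamformer; this is a deliberately simple sketch with a synthetic pulse, while the systems described in the article use far more advanced methods (e.g., statistical beamforming and dereverberation):

```python
import numpy as np

def delay_and_sum(channels, delays, fs):
    """Align each channel by its (integer-sample) delay and average.
    channels: (num_mics, num_samples) array; delays: seconds per mic."""
    out = np.zeros(channels.shape[1])
    for sig, d in zip(channels, delays):
        out += np.roll(sig, -int(round(d * fs)))   # advance by the delay
    return out / len(channels)

# Hypothetical 2-mic array: the same pulse reaches mic 1 three samples later.
fs = 16000
pulse = np.zeros(32)
pulse[10] = 1.0
mics = np.stack([pulse, np.roll(pulse, 3)])
aligned = delay_and_sum(mics, [0.0, 3 / fs], fs)
print(aligned.argmax(), aligned.max())  # pulse realigned: peak 1.0 at sample 10
```

With the correct per-mic delays, signals from the target direction add coherently while interference from other directions is attenuated, which is the basic idea behind far-field signal enhancement.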
••
University of Paderborn, St. Michael's Hospital, Brown University, University of Toronto, Harvard University, University of Parma, Rovira i Virgili University, Carlos III Health Institute, Lund University, Institute of Chartered Accountants of Nigeria, University of Naples Federico II, University of Milan, University of Copenhagen, University of Saskatchewan, San Antonio River Authority, University of Sydney
TL;DR: The high confidence in causal associations for incident T2D is sufficient to consider inclusion of GI and GL in food and nutrient-based recommendations, and the cost–benefit analysis suggests food and nutrition advice favors lower GI or GL and would produce significant potential cost savings in national healthcare budgets.
Abstract: While dietary factors are important modifiable risk factors for type 2 diabetes (T2D), the causal role of carbohydrate quality in nutrition remains controversial. Dietary glycemic index (GI) and glycemic load (GL) have been examined in relation to the risk of T2D in multiple prospective cohort studies. Previous meta-analyses indicate significant relations but consideration of causality has been minimal. Here, the results of our recent meta-analyses of prospective cohort studies of 4 to 26-y follow-up are interpreted in the context of the nine Bradford-Hill criteria for causality, that is: (1) Strength of Association, (2) Consistency, (3) Specificity, (4) Temporality, (5) Biological Gradient, (6) Plausibility, (7) Experimental evidence, (8) Analogy, and (9) Coherence. These criteria necessitated referral to a body of literature wider than prospective cohort studies alone, especially in criteria 6 to 9. In this analysis, all nine of the Hill's criteria were met for GI and GL indicating that we can be confident of a role for GI and GL as causal factors contributing to incident T2D. In addition, neither dietary fiber nor cereal fiber nor wholegrain were found to be reliable or effective surrogate measures of GI or GL. Finally, our cost-benefit analysis suggests food and nutrition advice favors lower GI or GL and would produce significant potential cost savings in national healthcare budgets. The high confidence in causal associations for incident T2D is sufficient to consider inclusion of GI and GL in food and nutrient-based recommendations.
••
TL;DR: The novel motor learning principles presented in this manuscript may optimize future rehabilitation programs to reduce second ACL injury risk and early development of osteoarthritis by targeting changes in neural networks.
Abstract: Athletes who wish to resume high-level activities after an injury to the anterior cruciate ligament (ACL) are often advised to undergo surgical reconstruction. Nevertheless, ACL reconstruction (ACLR) does not equate to normal function of the knee or reduced risk of subsequent injuries. In fact, recent evidence has shown that only around half of post-ACLR patients can expect to return to competitive level of sports. A rising concern is the high rate of second ACL injuries, particularly in young athletes, with up to 20% of those returning to sport in the first year from surgery experiencing a second ACL rupture. Aside from the increased risk of second injury, patients after ACLR have an increased risk of developing early onset of osteoarthritis. Given the recent findings, it is imperative that rehabilitation after ACLR is scrutinized so that second injury preventative strategies can be optimized. Unfortunately, current ACLR rehabilitation programs may not be optimally effective in addressing deficits related to the initial injury and the subsequent surgical intervention. Motor learning to (re-)acquire motor skills and neuroplastic capacities are not sufficiently incorporated during traditional rehabilitation, attesting to the high re-injury rates. The purpose of this article is to present novel clinically integrated motor learning principles to support neuroplasticity that can improve patient functional performance and reduce the risk of second ACL injury. To enhance rehabilitation and prepare the patient for a re-integration to sports that is as safe as possible after an ACL injury, the following key concepts are presented: (1) external focus of attention, (2) implicit learning, (3) differential learning, (4) self-controlled learning and contextual interference. The novel motor learning principles presented in this manuscript may optimize future rehabilitation programs to reduce second ACL injury risk and early development of osteoarthritis by targeting changes in neural networks.
••
TL;DR: In this paper, the authors integrate multiple polarization manipulation channels for various spatial phase profiles into a single birefringent vectorial hologram by completely avoiding unwanted cross-talk, and demonstrate high fidelity, large efficiency, broadband operation, and a total of twelve polarization channels.
Abstract: Since its invention, holography has emerged as a powerful tool to fully reconstruct the wavefronts of light including all the fundamental properties (amplitude, phase, polarization, wave vector, and frequency). For exploring the full capability for information storage/display and enhancing the encryption security of metasurface holograms, smart multiplexing techniques together with suitable metasurface designs are highly demanded. Here, we integrate multiple polarization manipulation channels for various spatial phase profiles into a single birefringent vectorial hologram by completely avoiding unwanted cross-talk. Multiple independent target phase profiles with quantified phase relations that can process significantly different information in different polarization states are realized within a single metasurface. For our metasurface holograms, we demonstrate high fidelity, large efficiency, broadband operation, and a total of twelve polarization channels. Such multichannel polarization multiplexing can be used for dynamic vectorial holographic display and can provide triple protection for optical security. The concept is appealing for applications of arbitrary spin to angular momentum conversion and various phase modulation/beam shaping elements.
••
04 Apr 2019
TL;DR: In this paper, the authors discuss the current state of the art in the area of all-dielectric nonlinear nanostructures and metasurfaces, including the role of Mie modes, Fano resonances, and anapole moments for harmonic generation, wave mixing, and ultrafast optical switching.
Abstract: Free from phase-matching constraints, plasmonic metasurfaces have contributed significantly to the control of optical nonlinearity and enhancement of nonlinear generation efficiency by engineering subwavelength meta-atoms. However, high dissipative losses and inevitable thermal heating limit their applicability in nonlinear nanophotonics. All-dielectric metasurfaces, supporting both electric and magnetic Mie-type resonances in their nanostructures, have appeared as a promising alternative to nonlinear plasmonics. High-index dielectric nanostructures, allowing additional magnetic resonances, can induce magnetic nonlinear effects, which, along with electric nonlinearities, increase the nonlinear conversion efficiency. In addition, low dissipative losses and high damage thresholds provide an extra degree of freedom for operating at high pump intensities, resulting in a considerable enhancement of the nonlinear processes. We discuss the current state of the art in the intensely developing area of all-dielectric nonlinear nanostructures and metasurfaces, including the role of Mie modes, Fano resonances, and anapole moments for harmonic generation, wave mixing, and ultrafast optical switching. Furthermore, we review the recent progress in the nonlinear phase and wavefront control using all-dielectric metasurfaces. We discuss techniques to realize all-dielectric metasurfaces for multifunctional applications and generation of second-order nonlinear processes from complementary metal–oxide–semiconductor-compatible materials.
••
TL;DR: Popularity in these events has risen especially over the last 25 years, with an exponential increase in participation observed notably in ultramarathon races.
Abstract: Ultra endurance events are defined as sporting activities lasting >6 hours and include events such as ultramarathon foot races, ultra triathlons, ultra distance swimming, ultra cycling, and cross-country skiing. Popularity in these events has risen especially over the last 25 years with increasing participation notably in ultramarathon races where an exponential increase in participation has been observed. This is in large part due to the increasing popularity and participation of women and master athletes in these events. Other endurance sports have seen similar increases but overall numbers are much lower compared with ultramarathon events.
••
TL;DR: A novel meta-device that integrates color printing and computer-generated holograms within a single-layer dielectric metasurface, by simultaneously modulating spectral and spatial responses at the subwavelength scale, is proposed and experimentally demonstrated.
Abstract: Metasurfaces possess the outstanding ability to tailor phase, amplitude, and even spectral responses of light with an unprecedented ultrahigh resolution and thus have attracted significant interest. Here, we propose and experimentally demonstrate a novel meta-device that integrates color printing and computer-generated holograms within a single-layer dielectric metasurface by modulating spectral and spatial responses at subwavelength scale, simultaneously. In our design, such metasurface appears as a microscopic color image under white light illumination, while encrypting two different holographic images that can be projected at the far-field when illuminated with red and green laser beams. We choose amorphous silicon dimers and nanofins as building components and use a modified parallel Gerchberg-Saxton algorithm to obtain multiple subholograms with arbitrary spatial shapes for image-indexed arrangements while avoiding the loss of phase information. Such a method can further extend the design freedom of metasurfaces. By exploiting spectral and spatial control at the level of individual pixels, multiple sets of independent information can be introduced into a single-layer device; the additional complexity and enlarged information capacity are promising for novel applications such as information security and anticounterfeiting.
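The modified parallel Gerchberg-Saxton algorithm mentioned above builds on the classic single-channel loop, which alternates between the hologram and image planes while keeping only the phase in the hologram plane. A textbook sketch (not the authors' modified parallel variant, and with a made-up target image):

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=100, seed=0):
    """Phase-only hologram whose (FFT) far field approximates target_amp."""
    phase = np.random.default_rng(seed).uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))           # hologram -> image plane
        far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))             # back; keep phase only
    return phase

target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0                                # hypothetical square image
holo = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * holo)))
corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
print(f"amplitude correlation with target: {corr:.2f}")
```

In a metasurface implementation, the retrieved phase map is what gets encoded into the orientation or geometry of the nanostructures.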
••
TL;DR: In this paper, the authors evaluated the potential use of LNG for direct storage of cold and indirect storage of power, based on the review of existing information, and they showed that the overall efficiency of using LNG to operate energy storage depends very much on the technologies involved and on the overall capacity of a particular technology.
Abstract: The world trade volume of Liquefied Natural Gas (LNG) is increasing year by year. Unlike gaseous natural gas (NG), which is transported through a fixed network of pipelines, LNG offers more flexibility to both the exporters and the importers as it can be transported between any pair of exporting and receiving LNG terminals. The LNG process, consisting of liquefaction, transportation, storage, and regasification of LNG, is accompanied by certain energy demands. The paper focuses on the evaluation of the chain of energy transformations involved in the LNG process. Based on the review of existing information, the entire process is evaluated from the view of the potential use of LNG for direct storage of cold and indirect storage of power. The analysis of the existing data shows that the overall efficiency of using LNG for operative energy storage depends very much on the technologies involved and on the overall capacity of the particular technology. The combination of energy-efficient liquefaction technologies and regasification technologies with energy recovery makes it possible to employ LNG as an energy storage medium even when transported over large distances.
••
TL;DR: This rule change appeared to reduce the risk of head injuries in men’s professional football.
Abstract: Background Absolute numbers of head injuries in football (soccer) are considerable because of its high popularity and the large number of players. In 2006 a rule was changed to reduce head injuries. Players were given a red card (sent off) for intentional elbow-head contact. Aims To describe the head injury mechanism and examine the effect of the rule change. Methods Based on continuously recorded data from the German football magazine “kicker”, a database of all head injuries in the 1st German Male Bundesliga was generated comprising seasons 2000/01-2012/13. Injury mechanisms were analysed from video recordings. Injury incidence rates (IR) and 95% confidence intervals (95% CI) as well as incidence rate ratios (IRR) to assess differences before and after the rule change were calculated. Results 356 head injuries were recorded (IR 2.22, 95% CI 2.00 to 2.46 per 1000 match hours). Contact with another player caused most head injuries, more specifically because of head-head (34%) or elbow-head (17%) contacts. After the rule change, head injuries were reduced by 29% (IRR 0.71, 95% CI 0.57 to 0.86, p=0.002). Lacerations/abrasions declined by 42% (95% CI 0.39 to 0.85), concussions by 29% (95% CI 0.46 to 1.09), contusions by 18% (95% CI 0.43 to 1.55) and facial fractures by 16% (95% CI 0.55 to 1.28). Conclusions This rule change appeared to reduce the risk of head injuries in men’s professional football.
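The reported rates can be sanity-checked with standard incidence arithmetic. The exposure hours below are back-computed from the published figures, but the before/after event counts are invented purely to illustrate the IRR calculation, and the CI uses a simple Wald approximation rather than the paper's exact method:

```python
import math

injuries, ir = 356, 2.22       # reported: 356 injuries at 2.22 per 1000 match hours
match_hours = injuries / ir * 1000
print(round(match_hours))      # ~160,360 match hours of exposure implied

def irr_with_ci(events_a, hours_a, events_b, hours_b):
    """Incidence rate ratio (B vs. A) with a 95% Wald CI on the log scale."""
    ratio = (events_b / hours_b) / (events_a / hours_a)
    se = math.sqrt(1 / events_a + 1 / events_b)
    return ratio, ratio * math.exp(-1.96 * se), ratio * math.exp(1.96 * se)

# Hypothetical before/after counts over equal exposure, chosen to be
# consistent with the reported 29% reduction (IRR 0.71):
print(irr_with_ci(200, 80000, 142, 80000))
```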
••
12 May 2019
TL;DR: In this paper, an all-neural approach to simultaneous speaker counting, diarization and source separation is presented, where the neural network is recurrent over time as well as over the number of sources.
Abstract: Automatic meeting analysis comprises the tasks of speaker counting, speaker diarization, and the separation of overlapped speech, followed by automatic speech recognition. This all has to be carried out on arbitrarily long sessions and, ideally, in an online or block-online manner. While significant progress has been made on individual tasks, this paper presents for the first time an all-neural approach to simultaneous speaker counting, diarization and source separation. The NN-based estimator operates in a block-online fashion and tracks speakers even if they remain silent for a number of time blocks, thus learning a stable output order for the separated sources. The neural network is recurrent over time as well as over the number of sources. The simulation experiments show that state of the art separation performance is achieved, while at the same time delivering good diarization and source counting results. It even generalizes well to an unseen large number of blocks.
••
TL;DR: In this article, a two-layer plasmonic metasurface design is proposed for non-reciprocal polarization encryption of holographic images, where the encoded hologram is designed to appear in a particular linear cross-polarization channel, while it is disappearing in the reverse propagation direction.
Abstract: As flexible optical devices that can manipulate the phase and amplitude of light, metasurfaces would clearly benefit from directional optical properties. However, single layer metasurface systems consisting of two-dimensional nanoparticle arrays exhibit only a weak spatial asymmetry perpendicular to the surface and therefore have mostly symmetric transmission features. Here, we present a metasurface design principle for nonreciprocal polarization encryption of holographic images. Our approach is based on a two-layer plasmonic metasurface design that introduces a local asymmetry and generates a bidirectional functionality with full phase and amplitude control of the transmitted light. The encoded hologram is designed to appear in a particular linear cross-polarization channel, while it is disappearing in the reverse propagation direction. Hence, layered metasurface systems can feature asymmetric transmission with full phase and amplitude control and therefore expand the design freedom in nanoscale optical devices toward asymmetric information processing and security features for anticounterfeiting applications.
••
TL;DR: This work focuses on an extension to the May--Nowak model for virus dynamics, additionally accounting for diffusion in all components and chemotactically directed motion of healthy cells in response to virus dynamics.
Abstract: This work focuses on an extension to the May--Nowak model for virus dynamics, additionally accounting for diffusion in all components and chemotactically directed motion of healthy cells in response to virus dynamics.
••
TL;DR: In this paper, the authors studied transmission scheduling for remote state estimation in the presence of an eavesdropper, where a sensor transmits local state estimates over a packet dropping link to a remote estimator.
Abstract: This paper studies transmission scheduling for remote state estimation in the presence of an eavesdropper. A sensor transmits local state estimates over a packet dropping link to a remote estimator, while an eavesdropper can successfully overhear each sensor transmission with a certain probability. The objective is to determine when the sensor should transmit, in order to minimize the estimation error covariance at the remote estimator, while trying to keep the eavesdropper error covariance above a certain level. This is done by solving an optimization problem that minimizes a linear combination of the expected estimation error covariance and the negative of the expected eavesdropper error covariance. Structural results on the optimal transmission policy are derived, and shown to exhibit thresholding behavior in the estimation error covariances. In the infinite horizon situation, it is shown that with unstable systems one can keep the expected estimation error covariance bounded while the expected eavesdropper error covariance becomes unbounded, for all eavesdropping probabilities strictly less than one.
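The covariance dynamics behind these results can be illustrated with a scalar sketch (an always-transmit policy and made-up probabilities, not the paper's optimal threshold policy): on reception the remote covariance resets to the sensor's local value, otherwise it grows via P ← a²P + q, so for an unstable system a receiver with a low reception probability sees its covariance run away while the legitimate estimator's stays bounded.

```python
import random

def avg_covariance(a, q, p0, recv_prob, steps=20000, seed=1):
    """Monte-Carlo average of the error covariance at a receiver that gets
    the sensor's local estimate w.p. recv_prob per step, for the scalar
    system x_{k+1} = a x_k + w_k with Var(w) = q. On reception the
    covariance resets to p0; on a drop it grows as P <- a^2 P + q."""
    rng = random.Random(seed)
    p, total = p0, 0.0
    for _ in range(steps):
        p = p0 if rng.random() < recv_prob else a * a * p + q
        total += p
    return total / steps

# Unstable system (|a| > 1): the estimator's link succeeds often, while the
# eavesdropper overhears rarely (all probabilities are illustrative).
est = avg_covariance(a=1.2, q=1.0, p0=1.0, recv_prob=0.9)
eve = avg_covariance(a=1.2, q=1.0, p0=1.0, recv_prob=0.3, seed=2)
print(est < eve)
```

The boundedness condition here is a²(1 − recv_prob) < 1: it holds for the estimator (1.44 × 0.1) but fails for the eavesdropper (1.44 × 0.7), mirroring the infinite-horizon result in the abstract.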
••
TL;DR: The polarization-dependent wavefront control and the reconstruction of an encoded hologram at the third-harmonic wavelength with high fidelity are experimentally demonstrated and holographic multiplexing is possible by utilizing the polarization states of the third harmonic generation.
Abstract: Nonlinear wavefront control is a crucial requirement in realizing nonlinear optical applications with metasurfaces. Numerous aspects of nonlinear frequency conversion and wavefront control have been demonstrated for plasmonic metasurfaces. However, several disadvantages limit their applicability in nonlinear nanophotonics, including high dissipative loss and low optical damage threshold. In contrast, it has been shown that metasurfaces made of high-index dielectrics can provide strong nonlinear responses. Despite the recent progress in nonlinear optical processes using all-dielectric nanostructures and metasurfaces, far less progress has been made in realizing full wavefront control directly within the generation process. Here, we demonstrate nonlinear wavefront control for third-harmonic generation with a silicon metasurface. We use a Pancharatnam-Berry phase approach to encode phase gradients and holographic images on nanostructured silicon metasurfaces. We experimentally demonstrate the polarization-dependent wavefront control and the reconstruction of an encoded hologram at the third-harmonic wavelength with high fidelity. Further, we show that holographic multiplexing is possible by utilizing the polarization states of the third harmonic generation. Our approach eases design and fabrication processes and paves the way to an easy-to-use toolbox for nonlinear optical wavefront control with all-dielectric metasurfaces.
••
TL;DR: Training to resist fatigue is an underestimated aspect of prevention programs given that the presence of fatigue may play a crucial role in sustaining an ACL injury, and the question arises whether the same fatigue pathways are affected by the fatigue protocols used in the included laboratory studies as are experienced on the sports field.
Abstract: Causes of anterior cruciate ligament (ACL) injuries are multifactorial. Anterior cruciate ligament injury prevention should thus be approached from a multifactorial perspective as well. Training to resist fatigue is an underestimated aspect of prevention programs given that the presence of fatigue may play a crucial role in sustaining an ACL injury. The primary objective of this literature review was to summarize research findings relating to the kinematic and kinetic effects of fatigue on single-leg landing tasks through a systematic review and meta-analysis. Other objectives were to critically appraise current approaches to examine the effects of fatigue together with elucidating and proposing an optimized approach for measuring the role of fatigue in ACL injury prevention. A systematic literature search was conducted in the databases PubMed (1978–November 2017), CINAHL (1992–November 2017), and EMBASE (1973–November 2017). The inclusion criteria were: (1) full text, (2) published in English, German, or Dutch, (3) healthy subjects, (4) average age ≥ 18 years, (5) single-leg jump landing task, (6) evaluation of the kinematics and/or kinetics of the lower extremities before and after a fatigue protocol, and (7) presentation of numerical kinematic and/or kinetic data. Participants included healthy subjects who underwent a fatigue protocol and in whom the effects of pre- and post-fatigue on three-dimensional lower extremity kinematic and kinetics were compared. Methods of data collection, patient selection, blinding, prevention of verification bias, and study design were independently assessed. Twenty studies were included, in which four types of single-leg tasks were examined: the single-leg drop vertical jump, the single-leg drop landing, the single-leg hop for distance, and sidestep cutting. Fatigue seemed to mostly affect initial contact (decreased angles post-fatigue) and peak (increased angles post-fatigue) hip and knee flexion. 
Sagittal plane variables at initial contact were mostly affected under the single-leg hop for distance and sidestep cutting conditions, whilst peak angles were affected during the single-leg drop jump. Considering the small number of variables affected after fatigue, the question arises whether the fatigue protocols used in the included laboratory studies engage the same fatigue pathways as those experienced on the sports field.
••
15 Sep 2019
TL;DR: This paper extensively investigated a combination of the guided source separation-based speech enhancement technique and an already proposed strong ASR backend and found that a tight combination of these techniques provided substantial accuracy improvements.
Abstract: In this paper, we present Hitachi and Paderborn University’s joint effort for automatic speech recognition (ASR) in a dinner party scenario. The main challenges of ASR systems for dinner party recordings obtained by multiple microphone arrays are (1) heavy speech overlaps, (2) severe noise and reverberation, (3) very natural conversational content, and possibly (4) insufficient training data. As an example of a dinner party scenario, we have chosen the data presented during the CHiME-5 speech recognition challenge, where the baseline ASR had a 73.3% word error rate (WER), and even the best performing system at the CHiME-5 challenge had a 46.1% WER. We extensively investigated a combination of the guided source separation-based speech enhancement technique and an already proposed strong ASR backend and found that a tight combination of these techniques provided substantial accuracy improvements. Our final system achieved WERs of 39.94% and 41.64% for the development and evaluation data, respectively, both of which are the best published results for the dataset. We also investigated training with additional data beyond the official small dataset of the CHiME-5 corpus to assess the intrinsic difficulty of this ASR task.
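Word error rate, the metric quoted throughout, is the word-level Levenshtein distance (substitutions + deletions + insertions) normalized by the reference length. A minimal sketch:

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance between the
    reference and hypothesis transcripts, divided by the reference length."""
    ref, hyp = ref.split(), hyp.split()
    d = list(range(len(hyp) + 1))          # DP row: distance from empty ref prefix
    for i, r in enumerate(ref, start=1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            sub = prev_diag + (r != h)     # substitution (or match)
            # d[j] + 1: delete ref word; d[j-1] + 1: insert hyp word
            prev_diag, d[j] = d[j], min(sub, d[j] + 1, d[j - 1] + 1)
    return d[-1] / len(ref)
```

For example, one substituted word in a three-word reference yields a WER of 1/3.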
••
01 Feb 2019
TL;DR: The concepts of networked control systems and the capabilities of current vehicular networking approaches are summarized and opportunities of Tactile Internet concepts that integrate interdisciplinary approaches from control theory, mechanical engineering, and communication protocol design are presented.
Abstract: The trend toward autonomous driving and the recent advances in vehicular networking led to a number of very successful proposals in cooperative driving. Maneuvers can be coordinated among participating vehicles and controlled by means of wireless communications. One of the most challenging scenarios or applications in this context is cooperative adaptive cruise control (CACC) or platooning. When it comes to realizing safety gaps between the cars of less than 5 m, very strong requirements on the communication system need to be satisfied. The underlying distributed control system needs regular updates of sensor information from the other cars in the order of about 10 Hz. This leads to message rates in the order of up to 10 kHz for large networks, which, given the possibly unreliable wireless communication and the critical network congestion, is beyond the capabilities of current vehicular networking concepts. In this paper, we summarize the concepts of networked control systems and revisit the capabilities of current vehicular networking approaches. We then present opportunities of Tactile Internet concepts that integrate interdisciplinary approaches from control theory, mechanical engineering, and communication protocol design. This way, it becomes possible to solve the high reliability and latency issues in this context.
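The back-of-the-envelope load calculation behind the quoted figures is straightforward (the count of roughly 1000 vehicles in interference range is an illustrative assumption for "large networks", not a number from the abstract):

```python
def aggregate_message_rate(num_vehicles, beacon_rate_hz):
    """Channel-wide message rate when every vehicle in interference
    range broadcasts cooperative awareness updates at the beacon rate."""
    return num_vehicles * beacon_rate_hz

# ~10 Hz sensor updates per vehicle, assuming ~1000 vehicles in range:
load_hz = aggregate_message_rate(1000, 10)  # on the order of 10 kHz
```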
••
TL;DR: Density functional theory calculations support the unprecedented role of halides as active Lewis base components in the frustrated Lewis pair mediated hydrogen activation in the metal-free reduction of carboxylic amides using oxalyl chloride as an activating agent and hydrogen as the final reductant.
Abstract: A method for the metal-free reduction of carboxylic amides using oxalyl chloride as an activating agent and hydrogen as the final reductant is introduced. The reaction proceeds via the hydrogen splitting by B(2,6-F2-C6H3)3 in combination with chloride as the Lewis base. Density functional theory calculations support the unprecedented role of halides as active Lewis base components in the frustrated Lewis pair mediated hydrogen activation. The reaction displays broad substrate scope for tertiary benzoic acid amides and α-branched carboxamides.
••
10 May 2019
TL;DR: A training scheme to train neural network-based source separation algorithms from scratch when parallel clean data is unavailable is proposed, and it is demonstrated that an unsupervised spatial clustering algorithm is sufficient to guide the training of a deep clustering system.
Abstract: We propose a training scheme to train neural network-based source separation algorithms from scratch when parallel clean data is unavailable. In particular, we demonstrate that an unsupervised spatial clustering algorithm is sufficient to guide the training of a deep clustering system. We argue that previous work on deep clustering requires strong supervision and elaborate on why this is a limitation. We demonstrate that (a) the single-channel deep clustering system trained according to the proposed scheme alone is able to achieve a similar performance as the multi-channel teacher in terms of word error rates and (b) initializing the spatial clustering approach with the deep clustering result yields a relative word error rate reduction of 26 % over the unsupervised teacher.
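The training scheme can be caricatured on toy data: an unsupervised clustering "teacher" produces pseudo-labels that supervise a discriminative "student", with no ground-truth targets anywhere. The sketch below substitutes 2-means on 2-D points for the paper's spatial clustering of multi-channel features, and logistic regression for the deep clustering network; everything here is illustrative, not the authors' implementation.

```python
import numpy as np

def kmeans_teacher(X, iters=20):
    """Unsupervised 'teacher': plain 2-means clustering standing in for
    the spatial clustering model (which clusters spatial features of
    multi-channel recordings). Returns a pseudo-label per sample."""
    centers = X[[0, -1]].astype(float)  # simple init: first and last point
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def train_student(X, pseudo_labels, lr=0.5, epochs=200):
    """'Student': logistic regression trained only on the teacher's
    pseudo-labels; no ground-truth targets are ever used."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = p - pseudo_labels                  # logistic-loss gradient
        w -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean()
    return lambda Z: (1.0 / (1.0 + np.exp(-(Z @ w + b))) > 0.5).astype(int)
```

In the paper's analogue of this loop, the spatial clustering result additionally serves as an initialization that the trained network then refines, yielding the reported 26 % relative WER reduction over the unsupervised teacher.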