
Showing papers from the University of Erlangen-Nuremberg published in 2016


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4 +2519 more (695 institutions)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: In the time-to-event analysis, the rate of the first occurrence of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke among patients with type 2 diabetes mellitus was lower with liraglutide than with placebo.
Abstract: Background: The cardiovascular effect of liraglutide, a glucagon-like peptide 1 analogue, when added to standard care in patients with type 2 diabetes, remains unknown. Methods: In this double-blind trial, we randomly assigned patients with type 2 diabetes and high cardiovascular risk to receive liraglutide or placebo. The primary composite outcome in the time-to-event analysis was the first occurrence of death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke. The primary hypothesis was that liraglutide would be noninferior to placebo with regard to the primary outcome, with a margin of 1.30 for the upper boundary of the 95% confidence interval of the hazard ratio. No adjustments for multiplicity were performed for the prespecified exploratory outcomes. Results: A total of 9340 patients underwent randomization. The median follow-up was 3.8 years. The primary outcome occurred in significantly fewer patients in the liraglutide group (608 of 4668 patients [13.0%]) than in the placebo ...

4,409 citations


Journal ArticleDOI
TL;DR: This review discusses efforts to create next-generation materials via bottom-up organization of nanocrystals with preprogrammed functionality and self-assembly instructions, and explores the unique possibilities offered by leveraging nontraditional surface chemistries and assembly environments to control superlattice structure and produce nonbulk assemblies.
Abstract: Chemical methods developed over the past two decades enable preparation of colloidal nanocrystals with uniform size and shape. These Brownian objects readily order into superlattices. Recently, the range of accessible inorganic cores and tunable surface chemistries dramatically increased, expanding the set of nanocrystal arrangements experimentally attainable. In this review, we discuss efforts to create next-generation materials via bottom-up organization of nanocrystals with preprogrammed functionality and self-assembly instructions. This process is often driven by both interparticle interactions and the influence of the assembly environment. The introduction provides the reader with a practical overview of nanocrystal synthesis, self-assembly, and superlattice characterization. We then summarize the theory of nanocrystal interactions and examine fundamental principles governing nanocrystal self-assembly from hard and soft particle perspectives borrowed from the comparatively established fields of micro...

1,376 citations


Journal ArticleDOI
TL;DR: In the case of aircraft components, AM technology enables low-volume manufacturing, easy integration of design changes and, at least as importantly, piece part reductions to greatly simplify product assembly.
Abstract: The past few decades have seen substantial growth in Additive Manufacturing (AM) technologies. However, this growth has mainly been process-driven. The evolution of engineering design to take advantage of the possibilities afforded by AM and to manage the constraints associated with the technology has lagged behind. This paper presents the major opportunities, constraints, and economic considerations for Design for Additive Manufacturing. It explores issues related to design and redesign for direct and indirect AM production. It also highlights key industrial applications, outlines future challenges, and identifies promising directions for research and the exploitation of AM's full potential in industry.

1,132 citations


Journal ArticleDOI
TL;DR: A new non-fullerene acceptor is presented that has been specifically designed to give improved performance alongside the wide-bandgap donor poly(3-hexylthiophene), a polymer with significantly better prospects for commercial OPV due to its relative scalability and stability.
Abstract: Solution-processed organic photovoltaics (OPV) offer the attractive prospect of low-cost, light-weight and environmentally benign solar energy production. The highest efficiency OPV at present use low-bandgap donor polymers, many of which suffer from problems with stability and synthetic scalability. They also rely on fullerene-based acceptors, which themselves have issues with cost, stability and limited spectral absorption. Here we present a new non-fullerene acceptor that has been specifically designed to give improved performance alongside the wide bandgap donor poly(3-hexylthiophene), a polymer with significantly better prospects for commercial OPV due to its relative scalability and stability. Thanks to the well-matched optoelectronic and morphological properties of these materials, efficiencies of 6.4% are achieved which is the highest reported for fullerene-free P3HT devices. In addition, dramatically improved air stability is demonstrated relative to other high-efficiency OPV, showing the excellent potential of this new material combination for future technological applications.

1,022 citations


Proceedings ArticleDOI
27 Jun 2016
TL;DR: A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a Youtube video) that addresses the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling and re-renders the manipulated output video in a photo-realistic fashion.
Abstract: We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.

1,011 citations


Journal ArticleDOI
TL;DR: It is shown that LDHA-associated lactic acid accumulation in melanomas inhibits tumor surveillance by T and NK cells, and is a potent inhibitor of the function and survival of T and NK cells, leading to tumor immune escape.

948 citations


Journal ArticleDOI
TL;DR: These recommendations provide stakeholders with an updated consensus on the pharmacological treatment of PsA and strategies to reach optimal outcomes in PsA, based on a combination of evidence and expert opinion.
Abstract: Background Since the publication of the European League Against Rheumatism recommendations for the pharmacological treatment of psoriatic arthritis (PsA) in 2012, new evidence and new therapeutic agents have emerged. The objective was to update these recommendations. Methods A systematic literature review was performed regarding pharmacological treatment in PsA. Subsequently, recommendations were formulated based on the evidence and the expert opinion of the 34 Task Force members. Levels of evidence and strengths of recommendations were allocated. Results The updated recommendations comprise 5 overarching principles and 10 recommendations, covering pharmacological therapies for PsA from non-steroidal anti-inflammatory drugs (NSAIDs), to conventional synthetic (csDMARD) and biological (bDMARD) disease-modifying antirheumatic drugs, whatever their mode of action, taking articular and extra-articular manifestations of PsA into account, but focusing on musculoskeletal involvement. The overarching principles address the need for shared decision-making and treatment objectives. The recommendations address csDMARDs as an initial therapy after failure of NSAIDs and local therapy for active disease, followed, if necessary, by a bDMARD or a targeted synthetic DMARD (tsDMARD). The first bDMARD would usually be a tumour necrosis factor (TNF) inhibitor. bDMARDs targeting interleukin (IL)12/23 (ustekinumab) or IL-17 pathways (secukinumab) may be used in patients for whom TNF inhibitors are inappropriate and a tsDMARD such as a phosphodiesterase 4-inhibitor (apremilast) if bDMARDs are inappropriate. If the first bDMARD strategy fails, any other bDMARD or tsDMARD may be used. Conclusions These recommendations provide stakeholders with an updated consensus on the pharmacological treatment of PsA and strategies to reach optimal outcomes in PsA, based on a combination of evidence and expert opinion.

802 citations


Journal ArticleDOI
TL;DR: The recommendations of the present document represent the best clinical wisdom upon which physicians, nurses and families should base their decisions and should encourage public policy makers to develop a global effort to improve identification and treatment of high blood pressure among children and adolescents.
Abstract: Increasing prevalence of hypertension (HTN) in children and adolescents has become a significant public health issue driving a considerable amount of research. Aspects discussed in this document include advances in the definition of HTN in those aged 16 years or older, the clinical significance of isolated systolic HTN in youth, the importance of out-of-office and central blood pressure measurement, new risk factors for HTN, methods to assess vascular phenotypes, clustering of cardiovascular risk factors, and treatment strategies, among others. The recommendations of the present document synthesize a considerable amount of scientific data and clinical experience and represent the best clinical wisdom upon which physicians, nurses and families should base their decisions. In addition, as they call attention to the burden of HTN in children and adolescents, and its contribution to the current epidemic of cardiovascular disease, these guidelines should encourage public policy makers to develop a global effort to improve identification and treatment of high blood pressure among children and adolescents.

795 citations


Posted ContentDOI
23 Feb 2016-bioRxiv
TL;DR: A collaborative effort in which a centralized analysis pipeline was applied to a SCZ cohort, finding suggestive support for nine additional candidate susceptibility and protective loci, which consist predominantly of CNVs mediated by non-allelic homologous recombination (NAHR).
Abstract: Genomic copy number variants (CNVs) have been strongly implicated in the etiology of schizophrenia (SCZ). However, apart from a small number of risk variants, elucidation of the CNV contribution to risk has been difficult due to the rarity of risk alleles, all occurring in less than 1% of cases. We sought to address this obstacle through a collaborative effort in which we applied a centralized analysis pipeline to a SCZ cohort of 21,094 cases and 20,227 controls. We observed a global enrichment of CNV burden in cases (OR = 1.11, P = 5.7e-15), which persisted after excluding loci implicated in previous studies (OR = 1.07, P = 1.7e-6). CNV burden is also enriched for genes associated with synaptic function (OR = 1.68, P = 2.8e-11) and neurobehavioral phenotypes in mouse (OR = 1.18, P = 7.3e-5). We identified genome-wide significant support for eight loci, including 1q21.1, 2p16.3 (NRXN1), 3q29, 7q11.2, 15q13.3, distal 16p11.2, proximal 16p11.2 and 22q11.2. We find support at a suggestive level for nine additional candidate susceptibility and protective loci, which consist predominantly of CNVs mediated by non-allelic homologous recombination (NAHR).

764 citations


Journal ArticleDOI
S. Adrián-Martínez1, M. Ageron2, Felix Aharonian3, Sebastiano Aiello +243 more (24 institutions)
TL;DR: In this article, the main objectives of the KM3NeT Collaboration are (i) the discovery and subsequent observation of high-energy neutrino sources in the Universe and (ii) the determination of the mass hierarchy of neutrinos.
Abstract: The main objectives of the KM3NeT Collaboration are (i) the discovery and subsequent observation of high-energy neutrino sources in the Universe and (ii) the determination of the mass hierarchy of neutrinos. These objectives are strongly motivated by two recent important discoveries, namely: (1) the high-energy astrophysical neutrino signal reported by IceCube and (2) the sizable contribution of electron neutrinos to the third neutrino mass eigenstate as reported by Daya Bay, Reno and others. To meet these objectives, the KM3NeT Collaboration plans to build a new Research Infrastructure consisting of a network of deep-sea neutrino telescopes in the Mediterranean Sea. A phased and distributed implementation is pursued which maximises the access to regional funds, the availability of human resources and the synergistic opportunities for the Earth and sea sciences community. Three suitable deep-sea sites are selected, namely off-shore Toulon (France), Capo Passero (Sicily, Italy) and Pylos (Peloponnese, Greece). The infrastructure will consist of three so-called building blocks. A building block comprises 115 strings, each string comprises 18 optical modules and each optical module comprises 31 photo-multiplier tubes. Each building block thus constitutes a three-dimensional array of photo sensors that can be used to detect the Cherenkov light produced by relativistic particles emerging from neutrino interactions. Two building blocks will be sparsely configured to fully explore the IceCube signal with similar instrumented volume, different methodology, improved resolution and complementary field of view, including the galactic plane. One building block will be densely configured to precisely measure atmospheric neutrino oscillations.
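
The detector geometry quoted in the abstract lends itself to a quick sanity check of the instrument's scale. A minimal Python sketch; the three per-unit constants come directly from the abstract above, while the variable names are ours:

```python
# Detector scale per the KM3NeT abstract: a building block comprises
# 115 strings, each with 18 optical modules of 31 photomultiplier tubes.
STRINGS_PER_BLOCK = 115
MODULES_PER_STRING = 18
PMTS_PER_MODULE = 31
BUILDING_BLOCKS = 3  # two sparse (astrophysics) + one dense (oscillations)

modules_per_block = STRINGS_PER_BLOCK * MODULES_PER_STRING  # optical modules
pmts_per_block = modules_per_block * PMTS_PER_MODULE        # PMTs per block
total_pmts = BUILDING_BLOCKS * pmts_per_block               # PMTs in total

print(modules_per_block, pmts_per_block, total_pmts)
```

Each building block thus carries 2070 optical modules and 64,170 photomultiplier tubes, or 192,510 PMTs across the three planned blocks.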

Journal ArticleDOI
08 Sep 2016-Nature
TL;DR: PZM21 is a potent Gi activator with exceptional selectivity for μOR and minimal β-arrestin-2 recruitment and is devoid of both respiratory depression and morphine-like reinforcing activity in mice at equi-analgesic doses.
Abstract: Morphine is an alkaloid from the opium poppy used to treat pain. The potentially lethal side effects of morphine and related opioids-which include fatal respiratory depression-are thought to be mediated by μ-opioid-receptor (μOR) signalling through the β-arrestin pathway or by actions at other receptors. Conversely, G-protein μOR signalling is thought to confer analgesia. Here we computationally dock over 3 million molecules against the μOR structure and identify new scaffolds unrelated to known opioids. Structure-based optimization yields PZM21-a potent Gi activator with exceptional selectivity for μOR and minimal β-arrestin-2 recruitment. Unlike morphine, PZM21 is more efficacious for the affective component of analgesia versus the reflexive component and is devoid of both respiratory depression and morphine-like reinforcing activity in mice at equi-analgesic doses. PZM21 thus serves as both a probe to disentangle μOR signalling and a therapeutic lead that is devoid of many of the side effects of current opioids.

Journal ArticleDOI
Liisa M. Pelttari1, Sofia Khan1, Mikko Vuorela2, Johanna I. Kiiski1, Sara Vilske1, Viivi Nevanlinna1, Salla Ranta1, Johanna Schleutker3, Johanna Schleutker4, Johanna Schleutker5, Robert Winqvist2, Anne Kallioniemi4, Thilo Dörk6, Natalia Bogdanova6, Jonine Figueroa, Paul D.P. Pharoah7, Marjanka K. Schmidt8, Alison M. Dunning7, Montserrat Garcia-Closas9, Manjeet K. Bolla7, Joe Dennis7, Kyriaki Michailidou7, Qin Wang7, John L. Hopper10, Melissa C. Southey10, Efraim H. Rosenberg8, Peter A. Fasching11, Peter A. Fasching12, Matthias W. Beckmann11, Julian Peto13, Isabel dos-Santos-Silva13, Elinor J. Sawyer14, Ian Tomlinson15, Barbara Burwinkel16, Barbara Burwinkel17, Harald Surowy17, Harald Surowy16, Pascal Guénel18, Thérèse Truong18, Stig E. Bojesen19, Stig E. Bojesen20, Børge G. Nordestgaard20, Børge G. Nordestgaard19, Javier Benitez, Anna González-Neira, Susan L. Neuhausen21, Hoda Anton-Culver22, Hermann Brenner16, Volker Arndt16, Alfons Meindl23, Rita K. Schmutzler24, Hiltrud Brauch16, Hiltrud Brauch25, Hiltrud Brauch26, Thomas Brüning27, Annika Lindblom28, Sara Margolin28, Arto Mannermaa29, Jaana M. Hartikainen29, Georgia Chenevix-Trench30, kConFab10, kConFab30, Aocs Investigators31, Laurien Van Dyck31, Hilde Janssen32, Hilde Janssen16, Jenny Chang-Claude16, Anja Rudolph, Paolo Radice, Paolo Peterlongo33, Emily Hallberg33, Janet E. Olson10, Janet E. Olson34, Graham G. Giles34, Graham G. Giles10, Roger L. Milne35, Christopher A. Haiman35, Fredrick Schumacher36, Jacques Simard36, Martine Dumont37, Martine Dumont38, Vessela N. Kristensen38, Vessela N. Kristensen37, Anne Lise Børresen-Dale39, Wei Zheng39, Alicia Beeghly-Fadiel40, Mervi Grip41, Mervi Grip42, Irene L. Andrulis42, Gord Glendon43, Peter Devilee44, Caroline Seynaeve44, Maartje J. Hooning45, Margriet Collée46, Angela Cox46, Simon S. Cross7, Mitul Shah7, Robert Luben16, Ute Hamann47, Ute Hamann16, Diana Torres48, Anna Jakubowska48, Jan Lubinski33, Fergus J. Couch, Drakoulis Yannoukakos9, Nick Orr9, Anthony J. Swerdlow28, Hatef Darabi28, Jingmei Li28, Kamila Czene28, Per Hall7, Douglas F. Easton1, Johanna Mattson1, Carl Blomqvist1, Kristiina Aittomäki1, Heli Nevanlinna
05 May 2016-PLOS ONE
TL;DR: It is suggested that loss-of-function mutations in RAD51B are rare, but common variation at the RAD51B region is significantly associated with familial breast cancer risk.
Abstract: Common variation on 14q24.1, close to RAD51B, has been associated with breast cancer: rs999737 and rs2588809 with the risk of female breast cancer and rs1314913 with the risk of male breast cancer. The aim of this study was to investigate the role of RAD51B variants in breast cancer predisposition, particularly in the context of familial breast cancer in Finland. We sequenced the coding region of RAD51B in 168 Finnish breast cancer patients from the Helsinki region for identification of possible recurrent founder mutations. In addition, we studied the known rs999737, rs2588809, and rs1314913 SNPs and RAD51B haplotypes in 44,791 breast cancer cases and 43,583 controls from 40 studies participating in the Breast Cancer Association Consortium (BCAC) that were genotyped on a custom chip (iCOGS). We identified one putatively pathogenic missense mutation c.541C>T among the Finnish cancer patients and subsequently genotyped the mutation in additional breast cancer cases (n = 5259) and population controls (n = 3586) from Finland and Belarus. No significant association with breast cancer risk was seen in the meta-analysis of the Finnish datasets or in the large BCAC dataset. The association with previously identified risk variants rs999737, rs2588809, and rs1314913 was replicated among all breast cancer cases and also among familial cases in the BCAC dataset. The most significant association was observed for the haplotype carrying the risk-alleles of all the three SNPs both among all cases (odds ratio (OR): 1.15, 95% confidence interval (CI): 1.11-1.19, P = 8.88 x 10-16) and among familial cases (OR: 1.24, 95% CI: 1.16-1.32, P = 6.19 x 10-11), compared to the haplotype with the respective protective alleles. Our results suggest that loss-of-function mutations in RAD51B are rare, but common variation at the RAD51B region is significantly associated with familial breast cancer risk.
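
For readers less familiar with the odds-ratio notation used in this abstract: an odds ratio with a Wald-type 95% confidence interval can be computed from a 2x2 case-control carrier table as below. This is a generic illustration of the statistic, not the consortium's actual analysis (which used genotype data and adjusted models), and the counts are hypothetical:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = carrier cases, b = carrier controls,
    c = non-carrier cases, d = non-carrier controls."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the four cell counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts, for illustration only:
print(odds_ratio_ci(1200, 1000, 900, 1000))
```

An interval whose lower bound stays above 1 (as for the reported haplotype ORs) corresponds to a statistically significant risk association at the 5% level.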

Journal ArticleDOI
TL;DR: An international formal consensus of MG experts intended to be a guide for clinicians caring for patients with MG worldwide is developed.
Abstract: Objective: To develop formal consensus-based guidance for the management of myasthenia gravis (MG). Methods: In October 2013, the Myasthenia Gravis Foundation of America appointed a Task Force to develop treatment guidance for MG, and a panel of 15 international experts was convened. The RAND/UCLA appropriateness methodology was used to develop consensus guidance statements. Definitions were developed for goals of treatment, minimal manifestations, remission, ocular MG, impending crisis, crisis, and refractory MG. An in-person panel meeting then determined 7 treatment topics to be addressed. Initial guidance statements were developed from literature summaries. Three rounds of anonymous e-mail votes were used to attain consensus on guidance statements modified on the basis of panel input. Results: Guidance statements were developed for symptomatic and immunosuppressive treatments, IV immunoglobulin and plasma exchange, management of impending and manifest myasthenic crisis, thymectomy, juvenile MG, MG associated with antibodies to muscle-specific tyrosine kinase, and MG in pregnancy. Conclusion: This is an international formal consensus of MG experts intended to be a guide for clinicians caring for patients with MG worldwide.

Journal ArticleDOI
15 Apr 2016-Science
TL;DR: The experimental realization of a single-atom heat engine is reported, demonstrating that thermal machines can be reduced to the limit of single atoms.
Abstract: Heat engines convert thermal energy into mechanical work and generally involve a large number of particles. We report the experimental realization of a single-atom heat engine. An ion is confined in a linear Paul trap with tapered geometry and driven thermally by coupling it alternately to hot and cold reservoirs. The output power of the engine is used to drive a harmonic oscillation. From direct measurements of the ion dynamics, we were able to determine the thermodynamic cycles for various temperature differences of the reservoirs. We then used these cycles to evaluate the power P and efficiency η of the engine, obtaining values up to P = 3.4 × 10(-22)joules per second and η = 0.28%, consistent with analytical estimations. Our results demonstrate that thermal machines can be reduced to the limit of single atoms.
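
A back-of-the-envelope consistency check on the reported figures: taking efficiency in the usual sense of output work over heat drawn from the hot reservoir, the reported P and η imply the engine's heat uptake rate. Only the two numbers from the abstract are used; the rest is arithmetic:

```python
# Reported figures from the single-atom heat engine experiment:
P = 3.4e-22    # output power, J/s
eta = 0.0028   # efficiency, 0.28%

# eta = W_out / Q_in, so the implied rate of heat uptake is:
Q_in_rate = P / eta
print(f"heat drawn from hot reservoir: {Q_in_rate:.2e} J/s")
```

That is about 1.2 x 10^-19 J/s, still a minuscule power, underscoring how far thermal machines can be miniaturized.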

Journal ArticleDOI
TL;DR: This review describes selective electron beam melting, with special focus on the relationship between process characteristics, material consolidation, and the resulting material and component properties.
Abstract: Selective electron beam melting (SEBM) belongs to the additive manufacturing technologies which are believed to revolutionise future industrial production. Starting from computer-aided designed data, components are built layer by layer within a powder bed by selectively melting the powder with a high power electron beam. In contrast to selective laser melting (SLM), which can be used for metals, polymers and ceramics, the application field of the electron beam is restricted to metallic components since electric conductivity is required. On the other hand, the electron beam works under vacuum conditions, can be moved at extremely high velocities and a high beam power is available. These features make SEBM especially interesting for the processing of high-performance alloys. The present review describes SEBM with special focus on the relationship between process characteristics, material consolidation and the resulting materials and component properties.

Journal ArticleDOI
TL;DR: It is reported that a sulfur-doped zeolite-templated carbon, simultaneously exhibiting large sulfur content, as well as a unique carbon structure (that is, highly curved three-dimensional networks of graphene nanoribbons), can stabilize a relatively high loading of platinum in the form of highly dispersed species including site isolated atoms.
Abstract: Maximum atom efficiency as well as distinct chemoselectivity is expected for electrocatalysis on atomically dispersed (or single site) metal centres, but its realization remains challenging so far, because carbon, as the most widely used electrocatalyst support, cannot effectively stabilize them. Here we report that a sulfur-doped zeolite-templated carbon, simultaneously exhibiting large sulfur content (17 wt% S), as well as a unique carbon structure (that is, highly curved three-dimensional networks of graphene nanoribbons), can stabilize a relatively high loading of platinum (5 wt%) in the form of highly dispersed species including site isolated atoms. In the oxygen reduction reaction, this catalyst does not follow a conventional four-electron pathway producing H2O, but selectively produces H2O2 even over extended times without significant degradation of the activity. Thus, this approach constitutes a potentially promising route for producing important fine chemical H2O2, and also offers opportunities for tuning the selectivity of other electrochemical reactions on various metal catalysts.

Proceedings ArticleDOI
24 Jul 2016
TL;DR: A novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a Youtube video) that addresses the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling and re-renders the manipulated output video in a photo-realistic fashion.
Abstract: We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.

Journal ArticleDOI
TL;DR: A novel MIMO-NOMA framework for downlink and uplink transmission is proposed by applying the concept of signal alignment, and closed-form analytical results are developed to facilitate the performance evaluation of the proposed framework for randomly deployed users and interferers.
Abstract: The application of multiple-input multiple-output (MIMO) techniques to nonorthogonal multiple access (NOMA) systems is important to enhance the performance gains of NOMA. In this paper, a novel MIMO-NOMA framework for downlink and uplink transmission is proposed by applying the concept of signal alignment. By using stochastic geometry, closed-form analytical results are developed to facilitate the performance evaluation of the proposed framework for randomly deployed users and interferers. The impact of different power allocation strategies, namely fixed power allocation and cognitive radio inspired power allocation, on the performance of MIMO-NOMA is also investigated. Computer simulation results are provided to demonstrate the performance of the proposed framework and the accuracy of the developed analytical results.
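
To make the "fixed power allocation" idea concrete: in the simplest two-user downlink NOMA setting, the weak user decodes its message while treating the strong user's signal as interference, and the strong user removes the weak user's signal by successive interference cancellation (SIC) before decoding its own. The toy sketch below illustrates this textbook baseline, not the paper's signal-alignment MIMO framework, and the channel gains are hypothetical:

```python
import math

def noma_rates(p_total, a_weak, g_weak, g_strong, noise=1.0):
    """Achievable rates (bit/s/Hz) for a 2-user downlink NOMA toy model
    with fixed power allocation. a_weak is the power fraction given to
    the weak (low-channel-gain) user; gains g are |h|^2 values."""
    a_strong = 1.0 - a_weak
    # Weak user: decodes directly; the strong user's signal acts as noise.
    r_weak = math.log2(1 + a_weak * p_total * g_weak /
                       (a_strong * p_total * g_weak + noise))
    # Strong user: cancels the weak user's signal (SIC), then decodes.
    r_strong = math.log2(1 + a_strong * p_total * g_strong / noise)
    return r_weak, r_strong

# Hypothetical numbers: 10 power units, 80% allocated to the weak user.
print(noma_rates(p_total=10.0, a_weak=0.8, g_weak=0.1, g_strong=1.0))
```

Giving most of the power to the weak user is the standard choice, since it keeps the weak user's rate acceptable while the strong user's post-SIC link sees no intra-cell interference.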

Journal ArticleDOI
01 Sep 2016-Nature
TL;DR: It is demonstrated that extremely high repetition rates, which make ablation cooling possible, reduce the laser pulse energies needed for ablation and increase the efficiency of the removal process by an order of magnitude over previously used laser parameters.
Abstract: The use of femtosecond laser pulses allows precise and thermal-damage-free removal of material (ablation) with wide-ranging scientific, medical and industrial applications. However, its potential is limited by the low speeds at which material can be removed and the complexity of the associated laser technology. The complexity of the laser design arises from the need to overcome the high pulse energy threshold for efficient ablation. However, the use of more powerful lasers to increase the ablation rate results in unwanted effects such as shielding, saturation and collateral damage from heat accumulation at higher laser powers. Here we circumvent this limitation by exploiting ablation cooling, in analogy to a technique routinely used in aerospace engineering. We apply ultrafast successions (bursts) of laser pulses to ablate the target material before the residual heat deposited by previous pulses diffuses away from the processing region. Proof-of-principle experiments on various substrates demonstrate that extremely high repetition rates, which make ablation cooling possible, reduce the laser pulse energies needed for ablation and increase the efficiency of the removal process by an order of magnitude over previously used laser parameters. We also demonstrate the removal of brain tissue at two cubic millimetres per minute and dentine at three cubic millimetres per minute without any thermal damage to the bulk.



Journal ArticleDOI
TL;DR: Results showed no difference in survival in patients treated with complete lymph node dissection compared with observation only; complete lymph node dissection should therefore not be recommended in patients with melanoma with lymph node micrometastases of 1 mm or smaller in diameter.
Abstract: Background Complete lymph node dissection is recommended in patients with positive sentinel lymph node biopsy results. To date, the effect of complete lymph node dissection on prognosis is controversial. In the DeCOG-SLT trial, we assessed whether complete lymph node dissection resulted in increased survival compared with observation. Methods In this multicentre, randomised, phase 3 trial, we enrolled patients with cutaneous melanoma of the torso, arms, or legs from 41 German skin cancer centres. Patients with positive sentinel lymph node biopsy results were eligible. Patients were randomly assigned (1:1) to undergo complete lymph node dissection or observation, with permuted blocks of variable size and stratified by primary tumour thickness, ulceration of primary tumour, and intended adjuvant interferon therapy. Treatment assignment was not masked. The primary endpoint was distant metastasis-free survival, analysed by intention to treat. All patients in the intention-to-treat population of the complete lymph node dissection group were included in the safety analysis. This trial is registered with ClinicalTrials.gov, number NCT02434107. Follow-up is ongoing, but the trial is no longer recruiting patients. Findings Between Jan 1, 2006, and Dec 1, 2014, 5547 patients were screened with sentinel lymph node biopsy and 1269 (23%) patients were positive for micrometastasis. Of these, 483 (39%) agreed to randomisation into the clinical trial; owing to difficulties enrolling and a low event rate, the trial closed early on Dec 1, 2014. 241 patients were randomly assigned to the observation group and 242 to the complete lymph node dissection group. Ten patients did not meet the inclusion criteria, so 233 patients were analysed in the observation group and 240 patients were analysed in the complete lymph node dissection group, as the intention-to-treat population. 311 (66%) patients (158 in the observation group and 153 in the dissection group) had sentinel lymph node metastases of 1 mm or less. Median follow-up was 35 months (IQR 20–54). Distant metastasis-free survival at 3 years was 77·0% (90% CI 71·9–82·1; 55 events) in the observation group and 74·9% (69·5–80·3; 54 events) in the complete lymph node dissection group. In the complete lymph node dissection group, grade 3 and 4 events occurred in 15 patients (6%) and 19 patients (8%), respectively. Adverse events included lymph oedema (grade 3 in seven patients, grade 4 in 13 patients), lymph fistula (grade 3 in one patient, grade 4 in two patients), seroma (grade 3 in three patients, no grade 4), infection (grade 3 in three patients, no grade 4), and delayed wound healing (grade 3 in one patient, grade 4 in four patients); no serious adverse events were reported. Interpretation Although we did not achieve the required number of events, leaving the trial underpowered, our results showed no difference in survival in patients treated with complete lymph node dissection compared with observation only. Consequently, complete lymph node dissection should not be recommended in patients with melanoma with lymph node micrometastases of 1 mm or smaller in diameter. Funding German Cancer Aid.
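Endpoints such as distant metastasis-free survival are typically read off a Kaplan-Meier curve. A minimal product-limit estimator on invented toy data (not trial data) can be sketched as:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  -- follow-up time for each patient
    events -- 1 if the endpoint event occurred at that time, 0 if censored
    Returns a list of (time, survival probability) at each event time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        n = 0
        # Group tied observations at the same time point.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= n  # censored and failed patients both leave the risk set
    return curve

# Toy data: months to distant metastasis (event=1) or censoring (event=0).
curve = kaplan_meier([6, 12, 12, 20, 30, 36], [1, 0, 1, 0, 1, 0])
```

Censored patients (event=0) still contribute to the risk set until their censoring time, which is what distinguishes this from a naive event fraction.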

Journal ArticleDOI
M. G. Aartsen1, K. Abraham2, Markus Ackermann, Jenni Adams3  +313 moreInstitutions (49)
TL;DR: In this paper, the observed flux is well described by an isotropic, unbroken power law with a normalization at 100 TeV neutrino energy of (0.90 +0.30/−0.27) × 10^-18 GeV^-1 cm^-2 s^-1 sr^-1 and a hard spectral index of γ = 2.13 ± 0.13.
Abstract: The IceCube Collaboration has previously discovered a high-energy astrophysical neutrino flux using neutrino events with interaction vertices contained within the instrumented volume of the IceCube detector. We present a complementary measurement using charged-current muon neutrino events where the interaction vertex can be outside this volume. As a consequence of the large muon range, the effective area is significantly larger, but the field of view is restricted to the Northern Hemisphere. IceCube data from 2009 through 2015 have been analyzed using a likelihood approach based on the reconstructed muon energy and zenith angle. At the highest neutrino energies, between 194 TeV and 7.8 PeV, a significant astrophysical contribution is observed, excluding a purely atmospheric origin of these events at 5.6σ significance. The data are well described by an isotropic, unbroken power-law flux with a normalization at 100 TeV neutrino energy of (0.90 +0.30/−0.27) × 10^-18 GeV^-1 cm^-2 s^-1 sr^-1 and a hard spectral index of γ = 2.13 ± 0.13. The observed spectrum is harder than in previous IceCube analyses with lower energy thresholds, which may indicate a break in the astrophysical neutrino spectrum of unknown origin. The highest-energy event observed has a reconstructed muon energy of (4.5 ± 1.2) PeV, which implies a probability of less than 0.005% for this event to be of atmospheric origin. Analyzing the arrival directions of all events with reconstructed muon energies above 200 TeV, no correlation with known γ-ray sources was found. Using the high statistics of atmospheric neutrinos, we report the current best constraints on a prompt atmospheric muon neutrino flux originating from charmed meson decays, which is below 1.06 in units of the flux normalization of the model in Enberg et al.
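An unbroken power law of this form is straightforward to evaluate from the quoted best-fit values (a sketch using the central values only, ignoring the asymmetric uncertainties):

```python
def astro_flux(energy_tev, norm=0.90e-18, gamma=2.13, pivot_tev=100.0):
    """Per-energy astrophysical neutrino flux (GeV^-1 cm^-2 s^-1 sr^-1)
    for an unbroken power law normalized at the 100 TeV pivot energy."""
    return norm * (energy_tev / pivot_tev) ** (-gamma)

# At the pivot energy the flux equals the normalization by construction.
assert astro_flux(100.0) == 0.90e-18
# The spectrum falls steeply: one decade in energy costs 10^2.13 in flux.
ratio = astro_flux(100.0) / astro_flux(1000.0)
```

With γ = 2.13, each decade of energy reduces the flux by a factor of roughly 135, which is why the multi-PeV events quoted in the abstract are so rare.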

Journal ArticleDOI
TL;DR: A taxonomy is introduced as a framework for systematically studying the existing user association algorithms conceived for HetNets, massive MIMO, mmWave, and energy harvesting networks and provides design guidelines and potential solutions for sophisticated user association mechanisms.
Abstract: The fifth generation (5G) mobile networks are envisioned to support the deluge of data traffic with reduced energy consumption and improved quality of service (QoS) provision. To this end, key enabling technologies, such as heterogeneous networks (HetNets), massive multiple-input multiple-output (MIMO), and millimeter wave (mmWave) techniques, have been identified to bring 5G to fruition. Regardless of the technology adopted, a user association mechanism is needed to determine whether a user is associated with a particular base station (BS) before data transmission commences. User association plays a pivotal role in enhancing the load balancing, the spectrum efficiency, and the energy efficiency of networks. The emerging 5G networks introduce numerous challenges and opportunities for the design of sophisticated user association mechanisms. Hence, substantial research efforts are dedicated to the issues of user association in HetNets, massive MIMO networks, mmWave networks, and energy harvesting networks. We introduce a taxonomy as a framework for systematically studying the existing user association algorithms. Based on the proposed taxonomy, we then proceed to present an extensive overview of the state-of-the-art in user association algorithms conceived for HetNets, massive MIMO, mmWave, and energy harvesting networks. Finally, we summarize the challenges as well as opportunities of user association in 5G and provide design guidelines and potential solutions for sophisticated user association mechanisms.
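As a concrete baseline from this design space, the widely used biased received-power rule (cell-range expansion) can be sketched as follows. This is one common scheme from the HetNet literature, not an algorithm proposed by the survey itself, and all numbers are invented:

```python
def associate(users, stations, bias_db):
    """Associate each user with the BS maximizing biased received power.

    users    -- per-user list of received powers, index-aligned with
                `stations` (linear units, e.g. mW)
    stations -- base-station names
    bias_db  -- per-BS cell-range-expansion bias in dB (small cells get
                a positive bias to attract more users for load balancing)
    """
    bias_lin = [10 ** (b / 10.0) for b in bias_db]
    result = {}
    for u, powers in enumerate(users):
        scores = [p * g for p, g in zip(powers, bias_lin)]
        result[u] = stations[scores.index(max(scores))]
    return result

# Two users, a macro BS and a pico BS; the pico gets a 6 dB bias.
rx_power = [[1.0, 0.4], [1.0, 0.2]]  # received from [macro, pico], mW
assoc = associate(rx_power, ["macro", "pico"], bias_db=[0.0, 6.0])
```

Without the bias both users would pick the macro cell; the 6 dB bias offloads the nearer user to the pico cell, which is exactly the load-balancing effect the taxonomy discusses.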

Journal ArticleDOI
TL;DR: Machine learning combining clinical and CCTA data was found to predict 5-year all-cause mortality significantly better than existing clinical or C CTA metrics alone.
Abstract: Aims Traditional prognostic risk assessment in patients undergoing non-invasive imaging is based upon a limited selection of clinical and imaging findings. Machine learning (ML) can consider a greater number and complexity of variables. Therefore, we investigated the feasibility and accuracy of ML to predict 5-year all-cause mortality (ACM) in patients undergoing coronary computed tomographic angiography (CCTA), and compared the performance to existing clinical or CCTA metrics. Methods and results The analysis included 10 030 patients with suspected coronary artery disease and 5-year follow-up from the COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter registry. All patients underwent CCTA as their standard of care. Twenty-five clinical and 44 CCTA parameters were evaluated, including segment stenosis score (SSS), segment involvement score (SIS), modified Duke index (DI), number of segments with non-calcified, mixed or calcified plaques, age, sex, standard cardiovascular risk factors, and Framingham risk score (FRS). Machine learning involved automated feature selection by information gain ranking, model building with a boosted ensemble algorithm, and 10-fold stratified cross-validation. Seven hundred and forty-five patients died during 5-year follow-up. Machine learning exhibited a higher area under the curve compared with the FRS or CCTA severity scores alone (SSS, SIS, DI) for predicting all-cause mortality (ML: 0.79 vs. FRS: 0.61, SSS: 0.64, SIS: 0.64, DI: 0.62; P < 0.001). Conclusions Machine learning combining clinical and CCTA data was found to predict 5-year ACM significantly better than existing clinical or CCTA metrics alone.
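The feature-selection step named in the abstract, information gain ranking, can be sketched in a few lines (a toy illustration with invented data and feature names, not the registry's actual pipeline):

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    h = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        h -= p * math.log2(p)
    return h

def information_gain(feature, labels):
    """Entropy reduction from splitting `labels` on a discrete `feature`."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy cohort: outcome = died within 5 years (1) or survived (0).
died     = [1, 1, 1, 0, 0, 0, 0, 0]
stenosis = [1, 1, 1, 0, 0, 0, 0, 1]  # high segment stenosis score (toy)
smoker   = [1, 0, 1, 1, 0, 1, 0, 0]  # a weaker, noisier feature (toy)
features = {"stenosis": stenosis, "smoker": smoker}
ranked = sorted(features, key=lambda f: information_gain(features[f], died),
                reverse=True)
```

Features are ranked by how much knowing them reduces outcome uncertainty; the top-ranked features then feed the boosted ensemble mentioned in the abstract.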

Journal ArticleDOI
TL;DR: Non-fullerene acceptors with optimized energy levels enable 10% efficient solar cells with reduced voltage losses <0.6 V.
Abstract: Optimization of the energy levels at the donor–acceptor interface of organic solar cells has driven their efficiencies to above 10%. However, further improvements towards efficiencies comparable with inorganic solar cells remain challenging because of high recombination losses, which empirically limit the open-circuit voltage (Voc) to typically less than 1 V. Here we show that this empirical limit can be overcome using non-fullerene acceptors blended with the low band gap polymer PffBT4T-2DT leading to efficiencies approaching 10% (9.95%). We achieve Voc up to 1.12 V, which corresponds to a loss of only Eg/q − Voc = 0.5 ± 0.01 V between the optical bandgap Eg of the polymer and Voc. This high Voc is shown to be associated with the achievement of remarkably low non-geminate and non-radiative recombination losses in these devices. Suppression of non-radiative recombination implies high external electroluminescence quantum efficiencies which are orders of magnitude higher than those of equivalent devices employing fullerene acceptors. Using the balance between reduced recombination losses and good photocurrent generation efficiencies achieved experimentally as a baseline for simulations of the efficiency potential of organic solar cells, we estimate that efficiencies of up to 20% are achievable if band gaps and fill factors are further optimized.
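The link between electroluminescence quantum efficiency and non-radiative voltage loss follows the standard reciprocity relation ΔV_nr = (kT/q) ln(1/EQE_EL); this relation is background knowledge rather than a formula quoted in the abstract, and the EQE_EL values below are illustrative only:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
Q   = 1.602176634e-19  # elementary charge, C

def nonradiative_voc_loss(eqe_el, temp_k=300.0):
    """Non-radiative open-circuit voltage loss (V) implied by the
    electroluminescence external quantum efficiency, via the
    reciprocity relation dV = (kT/q) * ln(1 / EQE_EL)."""
    return (K_B * temp_k / Q) * math.log(1.0 / eqe_el)

# Illustrative values: a fullerene-type device with EQE_EL ~ 1e-6 vs a
# non-fullerene blend two orders of magnitude higher (~1e-4).
loss_fullerene = nonradiative_voc_loss(1e-6)
loss_nonfullerene = nonradiative_voc_loss(1e-4)
```

Each factor-of-100 improvement in EQE_EL recovers about 0.12 V of Voc at room temperature, which is why the "orders of magnitude higher" electroluminescence efficiencies reported here translate into the unusually small Eg/q − Voc loss.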

Journal ArticleDOI
TL;DR: A comprehensive transcriptional analysis of 460 early-stage urothelial carcinomas revealed frequent mutations in genes encoding proteins involved in chromatin organization and cytoskeletal functions, and suggested that the identification of subclasses in non-muscle-invasive bladder cancer (NMIBC) may offer better prognostication and treatment selection based on subclass assignment.

Journal ArticleDOI
TL;DR: The biology of phosphatidylserine is discussed with respect to its role as a global immunosuppressive signal and how PS is exploited to drive diverse pathological processes such as infection and cancer.
Abstract: Apoptosis is an evolutionarily conserved and tightly regulated cell death modality. It serves important roles in physiology by sculpting complex tissues during embryogenesis and by removing effete cells that have reached advanced age or whose genomes have been irreparably damaged. Apoptosis culminates in the rapid and decisive removal of cell corpses by efferocytosis, a term used to distinguish the engulfment of apoptotic cells from other phagocytic processes. Over the past decades, the molecular and cell biological events associated with efferocytosis have been rigorously studied, and many eat-me signals and receptors have been identified. The externalization of phosphatidylserine (PS) is arguably the most emblematic eat-me signal that is in turn bound by a large number of serum proteins and opsonins that facilitate efferocytosis. Under physiological conditions, externalized PS functions as a dominant and evolutionarily conserved immunosuppressive signal that promotes tolerance and prevents local and systemic immune activation. Pathologically, the innate immunosuppressive effect of externalized PS has been hijacked by numerous viruses, microorganisms, and parasites to facilitate infection, and in many cases, establish infection latency. PS is also profoundly dysregulated in the tumor microenvironment and antagonizes the development of tumor immunity. In this review, we discuss the biology of PS with respect to its role as a global immunosuppressive signal and how PS is exploited to drive diverse pathological processes such as infection and cancer. Finally, we outline the rationale that agents targeting PS could have significant value in cancer and infectious disease therapeutics.

Journal ArticleDOI
TL;DR: High REC and RLC, low LDH, and absence of metastasis other than soft-tissue/lung are independent baseline characteristics associated with favorable OS of patients with melanoma treated with pembrolizumab, indicating a subgroup with excellent prognosis.
Abstract: Purpose: Biomarkers for outcome after immune-checkpoint blockade are strongly needed as these may influence individual treatment selection or sequence. We aimed to identify baseline factors associated with overall survival (OS) following pembrolizumab treatment in melanoma patients. Experimental design: Serum lactate dehydrogenase (LDH), routine blood count parameters, and clinical characteristics were investigated in 616 patients. Endpoints were OS and best overall response following pembrolizumab. Kaplan-Meier analysis and Cox regression were applied for survival analysis. Results: Relative eosinophil count (REC) ≥1.5%, relative lymphocyte count (RLC) ≥17.5%, ≤2.5-fold elevation of LDH, and the absence of metastasis other than soft-tissue/lung were associated with favorable OS in the discovery (n=177) and the confirmation (n=182) cohort and had independent positive impact (all P<0.001). Their independent role was subsequently confirmed in the validation cohort (n=257; all P<0.01). The number of favorable factors was strongly associated with prognosis. One-year OS probabilities of 83.9% vs 14.7% and response rates of 58.3% vs 3.3% were observed in patients with four out of four compared to those with none out of four favorable baseline factors present, respectively. Conclusions: High REC and RLC, low LDH, and absence of metastasis other than soft-tissue/lung are independent baseline characteristics associated with favorable OS of patients with melanoma treated with pembrolizumab. Presence of four favorable factors in combination identifies a subgroup with excellent prognosis. In contrast, patients with no favorable factors present have a poor prognosis, despite pembrolizumab, and additional treatment advances are still needed. A potential predictive impact needs to be further investigated.
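Counting the four favorable baseline factors with the cutoffs given in the abstract can be sketched as follows (function and argument names are hypothetical; this is an illustration of the scoring logic, not a validated clinical tool):

```python
def favorable_factors(rec_pct, rlc_pct, ldh_ratio, only_soft_tissue_lung):
    """Count the four favorable baseline factors from the abstract.

    rec_pct   -- relative eosinophil count (%), favorable if >= 1.5
    rlc_pct   -- relative lymphocyte count (%), favorable if >= 17.5
    ldh_ratio -- serum LDH as a multiple of the upper limit of normal,
                 favorable if <= 2.5
    only_soft_tissue_lung -- True if no metastases beyond soft tissue/lung
    """
    return sum([
        rec_pct >= 1.5,
        rlc_pct >= 17.5,
        ldh_ratio <= 2.5,
        bool(only_soft_tissue_lung),
    ])

# A patient meeting all four criteria (the 83.9% one-year-OS subgroup).
score = favorable_factors(rec_pct=2.0, rlc_pct=20.0,
                          ldh_ratio=1.0, only_soft_tissue_lung=True)
```

The abstract reports sharply different outcomes between score 4 (one-year OS 83.9%) and score 0 (14.7%), so the count itself is the prognostic quantity.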