
Showing papers by "University of Tokyo" published in 2018


Journal ArticleDOI
Gregory A. Roth, Degu Abate, Kalkidan Hassen Abate, +1025 more (333 institutions)
TL;DR: Non-communicable diseases comprised the greatest fraction of deaths, contributing to 73·4% (95% uncertainty interval [UI] 72·5–74·1) of total deaths in 2017, while communicable, maternal, neonatal, and nutritional causes accounted for 18·6% (17·9–19·6), and injuries 8·0% (7·7–8·2).
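As a quick arithmetic check, the three cause-group fractions quoted above should account for essentially all deaths; a minimal sketch using the central estimates only:

```python
# Central estimates from the TL;DR above (% of total deaths, GBD 2017).
fractions = {
    "non-communicable diseases": 73.4,
    "communicable, maternal, neonatal, nutritional": 18.6,
    "injuries": 8.0,
}
total = sum(fractions.values())
print(f"sum of cause-group fractions: {total:.1f}%")  # 100.0%
```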

5,211 citations


Journal ArticleDOI
Lorenzo Galluzzi, Ilio Vitale, Stuart A. Aaronson, +183 more (111 institutions)
TL;DR: The Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives.
Abstract: Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field.

3,301 citations


Journal ArticleDOI
Jeffrey D. Stanaway, Ashkan Afshin, Emmanuela Gakidou, Stephen S. Lim, +1050 more (346 institutions)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.

2,910 citations


Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, Fausto Acernese, +1235 more (132 institutions)
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8 (+2.0/−1.7) km for the heavier star and R2 = 10.7 (+2.1/−1.5) km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain R1 = 11.9 (+1.4/−1.4) km and R2 = 11.9 (+1.4/−1.4) km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with the pressure at twice nuclear saturation density measured as 3.5 (+2.7/−1.7) × 10^34 dyn cm^−2 at the 90% level.

1,595 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: MCD-DA as discussed by the authors aligns distributions of source and target by utilizing the task-specific decision boundaries between classes to detect target samples that are far from the support of the source.
Abstract: In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions between different domains, which is difficult because of each domain's characteristics. To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The codes are available at https://github.com/mil-tokyo/MCD_DA
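The "discrepancy" between the two classifiers is commonly measured as the mean absolute difference between their class-probability outputs; the sketch below illustrates that quantity in NumPy (the function name and toy values are ours, not from the released MCD_DA code):

```python
import numpy as np

def classifier_discrepancy(p1: np.ndarray, p2: np.ndarray) -> float:
    """Mean absolute difference between two classifiers' softmax outputs.

    p1, p2: (batch, num_classes) probability distributions.
    MCD maximizes this w.r.t. the two classifiers and minimizes it
    w.r.t. the feature generator on target samples.
    """
    return float(np.mean(np.abs(p1 - p2)))

# Target samples where the classifiers disagree are, by construction,
# far from the support of the source distribution.
p1 = np.array([[0.9, 0.1], [0.5, 0.5]])
p2 = np.array([[0.8, 0.2], [0.1, 0.9]])
print(classifier_discrepancy(p1, p2))  # ≈ 0.25
```

The second sample dominates the discrepancy, flagging it as the one the feature generator should pull toward the source support.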

1,537 citations


Journal ArticleDOI
Mary F. Feitosa, Aldi T. Kraja, Daniel I. Chasman, Yun J. Sung, +296 more (86 institutions)
18 Jun 2018-PLOS ONE
TL;DR: To gain insights into the role of alcohol consumption in the genetic architecture of hypertension, a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)–alcohol consumption interactions was conducted.
Abstract: Heavy alcohol consumption is an established risk factor for hypertension; the mechanism by which alcohol consumption impacts blood pressure (BP) regulation remains unknown. We hypothesized that a genome-wide association study accounting for gene-alcohol consumption interaction for BP might identify additional BP loci and contribute to the understanding of alcohol-related BP regulation. We conducted a large two-stage investigation incorporating joint testing of main genetic effects and single nucleotide variant (SNV)-alcohol consumption interactions. In Stage 1, genome-wide discovery meta-analyses in ≈131K individuals across several ancestry groups yielded 3,514 SNVs (245 loci) with suggestive evidence of association (P < 1.0 × 10^−5). In Stage 2, these SNVs were tested for independent external replication in ≈440K individuals across multiple ancestries. We identified and replicated (at Bonferroni correction threshold) five novel BP loci (380 SNVs in 21 genes) and 49 previously reported BP loci (2,159 SNVs in 109 genes) in European ancestry, and in multi-ancestry meta-analyses (P < 5.0 × 10^−8). For African ancestry samples, we detected 18 potentially novel BP loci (P < 5.0 × 10^−8) in Stage 1 that warrant further replication. Additionally, correlated meta-analysis identified eight novel BP loci (11 genes). Several genes in these loci (e.g., PINX1, GATA4, BLK, FTO and GABBR2) have been previously reported to be associated with alcohol consumption. These findings provide insights into the role of alcohol consumption in the genetic architecture of hypertension.
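The two-stage design above boils down to a pair of p-value cuts; a schematic sketch using the thresholds from the abstract, with invented variant IDs and p-values for illustration:

```python
SUGGESTIVE = 1.0e-5    # Stage 1 discovery threshold from the abstract
GENOME_WIDE = 5.0e-8   # Stage 2 / replication significance threshold

def two_stage_filter(discovery_p, replication_p):
    """Keep SNVs that pass the suggestive cut in discovery and the
    genome-wide cut in replication (a simplification of the actual
    joint-test and Bonferroni machinery)."""
    stage1 = {snv for snv, p in discovery_p.items() if p < SUGGESTIVE}
    return {snv for snv in stage1 if replication_p.get(snv, 1.0) < GENOME_WIDE}

discovery = {"rs1": 3e-6, "rs2": 2e-4, "rs3": 8e-7}
replication = {"rs1": 1e-9, "rs3": 4e-6}
print(sorted(two_stage_filter(discovery, replication)))  # ['rs1']
```

Here "rs2" falls at discovery and "rs3" passes discovery but misses the genome-wide cut in replication, so only "rs1" survives both stages.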

1,218 citations


Journal ArticleDOI
08 Feb 2018-Nature
TL;DR: The measurement of high-performance plasma amyloid-β biomarkers by immunoprecipitation coupled with mass spectrometry demonstrates the potential clinical utility of plasma biomarkers in predicting brain amyloid-β burden at an individual level and shows cost–benefit and scalability advantages over current techniques.
Abstract: To facilitate clinical trials of disease-modifying therapies for Alzheimer's disease, which are expected to be most efficacious at the earliest and mildest stages of the disease, supportive biomarker information is necessary. The only validated methods for identifying amyloid-β deposition in the brain-the earliest pathological signature of Alzheimer's disease-are amyloid-β positron-emission tomography (PET) imaging or measurement of amyloid-β in cerebrospinal fluid. Therefore, a minimally invasive, cost-effective blood-based biomarker is desirable. Despite much effort, to our knowledge, no study has validated the clinical utility of blood-based amyloid-β markers. Here we demonstrate the measurement of high-performance plasma amyloid-β biomarkers by immunoprecipitation coupled with mass spectrometry. The ability of amyloid-β precursor protein (APP)669-711/amyloid-β (Aβ)1-42 and Aβ1-40/Aβ1-42 ratios, and their composites, to predict individual brain amyloid-β-positive or -negative status was determined by amyloid-β-PET imaging and tested using two independent data sets: a discovery data set (Japan, n = 121) and a validation data set (Australia, n = 252 including 111 individuals diagnosed using 11C-labelled Pittsburgh compound-B (PIB)-PET and 141 using other ligands). Both data sets included cognitively normal individuals, individuals with mild cognitive impairment and individuals with Alzheimer's disease. All test biomarkers showed high performance when predicting brain amyloid-β burden. In particular, the composite biomarker showed very high areas under the receiver operating characteristic curves (AUCs) in both data sets (discovery, 96.7%, n = 121 and validation, 94.1%, n = 111) with an accuracy approximately equal to 90% when using PIB-PET as a standard of truth. Furthermore, test biomarkers were correlated with amyloid-β-PET burden and levels of Aβ1-42 in cerebrospinal fluid. 
These results demonstrate the potential clinical utility of plasma biomarkers in predicting brain amyloid-β burden at an individual level. These plasma biomarkers also have cost-benefit and scalability advantages over current techniques, potentially enabling broader clinical access and efficient population screening.
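The AUC values reported above summarize how well the biomarker ranks amyloid-positive over amyloid-negative individuals; an AUC can be computed directly from scores via the Mann–Whitney statistic, as in this sketch with hypothetical composite-biomarker scores:

```python
def roc_auc(scores_pos, scores_neg):
    """Probability that a randomly chosen amyloid-positive case scores
    higher than a randomly chosen negative one (ties count half)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Made-up scores; the study's actual inputs are mass-spectrometry ratios.
positives = [0.9, 0.8, 0.7, 0.4]
negatives = [0.5, 0.3, 0.2, 0.1]
print(roc_auc(positives, negatives))  # 0.9375
```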

1,049 citations


Journal ArticleDOI
Bela Abolfathi, D. S. Aguado, Gabriela Aguilar, Carlos Allende Prieto, +361 more (94 institutions)
TL;DR: SDSS-IV, the fourth generation of the Sloan Digital Sky Survey, has been in operation since July 2014; as discussed by the authors, this paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen, or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.

965 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, an approximate gradient for rasterization is proposed to enable the integration of rendering into neural networks, which enables single-image 3D mesh reconstruction with silhouette image supervision.
Abstract: For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.
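The core trick, replacing rasterization's hard step with something differentiable, can be illustrated in one dimension: the forward pass keeps the discrete pixel coverage while the backward pass pretends coverage ramps linearly over a band of width `delta`. This is a drastic simplification of the paper's approximate gradient, not the released renderer:

```python
def pixel_coverage(edge_x: float, pixel_x: float) -> float:
    """Forward pass: hard rasterization. A pixel is lit iff it lies to
    the left of the edge; this step function has zero gradient almost
    everywhere, which is what blocks back-propagation."""
    return 1.0 if pixel_x < edge_x else 0.0

def approx_grad(edge_x: float, pixel_x: float, delta: float = 1.0) -> float:
    """Backward pass: surrogate gradient. Treat coverage as ramping
    linearly from 0 to 1 over a band of width delta around the edge,
    so nearby pixels produce a nonzero gradient w.r.t. edge position."""
    return 1.0 / delta if abs(pixel_x - edge_x) < delta / 2 else 0.0

print(pixel_coverage(2.0, 2.3))  # 0.0: pixel just outside the edge
print(approx_grad(2.0, 2.3))     # 1.0: gradient still flows to the edge
print(approx_grad(2.0, 5.0))     # 0.0: pixel too far away to influence
```

The mismatch between forward and backward passes is deliberate: the rendered image stays exact while gradients become informative enough to train the mesh.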

919 citations


Posted Content
TL;DR: Co-teaching as discussed by the authors trains two deep neural networks simultaneously and lets them teach each other given every mini-batch: first, each network feeds forward all data and selects some data of possibly clean labels; second, the two networks communicate with each other what data in this mini-batch should be used for training; finally, each network back-propagates the data selected by its peer network and updates itself.
Abstract: Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training. Nonetheless, recent studies on the memorization effects of deep neural networks show that they would first memorize training data of clean labels and then those of noisy labels. Therefore in this paper, we propose a new deep learning paradigm called Co-teaching for combating noisy labels. Namely, we train two deep neural networks simultaneously, and let them teach each other given every mini-batch: firstly, each network feeds forward all data and selects some data of possibly clean labels; secondly, two networks communicate with each other what data in this mini-batch should be used for training; finally, each network back-propagates the data selected by its peer network and updates itself. Empirical results on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that Co-teaching is much superior to the state-of-the-art methods in the robustness of trained deep models.
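The per-mini-batch exchange described above can be sketched with plain lists: each network keeps the fraction of samples with the smallest loss and hands them to its peer for the update. This shows only the selection-and-swap logic; names and numbers are illustrative:

```python
def small_loss_indices(losses, remember_rate):
    """Indices of the fraction of samples with the smallest loss —
    the ones most likely to carry clean labels early in training."""
    k = int(len(losses) * remember_rate)
    return sorted(range(len(losses)), key=lambda i: losses[i])[:k]

def co_teaching_step(losses_a, losses_b, remember_rate=0.5):
    """Each network selects clean-looking samples for its *peer*:
    network A trains on B's selection and vice versa, so the two
    networks do not reinforce each other's mistakes."""
    train_a = small_loss_indices(losses_b, remember_rate)
    train_b = small_loss_indices(losses_a, remember_rate)
    return train_a, train_b

# Per-sample losses for one mini-batch under each network.
losses_a = [0.1, 2.0, 0.3, 1.5]
losses_b = [0.2, 1.8, 1.9, 0.4]
print(co_teaching_step(losses_a, losses_b))  # ([0, 3], [0, 2])
```

The cross-update is the point of the method: a sample one network has already over-fitted can still be filtered out by its peer.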

866 citations


Journal ArticleDOI
12 Jan 2018-Science
TL;DR: The pioneering work that led the gene therapy field to its current state is reviewed, gene-editing technologies that are expected to play a major role in the field's future are described, and practical challenges in getting these therapies to patients who need them are discussed.
Abstract: BACKGROUND Nearly five decades ago, visionary scientists hypothesized that genetic modification by exogenous DNA might be an effective treatment for inherited human diseases. This “gene therapy” strategy offered the theoretical advantage that a durable and possibly curative clinical benefit would be achieved by a single treatment. Although the journey from concept to clinical application has been long and tortuous, gene therapy is now bringing new treatment options to multiple fields of medicine. We review critical discoveries leading to the development of successful gene therapies, focusing on direct in vivo administration of viral vectors, adoptive transfer of genetically engineered T cells or hematopoietic stem cells, and emerging genome editing technologies. ADVANCES The development of gene delivery vectors such as replication-defective retroviruses and adeno-associated virus (AAV), coupled with encouraging results in preclinical disease models, led to the initiation of clinical trials in the early 1990s. Unfortunately, these early trials exposed serious therapy-related toxicities, including inflammatory responses to the vectors and malignancies caused by vector-mediated insertional activation of proto-oncogenes. These setbacks fueled more basic research in virology, immunology, cell biology, model development, and target diseases, which ultimately led to successful clinical translation of gene therapies in the 2000s. Lentiviral vectors improved efficiency of gene transfer to nondividing cells. In early-phase clinical trials, these safer and more efficient vectors were used for transduction of autologous hematopoietic stem cells, leading to clinical benefit in patients with immunodeficiencies, hemoglobinopathies, and metabolic and storage disorders. T cells engineered to express CD19-specific chimeric antigen receptors were shown to have potent antitumor activity in patients with lymphoid malignancies.
In vivo delivery of therapeutic AAV vectors to the retina, liver, and nervous system resulted in clinical improvement in patients with congenital blindness, hemophilia B, and spinal muscular atrophy, respectively. In the United States, Food and Drug Administration (FDA) approvals of the first gene therapy products occurred in 2017, including chimeric antigen receptor (CAR)–T cells to treat B cell malignancies and AAV vectors for in vivo treatment of congenital blindness. Promising clinical trial results in neuromuscular diseases and hemophilia will likely result in additional approvals in the near future. In recent years, genome editing technologies have been developed that are based on engineered or bacterial nucleases. In contrast to viral vectors, which can mediate only gene addition, genome editing approaches offer a precise scalpel for gene addition, gene ablation, and gene “correction.” Genome editing can be performed on cells ex vivo or the editing machinery can be delivered in vivo to effect in situ genome editing. Translation of these technologies to patient care is in its infancy in comparison to viral gene addition therapies, but multiple clinical genome editing trials are expected to open over the next decade. OUTLOOK Building on decades of scientific, clinical, and manufacturing advances, gene therapies have begun to improve the lives of patients with cancer and a variety of inherited genetic diseases. Partnerships with biotechnology and pharmaceutical companies with expertise in manufacturing and scale-up will be required for these therapies to have a broad impact on human disease. 
Many challenges remain, including understanding and preventing genotoxicity from integrating vectors or off-target genome editing, improving gene transfer or editing efficiency to levels necessary for treatment of many target diseases, preventing immune responses that limit in vivo administration of vectors or genome editing complexes, and overcoming manufacturing and regulatory hurdles. Importantly, a societal consensus must be reached on the ethics of germline genome editing in light of rapid scientific advances that have made this a real, rather than hypothetical, issue. Finally, payers and gene therapy clinicians and companies will need to work together to design and test new payment models to facilitate delivery of expensive but potentially curative therapies to patients in need. The ability of gene therapies to provide durable benefits to human health, exemplified by the scientific advances and clinical successes over the past several years, justifies continued optimism and increasing efforts toward making these therapies part of our standard treatment armamentarium for human disease.

Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, M. R. Abernathy, +1135 more (139 institutions)
TL;DR: In this article, the authors present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves.
Abstract: We present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves. We estimate the sensitivity of the network to transient gravitational-wave signals, and study the capability of the network to determine the sky location of the source. We report our findings for gravitational-wave transients, with particular focus on gravitational-wave signals from the inspiral of binary neutron star systems, which are the most promising targets for multi-messenger astronomy. The ability to localize the sources of the detected signals depends on the geographical distribution of the detectors and their relative sensitivity, and 90% credible regions can be as large as thousands of square degrees when only two sensitive detectors are operational. Determining the sky position of a significant fraction of detected signals to areas of 5–20 deg² requires at least three detectors of sensitivity within a factor of ∼2 of each other and with a broad frequency bandwidth. When all detectors, including KAGRA and the third LIGO detector in India, reach design sensitivity, a significant fraction of gravitational-wave signals will be localized to a few square degrees by gravitational-wave observations alone.

Journal ArticleDOI
09 Feb 2018-Science
TL;DR: It is demonstrated that molecularly tailored UCNPs can serve as optogenetic actuators of transcranial NIR light to stimulate deep brain neurons, evoking dopamine release from genetically tagged neurons in the ventral tegmental area and triggering memory recall.
Abstract: Optogenetics has revolutionized the experimental interrogation of neural circuits and holds promise for the treatment of neurological disorders. It is limited, however, because visible light cannot penetrate deep inside brain tissue. Upconversion nanoparticles (UCNPs) absorb tissue-penetrating near-infrared (NIR) light and emit wavelength-specific visible light. Here, we demonstrate that molecularly tailored UCNPs can serve as optogenetic actuators of transcranial NIR light to stimulate deep brain neurons. Transcranial NIR UCNP-mediated optogenetics evoked dopamine release from genetically tagged neurons in the ventral tegmental area, induced brain oscillations through activation of inhibitory neurons in the medial septum, silenced seizure by inhibition of hippocampal excitatory cells, and triggered memory recall. UCNP technology will enable less-invasive optical neuronal activity manipulation with the potential for remote therapy.

Journal ArticleDOI
TL;DR: This article presents a systematic review of artificial-intelligence-based system health management, with an emphasis on recent trends in deep learning within the field, and demonstrates plausible benefits for fault diagnosis and prognostics.

Journal ArticleDOI
21 Sep 2018-Science
TL;DR: A rationally engineered SpCas9 variant (SpCas9-NG) that can recognize relaxed NG PAMs is reported, which is a powerful addition to the CRISPR-Cas9 genome engineering toolbox and will be useful in a broad range of applications, from basic research to clinical therapeutics.
Abstract: The RNA-guided endonuclease Cas9 cleaves its target DNA and is a powerful genome-editing tool. However, the widely used Streptococcus pyogenes Cas9 enzyme (SpCas9) requires an NGG protospacer adjacent motif (PAM) for target recognition, thereby restricting the targetable genomic loci. Here, we report a rationally engineered SpCas9 variant (SpCas9-NG) that can recognize relaxed NG PAMs. The crystal structure revealed that the loss of the base-specific interaction with the third nucleobase is compensated by newly introduced non-base-specific interactions, thereby enabling the NG PAM recognition. We showed that SpCas9-NG induces indels at endogenous target sites bearing NG PAMs in human cells. Furthermore, we found that the fusion of SpCas9-NG and the activation-induced cytidine deaminase (AID) mediates the C-to-T conversion at target sites with NG PAMs in human cells.
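The practical gain from relaxing NGG to NG can be sketched by counting candidate PAM positions on a DNA strand (a toy illustration on a made-up sequence, not a guide-design tool; real design must also consider strand, guide placement, and off-targets):

```python
import re

def pam_sites(seq: str, pam: str) -> list[int]:
    """Start positions of all (possibly overlapping) PAM matches,
    where N in the PAM stands for any base."""
    pattern = pam.replace("N", "[ACGT]")
    # Zero-width lookahead so overlapping occurrences are all found.
    return [m.start() for m in re.finditer(f"(?={pattern})", seq)]

seq = "ACGGTTAGCAGGTAGA"
ngg = pam_sites(seq, "NGG")  # strict SpCas9 requirement
ng = pam_sites(seq, "NG")    # relaxed SpCas9-NG requirement
print(len(ngg), len(ng))     # 2 6
```

On this toy strand the relaxed PAM triples the number of candidate sites, which is the sense in which SpCas9-NG broadens the targetable loci.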

Journal ArticleDOI
TL;DR: It is revealed that the free-evolution dephasing is caused by charge noise—rather than conventional magnetic noise—as highlighted by a 1/f spectrum extended over seven decades of frequency, offering a promising route to large-scale spin-qubit systems with fault-tolerant controllability.
Abstract: The isolation of qubits from noise sources, such as surrounding nuclear spins and spin–electric susceptibility 1–4 , has enabled extensions of quantum coherence times in recent pivotal advances towards the concrete implementation of spin-based quantum computation. In fact, the possibility of achieving enhanced quantum coherence has been substantially doubted for nanostructures due to the characteristic high degree of background charge fluctuations 5–7 . Still, a sizeable spin–electric coupling will be needed in realistic multiple-qubit systems to address single-spin and spin–spin manipulations 8–10 . Here, we realize a single-electron spin qubit with an isotopically enriched phase coherence time (20 μs) 11,12 and fast electrical control speed (up to 30 MHz) mediated by extrinsic spin–electric coupling. Using rapid spin rotations, we reveal that the free-evolution dephasing is caused by charge noise—rather than conventional magnetic noise—as highlighted by a 1/f spectrum extended over seven decades of frequency. The qubit exhibits superior performance with single-qubit gate fidelities exceeding 99.9% on average, offering a promising route to large-scale spin-qubit systems with fault-tolerant controllability.

Journal ArticleDOI
TL;DR: In this paper, a review of the current understanding of primordial black holes (PBHs), with particular focus on those massive examples which remain at the present epoch, not having evaporated through Hawking radiation, is presented.
Abstract: This article reviews current understanding of primordial black holes (PBHs), with particular focus on those massive examples which remain at the present epoch, not having evaporated through Hawking radiation. With the detection of gravitational waves by LIGO, we have gained a completely novel observational tool to search for PBHs, complementary to those using electromagnetic waves. Taking the perspective that gravitational-wave astronomy will make significant progress in the coming decades, the purpose of this article is to give a comprehensive review covering a wide range of topics on PBHs. After discussing PBH formation, as well as several inflation models leading to PBH production, we summarize various existing and future observational constraints. We then present topics on formation of PBH binaries, gravitational waves from PBH binaries, and various observational tests of PBHs using gravitational waves.

Journal ArticleDOI
05 Jan 2018-Science
TL;DR: It is reported that low-molecular-weight polymers, when cross-linked by dense hydrogen bonds, yield mechanically robust yet readily repairable materials, despite their extremely slow diffusion dynamics.
Abstract: Expanding the range of healable materials is an important challenge for sustainable societies. Noncrystalline, high-molecular-weight polymers generally form mechanically robust materials, which, however, are difficult to repair once they are fractured. This is because their polymer chains are heavily entangled and diffuse too sluggishly to unite fractured surfaces within reasonable time scales. Here we report that low-molecular-weight polymers, when cross-linked by dense hydrogen bonds, yield mechanically robust yet readily repairable materials, despite their extremely slow diffusion dynamics. A key was to use thiourea, which anomalously forms a zigzag hydrogen-bonded array that does not induce unfavorable crystallization. Another key was to incorporate a structural element for activating the exchange of hydrogen-bonded pairs, which enables the fractured portions to rejoin readily upon compression.

Journal ArticleDOI
TL;DR: A second-order topological insulator in d dimensions is an insulator which has no (d−1)-dimensional topological boundary states but has (d−2)-dimensional topological boundary states; it is characterized by a quantized polarization, which constitutes the bulk topological index.
Abstract: A second-order topological insulator in d dimensions is an insulator which has no d-1 dimensional topological boundary states but has d-2 dimensional topological boundary states. It is an extended notion of the conventional topological insulator. Higher-order topological insulators have been investigated in square and cubic lattices. In this Letter, we generalize them to breathing kagome and pyrochlore lattices. First, we construct a second-order topological insulator on the breathing kagome lattice. Three topological boundary states emerge at the corner of the triangle, realizing a 1/3 fractional charge at each corner. Second, we construct a third-order topological insulator on the breathing pyrochlore lattice. Four topological boundary states emerge at the corners of the tetrahedron with a 1/4 fractional charge at each corner. These higher-order topological insulators are characterized by the quantized polarization, which constitutes the bulk topological index. Finally, we study a second-order topological semimetal by stacking the breathing kagome lattice.

Journal ArticleDOI
TL;DR: Among patients with stage III colon cancer receiving adjuvant therapy with FOLFOX or CAPOX, noninferiority of 3 months of therapy, as compared with 6 months, was not confirmed in the overall population.
Abstract: Background Since 2004, a regimen of 6 months of treatment with oxaliplatin plus a fluoropyrimidine has been standard adjuvant therapy in patients with stage III colon cancer. However, since oxaliplatin is associated with cumulative neurotoxicity, a shorter duration of therapy could spare toxic effects and health expenditures. Methods We performed a prospective, preplanned, pooled analysis of six randomized, phase 3 trials that were conducted concurrently to evaluate the noninferiority of adjuvant therapy with either FOLFOX (fluorouracil, leucovorin, and oxaliplatin) or CAPOX (capecitabine and oxaliplatin) administered for 3 months, as compared with 6 months. The primary end point was the rate of disease-free survival at 3 years. Noninferiority of 3 months versus 6 months of therapy could be claimed if the upper limit of the two-sided 95% confidence interval of the hazard ratio did not exceed 1.12. Results After 3263 events of disease recurrence or death had been reported in 12,834 patients, the n...
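The noninferiority criterion stated in the abstract can be sketched as a simple decision rule: claim noninferiority of 3 months of therapy only if the upper limit of the two-sided 95% confidence interval for the hazard ratio does not exceed 1.12. A minimal sketch, assuming a hazard ratio estimate and its standard error on the log scale as inputs (the example numbers are illustrative, not trial results):

```python
import math

# Margin from the pooled analysis: the upper limit of the two-sided 95% CI
# for the hazard ratio must not exceed 1.12 to claim noninferiority.
NONINFERIORITY_MARGIN = 1.12

def noninferior(hazard_ratio, se_log_hr, margin=NONINFERIORITY_MARGIN):
    """True if the upper bound of the two-sided 95% CI for the HR is <= margin."""
    upper = math.exp(math.log(hazard_ratio) + 1.96 * se_log_hr)
    return upper <= margin

# Illustrative inputs only:
print(noninferior(1.00, 0.05))  # True: upper bound exp(1.96*0.05) ~ 1.103 <= 1.12
print(noninferior(1.05, 0.05))  # False: upper bound ~ 1.158 > 1.12
```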

Journal ArticleDOI
01 Sep 2018-Nature
TL;DR: Self-powered ultra-flexible electronic devices that can measure biometric signals with very high signal-to-noise ratios when applied to skin or other tissue are realized and offer a general platform for next-generation self-powered electronics.
Abstract: Next-generation biomedical devices1-9 will need to be self-powered and conformable to human skin or other tissue. Such devices would enable the accurate and continuous detection of physiological signals without the need for an external power supply or bulky connecting wires. Self-powering functionality could be provided by flexible photovoltaics that can adhere to moveable and complex three-dimensional biological tissues1-4 and skin5-9. Ultra-flexible organic power sources10-13 that can be wrapped around an object have proven mechanical and thermal stability in long-term operation13, making them potentially useful in human-compatible electronics. However, the integration of these power sources with functional electric devices including sensors has not yet been demonstrated because of their unstable output power under mechanical deformation and angular change. Also, it will be necessary to minimize high-temperature and energy-intensive processes10,12 when fabricating an integrated power source and sensor, because such processes can damage the active material of the functional device and deform the few-micrometre-thick polymeric substrates. Here we realize self-powered ultra-flexible electronic devices that can measure biometric signals with very high signal-to-noise ratios when applied to skin or other tissue. We integrated organic electrochemical transistors used as sensors with organic photovoltaic power sources on a one-micrometre-thick ultra-flexible substrate. A high-throughput room-temperature moulding process was used to form nano-grating morphologies (with a periodicity of 760 nanometres) on the charge transporting layers. This substantially increased the efficiency of the organic photovoltaics, giving a high power-conversion efficiency that reached 10.5 per cent and resulted in a high power-per-weight value of 11.46 watts per gram. The organic electrochemical transistors exhibited a transconductance of 0.8 millisiemens and fast responsivity above one kilohertz under physiological conditions, which resulted in a maximum signal-to-noise ratio of 40.02 decibels for cardiac signal detection. Our findings offer a general platform for next-generation self-powered electronics.
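The reported 10.5 per cent power-conversion efficiency and 11.46 W/g power-per-weight figure can be related by a back-of-envelope calculation, assuming standard 1-sun illumination (100 mW/cm²; the irradiance is an assumption, not stated in the abstract):

```python
# Back-of-envelope check: power per weight = (PCE * irradiance) / areal mass,
# so the reported figures imply the device's areal mass.
pce = 0.105                       # reported power-conversion efficiency
irradiance_w_per_m2 = 1000.0      # assumed 1-sun illumination
power_per_weight_w_per_g = 11.46  # reported figure of merit

output_w_per_m2 = pce * irradiance_w_per_m2                    # 105 W/m^2
areal_mass_g_per_m2 = output_w_per_m2 / power_per_weight_w_per_g
print(f"Implied areal mass: {areal_mass_g_per_m2:.2f} g/m^2")  # ~9.16 g/m^2
```

The implied areal mass of roughly 9 g/m² is consistent in order of magnitude with a device built on a one-micrometre-thick polymeric substrate.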

Journal ArticleDOI
TL;DR: An update to the MAFFT multiple sequence alignment program enables parallel calculation of large numbers of sequences and introduces a scalable variant, G-large-INS-1, which has accuracy equivalent to G-INS-1 and is applicable to 50,000 or more sequences.
Abstract: We report an update for the MAFFT multiple sequence alignment program to enable parallel calculation of large numbers of sequences. The G-INS-1 option of MAFFT was recently reported to have higher accuracy than other methods for large data, but this method has been impractical for most large-scale analyses, due to the requirement of large computational resources. We introduce a scalable variant, G-large-INS-1, which has equivalent accuracy to G-INS-1 and is applicable to 50,000 or more sequences. Availability and implementation: This feature is available in MAFFT versions 7.355 or later at https://mafft.cbrc.jp/alignment/software/mpi.html. Supplementary information: Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
26 Jan 2018-Science
TL;DR: An efficient resonantly driven CNOT gate for electron spins in silicon is demonstrated and used to create an entangled quantum state called the Bell state with 78% fidelity, which enables multi-qubit algorithms in silicon.
Abstract: Single-qubit rotations and two-qubit CNOT operations are crucial ingredients for universal quantum computing. Although high-fidelity single-qubit operations have been achieved using the electron spin degree of freedom, realizing a robust CNOT gate has been challenging because of rapid nuclear spin dephasing and charge noise. We demonstrate an efficient resonantly driven CNOT gate for electron spins in silicon. Our platform achieves single-qubit rotations with fidelities greater than 99%, as verified by randomized benchmarking. Gate control of the exchange coupling allows a quantum CNOT gate to be implemented with resonant driving in ~200 nanoseconds. We used the CNOT gate to generate a Bell state with 78% fidelity (corrected for errors in state preparation and measurement). Our quantum dot device architecture enables multi-qubit algorithms in silicon.
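The Bell-state preparation described above (a single-qubit rotation followed by a CNOT) can be sketched at the gate level with the ideal textbook unitaries; the experiment uses resonantly driven spin gates, so the matrices below are the ideal abstraction, not a model of the device:

```python
import numpy as np

# Ideal Bell-state preparation: Hadamard on the control qubit, then CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi0 = np.array([1.0, 0.0, 0.0, 0.0])      # |00>
bell = CNOT @ np.kron(H, I) @ psi0         # (|00> + |11>)/sqrt(2)

# State fidelity F = <bell| rho |bell>; the paper's 78% would correspond to a
# measured density matrix rho with 0.78 overlap with this ideal state.
rho = np.outer(bell, bell)                 # ideal (pure) density matrix
fidelity = bell @ rho @ bell
print(np.round(bell, 3))                   # [0.707 0. 0. 0.707]
print(round(float(fidelity), 6))           # 1.0 for the ideal state
```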

Journal ArticleDOI
TL;DR: In this article, a coherent framework of topological phases of non-Hermitian Hamiltonians is developed, and K-theory is applied to systematically classify all the topological phases in the Altland-Zirnbauer classes in all dimensions.
Abstract: Recent experimental advances in controlling dissipation have brought about unprecedented flexibility in engineering non-Hermitian Hamiltonians in open classical and quantum systems. A particular interest centers on the topological properties of non-Hermitian systems, which exhibit unique phases with no Hermitian counterparts. However, no systematic understanding in analogy with the periodic table of topological insulators and superconductors has been achieved. In this paper, we develop a coherent framework of topological phases of non-Hermitian systems. After elucidating the physical meaning and the mathematical definition of non-Hermitian topological phases, we start with one-dimensional lattices, which exhibit topological phases with no Hermitian counterparts and are found to be characterized by an integer topological winding number even with no symmetry constraint, reminiscent of the quantum Hall insulator in Hermitian systems. A system with a nonzero winding number, which is experimentally measurable from the wave-packet dynamics, is shown to be robust against disorder, a phenomenon observed in the Hatano-Nelson model with asymmetric hopping amplitudes. We also unveil a novel bulk-edge correspondence that features an infinite number of (quasi-)edge modes. We then apply the K-theory to systematically classify all the non-Hermitian topological phases in the Altland-Zirnbauer classes in all dimensions. The obtained periodic table unifies time-reversal and particle-hole symmetries, leading to highly nontrivial predictions such as the absence of non-Hermitian topological phases in two dimensions. We provide concrete examples for all the nontrivial non-Hermitian AZ classes in zero and one dimensions. In particular, we identify a Z2 topological index for arbitrary quantum channels. Our work lays the cornerstone for a unified understanding of the role of topology in non-Hermitian systems.
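The integer winding number mentioned above can be computed numerically for the single-band Hatano-Nelson model, h(k) = t_R·e^{ik} + t_L·e^{-ik}, as the winding of arg[h(k) − E] around a base energy E as k traverses the Brillouin zone. A minimal sketch with illustrative parameter values and E = 0:

```python
import numpy as np

# Winding number of h(k) = t_R * exp(ik) + t_L * exp(-ik) around base energy E:
# W = (1/2pi) * ∮ dk d/dk arg[h(k) - E], evaluated by unwrapping the phase.
def winding_number(t_right, t_left, e_base=0.0, n_k=2001):
    k = np.linspace(0.0, 2.0 * np.pi, n_k)
    h = t_right * np.exp(1j * k) + t_left * np.exp(-1j * k) - e_base
    phase = np.unwrap(np.angle(h))
    return round((phase[-1] - phase[0]) / (2.0 * np.pi))

print(winding_number(1.0, 0.5))   # +1: rightward hopping dominates
print(winding_number(0.5, 1.0))   # -1: leftward hopping dominates
```

A nonzero value signals the point-gap topology with no Hermitian counterpart; for symmetric hopping (t_R = t_L) the spectrum collapses onto the real axis and no winding around E = 0 is defined.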

Journal ArticleDOI
TL;DR: It is demonstrated that even without prior biological knowledge of cross-phenotype relationships, genetics corresponding to clinical measurements successfully recapture those measurements’ relevance to diseases, and thus can contribute to the elucidation of unknown etiology and pathogenesis.
Abstract: Clinical measurements can be viewed as useful intermediate phenotypes to promote understanding of complex human diseases. To acquire comprehensive insights into the underlying genetics, here we conducted a genome-wide association study (GWAS) of 58 quantitative traits in 162,255 Japanese individuals. Overall, we identified 1,407 trait-associated loci (P < 5.0 × 10−8), 679 of which were novel. By incorporating 32 additional GWAS results for complex diseases and traits in Japanese individuals, we further highlighted pleiotropy, genetic correlations, and cell-type specificity across quantitative traits and diseases, which substantially expands the current understanding of the associated genetics and biology. This study identified both shared polygenic effects and cell-type specificity, represented by the genetic links among clinical measurements, complex diseases, and relevant cell types. Our findings demonstrate that even without prior biological knowledge of cross-phenotype relationships, genetics corresponding to clinical measurements successfully recapture those measurements’ relevance to diseases, and thus can contribute to the elucidation of unknown etiology and pathogenesis. A genome-wide association study (GWAS) of 58 traits using data from the Biobank Japan Project identifies 1,407 loci, 679 of which are novel. Comparison with disease GWASs and analysis of genetic correlations and cell-type enrichment show that these clinical measurements are relevant to human disease.
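The significance filter quoted above (P < 5.0 × 10⁻⁸) is the standard genome-wide threshold. A minimal sketch applying it to made-up association results, using the normal approximation to turn a z-score into a two-sided p-value (the variant names and z-scores are hypothetical):

```python
import math

# Genome-wide significance threshold used in the study.
GWAS_THRESHOLD = 5.0e-8

def two_sided_p(z):
    """Two-sided p-value for a z-score under the normal approximation."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical (variant_id, z_score) association results:
results = [("rs_a", 6.2), ("rs_b", 4.8), ("rs_c", 7.5)]
significant = [snp for snp, z in results if two_sided_p(z) < GWAS_THRESHOLD]
print(significant)  # only |z| above roughly 5.45 passes the 5e-8 threshold
```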

Book
27 Mar 2018
TL;DR: Imitation learning, as discussed by the authors, is the process of learning behavior from demonstrations: it is often easier for a teacher to demonstrate a desired behavior than to manually engineer it.
Abstract: As robots and other intelligent agents move from simple environments and problems to more complex, unstructured settings, manually programming their behavior has become increasingly challenging and expensive. Often, it is easier for a teacher to demonstrate a desired behavior rather than attempt to manually engineer it. This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning. It covers the underlying assumptions, approaches, and how they relate; the rich set of algorithms developed to tackle the problem; and advice on effective tools and implementation. We intend this paper to serve two audiences. First, we want to familiarize machine learning experts with the challenges of imitation learning, particularly those arising in robotics, and the interesting theoretical and practical distinctions between it and more familiar frameworks like statistical supervised learning theory and reinforcement learning. Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning. We pay particular attention to the intimate connection between imitation learning approaches and those of structured prediction [Daume III et al., 2009]. To structure this discussion, we categorize imitation learning techniques based on the following key criteria which drive algorithmic decisions: 1) The structure of the policy space. Is the learned policy a time-indexed trajectory (trajectory learning), a mapping from observations to actions (so-called behavioral cloning [Bain and Sammut, 1996]), or the result of a complex optimization or planning problem at each execution, as is common in inverse optimal control methods [Kalman, 1964, Moylan and Anderson, 1973]? 2) The information available during training and testing. In particular, is the learning algorithm privy to the full state that the teacher possesses? Is the learner able to interact with the teacher and gather corrections or more data? Does the learner have a (typically a priori) model of the system with which it interacts? Does the learner have access to the reward (cost) function that the teacher is attempting to optimize? 3) The notion of success. Different algorithmic approaches provide varying guarantees on the resulting learned behavior. These guarantees range from weaker (e.g., measuring disagreement with the agent's decision) to stronger (e.g., providing guarantees on the performance of the learner with respect to a true cost function, either known or unknown). We organize our work by paying particular attention to distinction (1): dividing imitation learning into directly replicating desired behavior (sometimes called behavioral cloning) and learning the hidden objectives of the desired behavior from demonstrations (called inverse optimal control or inverse reinforcement learning [Russell, 1998]). In the latter case, behavior arises as the result of an optimization problem solved for each new instance that the learner faces. In addition to method analysis, we discuss the design decisions a practitioner must make when selecting an imitation learning approach. Moreover, application examples—such as robots that play table tennis [Kober and Peters, 2009], programs that play the game of Go [Silver et al., 2016], and systems that understand natural language [Wen et al., 2015]—illustrate the properties and motivations behind different forms of imitation learning. We conclude by presenting a set of open questions and pointing toward possible future research directions for machine learning.
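The simplest of the policy-space options above, behavioral cloning, treats demonstrations as a supervised dataset of (observation, action) pairs and fits a policy by regression. A minimal sketch in which the "expert" is a made-up linear controller (real demonstrations would come from a human teacher or an existing controller):

```python
import numpy as np

# Behavioral cloning = supervised regression from observations to actions.
rng = np.random.default_rng(0)
true_gain = np.array([[-1.5, -0.4]])      # hypothetical expert policy u = K x

observations = rng.normal(size=(200, 2))  # demonstrated states
actions = observations @ true_gain.T      # expert actions (noise-free here)

# Fit a linear policy by ordinary least squares on the demonstration data.
K_hat, *_ = np.linalg.lstsq(observations, actions, rcond=None)

print(np.round(K_hat.T, 3))               # recovers the expert gain [[-1.5 -0.4]]
```

This supervised framing is exactly what makes behavioral cloning vulnerable to compounding errors at test time, which motivates the interactive and inverse-optimal-control approaches the survey goes on to compare.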

Journal ArticleDOI
TL;DR: In this paper, a salt-concentrated electrolyte design was proposed to resolve the dilemma that conventional electrolytes are flammable and volatile while flame-retardant solvents fail to passivate carbonaceous anodes.
Abstract: Severe safety concerns are impeding the large-scale employment of lithium/sodium batteries. Conventional electrolytes are highly flammable and volatile, which may cause catastrophic fires or explosions. Efforts to introduce flame-retardant solvents into the electrolytes have generally resulted in compromised battery performance because those solvents do not suitably passivate carbonaceous anodes. Here we report a salt-concentrated electrolyte design to resolve this dilemma via the spontaneous formation of a robust inorganic passivation film on the anode. We demonstrate that a concentrated electrolyte using a salt and a popular flame-retardant solvent (trimethyl phosphate), without any additives or soft binders, allows stable charge–discharge cycling of both hard-carbon and graphite anodes for more than 1,000 cycles (over one year) with negligible degradation; this performance is comparable or superior to that of conventional flammable carbonate electrolytes. The unusual passivation character of the concentrated electrolyte coupled with its fire-extinguishing property contributes to developing safe and long-lasting batteries, unlocking the limit toward development of much higher energy-density batteries.

Journal ArticleDOI
TL;DR: The opportunities enabled by recent advances in synthetic approaches for design of both local and overall structure, state-of-the-art characterization techniques to distinguish unique structural and chemical states, and chemical/physical properties emerging from the synergy of multiple anions for catalysis, energy conversion, and electronic materials are discussed.
Abstract: During the last century, inorganic oxide compounds laid foundations for materials synthesis, characterization, and technology translation by adding new functions into devices previously dominated by main-group element semiconductor compounds. Today, compounds with multiple anions beyond the single-oxide ion, such as oxyhalides and oxyhydrides, offer a new materials platform from which superior functionality may arise. Here we review the recent progress, status, and future prospects and challenges facing the development and deployment of mixed-anion compounds, focusing mainly on oxide-derived materials. We devote attention to the crucial roles that multiple anions play during synthesis, characterization, and in the physical properties of these materials. We discuss the opportunities enabled by recent advances in synthetic approaches for design of both local and overall structure, state-of-the-art characterization techniques to distinguish unique structural and chemical states, and chemical/physical properties emerging from the synergy of multiple anions for catalysis, energy conversion, and electronic materials.