
Showing papers by "Massachusetts Institute of Technology" published in 2018


Journal ArticleDOI
05 Mar 2018-Nature
TL;DR: The realization of intrinsic unconventional superconductivity, which cannot be explained by weak electron–phonon interactions, is reported in a two-dimensional superlattice created by stacking two sheets of graphene twisted relative to each other by a small angle.
Abstract: The behaviour of strongly correlated materials, and in particular unconventional superconductors, has been studied extensively for decades, but is still not well understood. This lack of theoretical understanding has motivated the development of experimental techniques for studying such behaviour, such as using ultracold atom lattices to simulate quantum materials. Here we report the realization of intrinsic unconventional superconductivity-which cannot be explained by weak electron-phonon interactions-in a two-dimensional superlattice created by stacking two sheets of graphene that are twisted relative to each other by a small angle. For twist angles of about 1.1°-the first 'magic' angle-the electronic band structure of this 'twisted bilayer graphene' exhibits flat bands near zero Fermi energy, resulting in correlated insulating states at half-filling. Upon electrostatic doping of the material away from these correlated insulating states, we observe tunable zero-resistance states with a critical temperature of up to 1.7 kelvin. The temperature-carrier-density phase diagram of twisted bilayer graphene is similar to that of copper oxides (or cuprates), and includes dome-shaped regions that correspond to superconductivity. Moreover, quantum oscillations in the longitudinal resistance of the material indicate the presence of small Fermi surfaces near the correlated insulating states, in analogy with underdoped cuprates. The relatively high superconducting critical temperature of twisted bilayer graphene, given such a small Fermi surface (which corresponds to a carrier density of about 10^11 per square centimetre), puts it among the superconductors with the strongest pairing strength between electrons. Twisted bilayer graphene is a precisely tunable, purely carbon-based, two-dimensional superconductor. It is therefore an ideal material for investigations of strongly correlated phenomena, which could lead to insights into the physics of high-critical-temperature superconductors and quantum spin liquids.

5,613 citations


Journal ArticleDOI
09 Mar 2018-Science
TL;DR: A large-scale analysis of tweets reveals that false rumors spread farther and faster than the truth and that false news was more novel than true news, suggesting that people were more likely to share novel information.
Abstract: We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.

4,241 citations


Proceedings Article
15 Feb 2018
TL;DR: This article studied the adversarial robustness of neural networks through the lens of robust optimization and identified methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
Abstract: Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples—inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. Code and pre-trained models are available at this https URL and this https URL.

3,581 citations
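The robust-optimization view in the abstract above is commonly instantiated as projected gradient descent (PGD) adversarial training: an inner loop that maximizes the loss within an ℓ∞ ball around each input, and an outer loop that minimizes the loss on those worst-case points. A minimal PyTorch sketch of that min-max loop, assuming a generic classifier `model`, labels `y`, and pixel values in [0, 1]; the hyperparameters are illustrative, not the paper's exact settings:

```python
import torch

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: projected gradient ascent inside an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss along the gradient sign, then project back onto the eps-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: one optimizer step on the adversarially perturbed batch."""
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The signed-gradient ascent with re-projection corresponds to the inner maximization in the robust-optimization framing; training on the resulting perturbed examples is the outer minimization.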


Journal ArticleDOI
Lorenzo Galluzzi1, Lorenzo Galluzzi2, Ilio Vitale3, Stuart A. Aaronson4 +183 more · Institutions (111)
TL;DR: The Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives.
Abstract: Over the past decade, the Nomenclature Committee on Cell Death (NCCD) has formulated guidelines for the definition and interpretation of cell death from morphological, biochemical, and functional perspectives. Since the field continues to expand and novel mechanisms that orchestrate multiple cell death pathways are unveiled, we propose an updated classification of cell death subroutines focusing on mechanistic and essential (as opposed to correlative and dispensable) aspects of the process. As we provide molecularly oriented definitions of terms including intrinsic apoptosis, extrinsic apoptosis, mitochondrial permeability transition (MPT)-driven necrosis, necroptosis, ferroptosis, pyroptosis, parthanatos, entotic cell death, NETotic cell death, lysosome-dependent cell death, autophagy-dependent cell death, immunogenic cell death, cellular senescence, and mitotic catastrophe, we discuss the utility of neologisms that refer to highly specialized instances of these processes. The mission of the NCCD is to provide a widely accepted nomenclature on cell death in support of the continued development of the field.

3,301 citations


Journal ArticleDOI
17 Apr 2018-Immunity
TL;DR: An extensive immunogenomic analysis of more than 10,000 tumors comprising 33 diverse cancer types by utilizing data compiled by TCGA identifies six immune subtypes that encompass multiple cancer types and are hypothesized to define immune response patterns impacting prognosis.

3,246 citations


Journal ArticleDOI
TL;DR: The Places Database is described: a repository of 10 million scene photographs labeled with scene semantic categories, covering a large and diverse set of the types of environments encountered in the world; state-of-the-art Convolutional Neural Networks trained on it provide baselines that significantly outperform previous approaches.
Abstract: The rise of multi-million-item dataset initiatives has enabled data-hungry machine learning algorithms to reach near-human semantic classification performance at tasks such as visual object and scene recognition. Here we describe the Places Database, a repository of 10 million scene photographs, labeled with scene semantic categories, comprising a large and diverse list of the types of environments encountered in the world. Using the state-of-the-art Convolutional Neural Networks (CNNs), we provide scene classification CNNs (Places-CNNs) as baselines, that significantly outperform the previous approaches. Visualization of the CNNs trained on Places shows that object detectors emerge as an intermediate representation of scene classification. With its high-coverage and high-diversity of exemplars, the Places Database along with the Places-CNNs offer a novel resource to guide future progress on scene recognition problems.

3,215 citations


Journal ArticleDOI
05 Mar 2018-Nature
TL;DR: It is shown experimentally that when the twist angle between two stacked graphene sheets is close to the 'magic' angle, the electronic band structure near zero Fermi energy becomes flat owing to strong interlayer coupling, and these flat bands exhibit insulating states at half-filling, which are not expected in the absence of correlations between electrons.
Abstract: A van der Waals heterostructure is a type of metamaterial that consists of vertically stacked two-dimensional building blocks held together by the van der Waals forces between the layers. This design means that the properties of van der Waals heterostructures can be engineered precisely, even more so than those of two-dimensional materials. One such property is the 'twist' angle between different layers in the heterostructure. This angle has a crucial role in the electronic properties of van der Waals heterostructures, but does not have a direct analogue in other types of heterostructure, such as semiconductors grown using molecular beam epitaxy. For small twist angles, the moire pattern that is produced by the lattice misorientation between the two-dimensional layers creates long-range modulation of the stacking order. So far, studies of the effects of the twist angle in van der Waals heterostructures have concentrated mostly on heterostructures consisting of monolayer graphene on top of hexagonal boron nitride, which exhibit relatively weak interlayer interaction owing to the large bandgap in hexagonal boron nitride. Here we study a heterostructure consisting of bilayer graphene, in which the two graphene layers are twisted relative to each other by a certain angle. We show experimentally that, as predicted theoretically, when this angle is close to the 'magic' angle the electronic band structure near zero Fermi energy becomes flat, owing to strong interlayer coupling. These flat bands exhibit insulating states at half-filling, which are not expected in the absence of correlations between electrons. We show that these correlated states at half-filling are consistent with Mott-like insulator states, which can arise from electrons being localized in the superlattice that is induced by the moire pattern. These properties of magic-angle-twisted bilayer graphene heterostructures suggest that these materials could be used to study other exotic many-body quantum phases in two dimensions in the absence of a magnetic field. The accessibility of the flat bands through electrical tunability and the bandwidth tunability through the twist angle could pave the way towards more exotic correlated systems, such as unconventional superconductors and quantum spin liquids.

3,005 citations


Journal ArticleDOI
TL;DR: By parsing the unique classes and subclasses of tumor immune microenvironment (TIME) that exist within a patient’s tumor, the ability to predict and guide immunotherapeutic responsiveness will improve, and new therapeutic targets will be revealed.
Abstract: The clinical successes in immunotherapy have been both astounding and at the same time unsatisfactory. Countless patients with varied tumor types have seen pronounced clinical response with immunotherapeutic intervention; however, many more patients have experienced minimal or no clinical benefit when provided the same treatment. As technology has advanced, so has the understanding of the complexity and diversity of the immune context of the tumor microenvironment and its influence on response to therapy. It has been possible to identify different subclasses of immune environment that have an influence on tumor initiation and response and therapy; by parsing the unique classes and subclasses of tumor immune microenvironment (TIME) that exist within a patient's tumor, the ability to predict and guide immunotherapeutic responsiveness will improve, and new therapeutic targets will be revealed.

2,920 citations


Book
16 Feb 2018
TL;DR: Six perspectives of fit are identified, each implying distinct theoretical meanings and requiring the use of specific analytical schemes, and explicit links between theoretical propositions and operational tests are argued for.
Abstract: This article develops a conceptual framework and identifies six perspectives of fit—fit as moderation, fit as mediation, fit as matching, fit as gestalts, fit as profile deviation, and fit as covariation—each implying distinct theoretical meanings and requiring the use of specific analytical schemes. These six perspectives highlight the isomorphic nature of the correspondence between a particular concept and its subsequent testing scheme(s), but it appears that researchers have used these perspectives interchangeably, often invoking one perspective in the theoretical discussion while employing another in the empirical research. Because such research practices weaken the critical link between theory development and theory testing, this article argues for explicit links between theoretical propositions and operational tests.

2,520 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the current state-of-the-art of CO2 capture, transport, utilisation and storage from a multi-scale perspective, moving from the global to molecular scales.
Abstract: Carbon capture and storage (CCS) is broadly recognised as having the potential to play a key role in meeting climate change targets, delivering low carbon heat and power, decarbonising industry and, more recently, its ability to facilitate the net removal of CO2 from the atmosphere. However, despite this broad consensus and its technical maturity, CCS has not yet been deployed on a scale commensurate with the ambitions articulated a decade ago. Thus, in this paper we review the current state-of-the-art of CO2 capture, transport, utilisation and storage from a multi-scale perspective, moving from the global to molecular scales. In light of the COP21 commitments to limit warming to less than 2 °C, we extend the remit of this study to include the key negative emissions technologies (NETs) of bioenergy with CCS (BECCS), and direct air capture (DAC). Cognisant of the non-technical barriers to deploying CCS, we reflect on recent experience from the UK's CCS commercialisation programme and consider the commercial and political barriers to the large-scale deployment of CCS. In all areas, we focus on identifying and clearly articulating the key research challenges that could usefully be addressed in the coming decade.

2,088 citations


Journal ArticleDOI
Naomi R. Wray1, Stephan Ripke2, Stephan Ripke3, Stephan Ripke4 +259 more · Institutions (79)
TL;DR: A genome-wide association meta-analysis of individuals with clinically assessed or self-reported depression identifies 44 independent and significant loci and finds important relationships of genetic risk for major depression with educational attainment, body mass, and schizophrenia.
Abstract: Major depressive disorder (MDD) is a common illness accompanied by considerable morbidity, mortality, costs, and heightened risk of suicide. We conducted a genome-wide association meta-analysis based in 135,458 cases and 344,901 controls and identified 44 independent and significant loci. The genetic findings were associated with clinical features of major depression and implicated brain regions exhibiting anatomical differences in cases. Targets of antidepressant medications and genes involved in gene splicing were enriched for smaller association signal. We found important relationships of genetic risk for major depression with educational attainment, body mass, and schizophrenia: lower educational attainment and higher body mass were putatively causal, whereas major depression and schizophrenia reflected a partly shared biological etiology. All humans carry lesser or greater numbers of genetic risk factors for major depression. These findings help refine the basis of major depression and imply that a continuous measure of risk underlies the clinical phenotype.

Proceedings Article
01 Oct 2018
TL;DR: In this paper, the expressive power of GNNs to capture different graph structures is analyzed and a simple architecture for graph representation learning is proposed. The results characterize the discriminative power of popular GNN variants and show that they cannot learn to distinguish certain simple graph structures.
Abstract: Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
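The "simple architecture that is provably the most expressive" corresponds to sum aggregation over neighbours followed by an MLP (the Graph Isomorphism Network). A minimal dense-adjacency PyTorch sketch; the layer widths, the learnable epsilon, and the sum readout below are illustrative assumptions rather than the authors' reference implementation:

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """h_v' = MLP((1 + eps) * h_v + sum_{u in N(v)} h_u)  -- injective sum aggregation."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) dense adjacency without self-loops
        return self.mlp((1 + self.eps) * h + adj @ h)

def graph_readout(h):
    """Sum pooling over nodes yields a graph-level representation for classification."""
    return h.sum(dim=0)
```

Sum aggregation (rather than mean or max) is what preserves multiset information about the neighbourhood, which is the property tied to Weisfeiler-Lehman-level discriminative power in the paper's analysis.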

Posted Content
TL;DR: This work identifies obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples, and develops attack techniques to overcome this effect.
Abstract: We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers.
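One representative technique for overcoming shattered, non-differentiable preprocessing (one of the obfuscated-gradient types discussed above) is to approximate the preprocessor by the identity on the backward pass. A hedged PyTorch sketch; `quantize` is a stand-in for an arbitrary input transform and not any specific defense evaluated in the paper:

```python
import torch

def quantize(x, levels=8):
    """A stand-in non-differentiable preprocessor (bit-depth reduction); its true gradient is ~0 everywhere."""
    return torch.round(x * (levels - 1)) / (levels - 1)

class BPDAQuantize(torch.autograd.Function):
    """Forward: apply the real transform; backward: pretend it was the identity (straight-through)."""
    @staticmethod
    def forward(ctx, x):
        return quantize(x)

    @staticmethod
    def backward(ctx, grad_output):
        # approximate d(quantize)/dx by the identity so gradients reach the input
        return grad_output

# An attack then differentiates loss(model(BPDAQuantize.apply(x_adv)), y)
# with ordinary PGD steps, circumventing the masked gradients.
```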

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Fausto Acernese3 +1235 more · Institutions (132)
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8^{+2.0}_{-1.7} km for the heavier star and R2 = 10.7^{+2.1}_{-1.5} km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain R1 = 11.9^{+1.4}_{-1.4} km and R2 = 11.9^{+1.4}_{-1.4} km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with pressure at twice nuclear saturation density measured at 3.5^{+2.7}_{-1.7} × 10^{34} dyn cm^{-2} at the 90% level.

Journal ArticleDOI
27 Jul 2018-Science
TL;DR: It is postulated that super-enhancers are phase-separated multimolecular assemblies, also known as biomolecular condensates, which provide a means to compartmentalize and concentrate biochemical reactions within cells.
Abstract: Super-enhancers (SEs) are clusters of enhancers that cooperatively assemble a high density of transcriptional apparatus to drive robust expression of genes with prominent roles in cell identity. Here, we demonstrate that the SE-enriched transcriptional coactivators BRD4 and MED1 form nuclear puncta at SEs that exhibit properties of liquid-like condensates and are disrupted by chemicals that perturb condensates. The intrinsically disordered regions (IDRs) of BRD4 and MED1 can form phase-separated droplets and MED1-IDR droplets can compartmentalize and concentrate transcription apparatus from nuclear extracts. These results support the idea that coactivators form phase-separated condensates at SEs that compartmentalize and concentrate the transcription apparatus, suggest a role for coactivator IDRs in this process, and offer insights into mechanisms involved in control of key cell identity genes.

Journal ArticleDOI
27 Apr 2018-Science
TL;DR: SHERLOCK, as discussed by the authors, is a platform that combines isothermal preamplification with Cas13 to detect single molecules of RNA or DNA; it can detect Dengue or Zika virus single-stranded RNA and mutations in patient liquid biopsy samples via lateral flow.
Abstract: Rapid detection of nucleic acids is integral for clinical diagnostics and biotechnological applications. We recently developed a platform termed SHERLOCK (specific high-sensitivity enzymatic reporter unlocking) that combines isothermal preamplification with Cas13 to detect single molecules of RNA or DNA. Through characterization of CRISPR enzymology and application development, we report here four advances integrated into SHERLOCK version 2 (SHERLOCKv2) (i) four-channel single-reaction multiplexing with orthogonal CRISPR enzymes; (ii) quantitative measurement of input as low as 2 attomolar; (iii) 3.5-fold increase in signal sensitivity by combining Cas13 with Csm6, an auxiliary CRISPR-associated enzyme; and (iv) lateral-flow readout. SHERLOCKv2 can detect Dengue or Zika virus single-stranded RNA as well as mutations in patient liquid biopsy samples via lateral flow, highlighting its potential as a multiplexable, portable, rapid, and quantitative detection platform of nucleic acids.

Proceedings Article
02 Dec 2018
TL;DR: ProxylessNAS is presented, which can directly learn the architectures for large-scale target tasks and target hardware platforms and apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design.
Abstract: Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. 10^4 GPU hours) makes it difficult to directly search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize proxy tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS that can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design.
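The hardware-aware part of the search can be summarized as adding an expected-latency term, built from per-operation latency measurements, to the training loss. A minimal sketch under assumed names (`arch_params`, `latency_table`, and the simple additive penalty are illustrative; the released system profiles candidate ops directly on the target device):

```python
import torch

def expected_latency(path_probs, path_latency_ms):
    """E[latency] of one mixed operation: sum_i p_i * latency_i over its candidate ops."""
    return (path_probs * path_latency_ms).sum()

def nas_loss(task_loss, arch_params, latency_table, lam=0.1):
    """Task loss plus a differentiable latency penalty summed over layers.

    arch_params: list of 1-D architecture-parameter tensors, one per layer.
    latency_table: list of 1-D tensors of measured per-op latencies (ms), one per layer.
    """
    lat = sum(expected_latency(torch.softmax(alpha, dim=0), lat_ms)
              for alpha, lat_ms in zip(arch_params, latency_table))
    return task_loss + lam * lat
```

Because the penalty is differentiable in the architecture parameters, latency can be traded off against accuracy by ordinary gradient descent during the search.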

Journal ArticleDOI
TL;DR: Four scientists have been asked for their opinions on the role of EMT in cancer and the challenges faced by scientists working in this fast-moving field.
Abstract: Similar to embryonic development, changes in cell phenotypes defined as an epithelial to mesenchymal transition (EMT) have been shown to play a role in the tumorigenic process. Although the first description of EMT in cancer was in cell cultures, evidence for its role in vivo is now widely reported but also actively debated. Moreover, current research has exemplified just how complex this phenomenon is in cancer, leaving many exciting, open questions for researchers to answer in the future. With these points in mind, we asked four scientists for their opinions on the role of EMT in cancer and the challenges faced by scientists working in this fast-moving field.

Journal ArticleDOI
01 Jun 2018-Nature
TL;DR: 3D printing of programmed ferromagnetic domains in soft materials that enable fast transformations between complex 3D shapes via magnetic actuation is reported, enabling a set of previously inaccessible modes of transformation, such as remotely controlled auxetic behaviours of mechanical metamaterials with negative Poisson's ratios.
Abstract: Soft materials capable of transforming between three-dimensional (3D) shapes in response to stimuli such as light, heat, solvent, electric and magnetic fields have applications in diverse areas such as flexible electronics1,2, soft robotics3,4 and biomedicine5–7. In particular, magnetic fields offer a safe and effective manipulation method for biomedical applications, which typically require remote actuation in enclosed and confined spaces8–10. With advances in magnetic field control11, magnetically responsive soft materials have also evolved from embedding discrete magnets12 or incorporating magnetic particles13 into soft compounds to generating nonuniform magnetization profiles in polymeric sheets14,15. Here we report 3D printing of programmed ferromagnetic domains in soft materials that enable fast transformations between complex 3D shapes via magnetic actuation. Our approach is based on direct ink writing16 of an elastomer composite containing ferromagnetic microparticles. By applying a magnetic field to the dispensing nozzle while printing17, we reorient particles along the applied field to impart patterned magnetic polarity to printed filaments. This method allows us to program ferromagnetic domains in complex 3D-printed soft materials, enabling a set of previously inaccessible modes of transformation, such as remotely controlled auxetic behaviours of mechanical metamaterials with negative Poisson’s ratios. The actuation speed and power density of our printed soft materials with programmed ferromagnetic domains are orders of magnitude greater than existing 3D-printed active materials. We further demonstrate diverse functions derived from complex shape changes, including reconfigurable soft electronics, a mechanical metamaterial that can jump and a soft robot that crawls, rolls, catches fast-moving objects and transports a pharmaceutical dose.

Journal ArticleDOI
TL;DR: A crystal graph convolutional neural network framework is developed to directly learn material properties from the connections of atoms in the crystal, providing a universal and interpretable representation of crystalline materials.
Abstract: The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformation of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides a highly accurate prediction of density functional theory calculated properties for eight different properties of crystals with various structure types and compositions after being trained with 10^{4} data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using an example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.
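The idea of learning directly from atomic connectivity can be sketched as a gated message-passing update over (atom, neighbour, bond) triples, followed by pooling into a crystal-level descriptor for property regression. The exact gating form, tensor shapes, and neighbour handling below are illustrative assumptions, not the published layer definition:

```python
import torch
import torch.nn as nn

class CrystalGraphConv(nn.Module):
    """Gated convolution over (atom, neighbour, bond) triples with a residual update."""
    def __init__(self, atom_dim, bond_dim):
        super().__init__()
        self.lin = nn.Linear(2 * atom_dim + bond_dim, 2 * atom_dim)

    def forward(self, atom_feats, bond_feats, nbr_idx):
        # atom_feats: (N, atom_dim); bond_feats: (N, M, bond_dim); nbr_idx: (N, M) neighbour indices
        nbr_feats = atom_feats[nbr_idx]                         # (N, M, atom_dim)
        self_feats = atom_feats.unsqueeze(1).expand_as(nbr_feats)
        z = self.lin(torch.cat([self_feats, nbr_feats, bond_feats], dim=-1))
        gate, core = z.chunk(2, dim=-1)                         # split into filter and message
        msg = torch.sigmoid(gate) * nn.functional.softplus(core)
        return atom_feats + msg.sum(dim=1)                      # aggregate neighbour messages

def crystal_readout(atom_feats):
    """Average pooling over atoms gives a fixed-size crystal descriptor for a property head."""
    return atom_feats.mean(dim=0)
```

The learned per-neighbour gates are also what make the contribution of each local chemical environment inspectable, in the spirit of the interpretability claim above.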

Journal ArticleDOI
31 Oct 2018-Nature
TL;DR: This study establishes a combined transcriptomic and projectional taxonomy of cortical cell types from functionally distinct areas of the adult mouse cortex, defining 133 transcriptomic cell types and matching transcriptomic types of glutamatergic neurons to their long-range projection specificity.
Abstract: The neocortex contains a multitude of cell types that are segregated into layers and functionally distinct areas. To investigate the diversity of cell types across the mouse neocortex, here we analysed 23,822 cells from two areas at distant poles of the mouse neocortex: the primary visual cortex and the anterior lateral motor cortex. We define 133 transcriptomic cell types by deep, single-cell RNA sequencing. Nearly all types of GABA (γ-aminobutyric acid)-containing neurons are shared across both areas, whereas most types of glutamatergic neurons were found in one of the two areas. By combining single-cell RNA sequencing and retrograde labelling, we match transcriptomic types of glutamatergic neurons to their long-range projection specificity. Our study establishes a combined transcriptomic and projectional taxonomy of cortical cell types from functionally distinct areas of the adult mouse cortex.


Journal ArticleDOI
25 May 2018-Science
TL;DR: Research prospects for more sustainable routes to nitrogen commodity chemicals are reviewed, considering developments in enzymatic, homogeneous, and heterogeneous catalysis, as well as electrochemical, photochemical, and plasma-based approaches.
Abstract: BACKGROUND The invention of the Haber-Bosch (H-B) process in the early 1900s to produce ammonia industrially from nitrogen and hydrogen revolutionized the manufacture of fertilizer and led to fundamental changes in the way food is produced. Its impact is underscored by the fact that about 50% of the nitrogen atoms in humans today originate from this single industrial process. In the century after the H-B process was invented, the chemistry of carbon moved to center stage, resulting in remarkable discoveries and a vast array of products including plastics and pharmaceuticals. In contrast, little has changed in industrial nitrogen chemistry. This scenario reflects both the inherent efficiency of the H-B process and the particular challenge of breaking the strong dinitrogen bond. Nonetheless, the reliance of the H-B process on fossil fuels and its associated high CO 2 emissions have spurred recent interest in finding more sustainable and environmentally benign alternatives. Nitrogen in its more oxidized forms is also industrially, biologically, and environmentally important, and synergies in new combinations of oxidative and reductive transformations across the nitrogen cycle could lead to improved efficiencies. ADVANCES Major effort has been devoted to developing alternative and environmentally friendly processes that would allow NH 3 production at distributed sources under more benign conditions, rather than through the large-scale centralized H-B process. Hydrocarbons (particularly methane) and water are the only two sources of hydrogen atoms that can sustain long-term, large-scale NH 3 production. The use of water as the hydrogen source for NH 3 production requires substantially more energy than using methane, but it is also more environmentally benign, does not contribute to the accumulation of greenhouse gases, and does not compete for valuable and limited hydrocarbon resources. Microbes living in all major ecosystems are able to reduce N 2 to NH 3 by using the enzyme nitrogenase. A deeper understanding of this enzyme could lead to more efficient catalysts for nitrogen reduction under ambient conditions. Model molecular catalysts have been designed that mimic some of the functions of the active site of nitrogenase. Some modest success has also been achieved in designing electrocatalysts for dinitrogen reduction. Electrochemistry avoids the expense and environmental damage of steam reforming of methane (which accounts for most of the cost of the H-B process), and it may provide a means for distributed production of ammonia. On the oxidative side, nitric acid is the principal commodity chemical containing oxidized nitrogen. Nearly all nitric acid is manufactured by oxidation of NH 3 through the Ostwald process, but a more direct reaction of N 2 with O 2 might be practically feasible through further development of nonthermal plasma technology. Heterogeneous NH 3 oxidation with O 2 is at the heart of the Ostwald process and is practiced in a variety of environmental protection applications as well. Precious metals remain the workhorse catalysts, and opportunities therefore exist to develop lower-cost materials with equivalent or better activity and selectivity. Nitrogen oxides are also environmentally hazardous pollutants generated by industrial and transportation activities, and extensive research has gone into developing and applying reduction catalysts. Three-way catalytic converters are operating on hundreds of millions of vehicles worldwide. 
However, increasingly stringent emissions regulations, coupled with the low exhaust temperatures of high-efficiency engines, present challenges for future combustion emissions control. Bacterial denitrification is the natural analog of this chemistry and another source of study and inspiration for catalyst design. OUTLOOK Demands for greater energy efficiency, smaller-scale and more flexible processes, and environmental protection provide growing impetus for expanding the scope of nitrogen chemistry. Nitrogenase, as well as nitrifying and denitrifying enzymes, will eventually be understood in sufficient detail that robust molecular catalytic mimics will emerge. Electrochemical and photochemical methods also demand more study. Other intriguing areas of research that have provided tantalizing results include chemical looping and plasma-driven processes. The grand challenge in the field of nitrogen chemistry is the development of catalysts and processes that provide simple, low-energy routes to the manipulation of the redox states of nitrogen.

Journal ArticleDOI
TL;DR: Tao et al. review the development of the key components for achieving high-performance evaporation, including solar absorbers, evaporation structures, thermal insulators and thermal concentrators.
Abstract: As a ubiquitous solar-thermal energy conversion process, solar-driven evaporation has attracted tremendous research attention owing to its high conversion efficiency of solar energy and transformative industrial potential. In recent years, solar-driven interfacial evaporation by localization of solar-thermal energy conversion to the air/liquid interface has been proposed as a promising alternative to conventional bulk heating-based evaporation, potentially reducing thermal losses and improving energy conversion efficiency. In this Review, we discuss the development of the key components for achieving high-performance evaporation, including solar absorbers, evaporation structures, thermal insulators and thermal concentrators, and discuss how they improve the performance of the solar-driven interfacial evaporation system. We describe the possibilities for applying this efficient solar-driven interfacial evaporation process for energy conversion applications. The exciting opportunities and challenges in both fundamental research and practical implementation of the solar-driven interfacial evaporation process are also discussed. The thermal properties of solar energy can be exploited for many applications, including evaporation. Tao et al. review recent developments in the field of solar-driven interfacial evaporation, which have enabled higher-performance structures by localizing energy conversion to the air/liquid interface.

Journal ArticleDOI
26 Jul 2018-Cell
TL;DR: MAGIC (Markov affinity-based graph imputation of cells), as presented in this paper, shares information across similar cells via data diffusion to denoise the cell count matrix and fill in missing transcripts.
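The data-diffusion idea can be sketched in a few lines of NumPy: build cell-to-cell affinities, row-normalize them into a Markov transition matrix, raise it to a small power t, and use it to smooth the counts matrix. The Gaussian kernel and the default t below are illustrative choices, not the published parameter settings or preprocessing pipeline:

```python
import numpy as np

def magic_like_impute(counts, sigma=30.0, t=3):
    """Diffusion-based denoising of a cells x genes count matrix (illustrative sketch)."""
    # pairwise squared Euclidean distances between cells
    sq = ((counts[:, None, :] - counts[None, :, :]) ** 2).sum(-1)
    affinity = np.exp(-sq / (2 * sigma ** 2))                   # Gaussian kernel affinities
    markov = affinity / affinity.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    diffusion = np.linalg.matrix_power(markov, t)               # t-step data diffusion
    return diffusion @ counts                                   # share information across similar cells
```

Each imputed expression vector is a weighted average over transcriptomically similar cells, which is what fills in dropout zeros while preserving the overall structure of the data.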

Journal ArticleDOI
TL;DR: In this article, an updated physical model to simulate the formation and evolution of galaxies in cosmological, large-scale gravity+magnetohydrodynamical simulations with the moving mesh code AREPO is introduced.
Abstract: We introduce an updated physical model to simulate the formation and evolution of galaxies in cosmological, large-scale gravity+magnetohydrodynamical simulations with the moving mesh code AREPO. The overall framework builds upon the successes of the Illustris galaxy formation model, and includes prescriptions for star formation, stellar evolution, chemical enrichment, primordial and metal-line cooling of the gas, stellar feedback with galactic outflows, and black hole formation, growth and multi-mode feedback. In this paper we give a comprehensive description of the physical and numerical advances which form the core of the IllustrisTNG (The Next Generation) framework. We focus on the revised implementation of the galactic winds, of which we modify the directionality, velocity, thermal content, and energy scalings, and explore its effects on the galaxy population. As described in earlier works, the model also includes a new black hole driven kinetic feedback at low accretion rates, magnetohydrodynamics, and improvements to the numerical scheme. Using a suite of (25 Mpc h^{-1})^3 cosmological boxes we assess the outcome of the new model at our fiducial resolution. The presence of a self-consistently amplified magnetic field is shown to have an important impact on the stellar content of 10^{12} M_sun haloes and above. Finally, we demonstrate that the new galactic winds promise to solve key problems identified in Illustris in matching observational constraints and affecting the stellar content and sizes of the low mass end of the galaxy population.

Book ChapterDOI
08 Sep 2018
TL;DR: This paper proposes AutoML for Model Compression (AMC), which leverages reinforcement learning to efficiently sample the design space and improve model compression quality, achieving state-of-the-art compression results in a fully automated way without any human effort.
Abstract: Model compression is an effective technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted features and require domain experts to explore the large design space trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC) which leverages reinforcement learning to efficiently sample the design space and can improve the model compression quality. We achieved state-of-the-art model compression results in a fully automated way without any human efforts. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the hand-crafted model compression method for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet-V1 and achieved a speedup of 1.53× on the GPU (Titan Xp) and 1.95× on an Android phone (Google Pixel 1), with negligible loss of accuracy.
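What the reinforcement-learning agent ultimately controls is a per-layer sparsity ratio; applying that ratio is an ordinary channel-pruning step. A sketch of that pruning primitive only (magnitude-based channel selection is an assumption here; this is not the agent, its reward, or the fine-tuning loop):

```python
import torch

def prune_channels(conv_weight, sparsity):
    """Zero out the output channels of a conv layer with the smallest L1 norms.

    conv_weight: tensor of shape (out_channels, in_channels, kH, kW)
    sparsity: fraction of output channels to remove, e.g. as chosen by a search agent per layer.
    """
    out_channels = conv_weight.shape[0]
    n_prune = int(out_channels * sparsity)
    if n_prune == 0:
        return conv_weight
    norms = conv_weight.abs().sum(dim=(1, 2, 3))      # L1 norm per output channel
    drop = torch.argsort(norms)[:n_prune]             # weakest channels
    pruned = conv_weight.clone()
    pruned[drop] = 0.0
    return pruned
```

The automated part of the pipeline is choosing `sparsity` layer by layer against an accuracy-plus-resource signal, rather than the pruning mechanics themselves.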

Journal ArticleDOI
TL;DR: In this article, a deep learning model based on the Gated Recurrent Unit (GRU) is proposed to exploit missing values and their missing patterns for effective imputation and improved prediction performance.
Abstract: Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
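The ingredient that distinguishes GRU-D from a plain GRU is a learned decay driven by the masking vector and the time interval since each feature was last observed, which pulls stale values toward the empirical mean before they enter the recurrent cell. A minimal PyTorch sketch of that input-decay step (the layer names and the choice to feed value-plus-mask into a standard GRUCell are illustrative assumptions, not the authors' full model, which also decays the hidden state):

```python
import torch
import torch.nn as nn

class GRUDInputDecay(nn.Module):
    """Decay the last observed value toward the feature mean as the observation gap grows."""
    def __init__(self, n_features, hidden_size):
        super().__init__()
        self.gamma_x = nn.Linear(n_features, n_features)            # decay rates from time gaps
        self.gru_cell = nn.GRUCell(2 * n_features, hidden_size)     # decayed values + mask as input

    def forward(self, x_t, m_t, delta_t, x_last, x_mean, h):
        # x_t: current values (missing entries arbitrary), m_t: 1 if observed else 0,
        # delta_t: time since each feature was last observed, x_last: last observed values
        gamma = torch.exp(-torch.relu(self.gamma_x(delta_t)))        # decay factor in (0, 1]
        x_hat = m_t * x_t + (1 - m_t) * (gamma * x_last + (1 - gamma) * x_mean)
        h = self.gru_cell(torch.cat([x_hat, m_t], dim=-1), h)
        return h, x_hat
```

Feeding the mask alongside the decayed values is how the informative-missingness signal reaches the classifier rather than being discarded by imputation.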

Journal ArticleDOI
TL;DR: A comprehensive genetic analysis of 304 primary DLBCLs identified low-frequency alterations, captured recurrent mutations, somatic copy number alterations, and structural variants, and defined coordinate signatures in patients with available outcome data to provide a roadmap for an actionable DLBCL classification.
Abstract: Diffuse large B cell lymphoma (DLBCL), the most common lymphoid malignancy in adults, is a clinically and genetically heterogeneous disease that is further classified into transcriptionally defined activated B cell (ABC) and germinal center B cell (GCB) subtypes. We carried out a comprehensive genetic analysis of 304 primary DLBCLs and identified low-frequency alterations, captured recurrent mutations, somatic copy number alterations, and structural variants, and defined coordinate signatures in patients with available outcome data. We integrated these genetic drivers using consensus clustering and identified five robust DLBCL subsets, including a previously unrecognized group of low-risk ABC-DLBCLs of extrafollicular/marginal zone origin; two distinct subsets of GCB-DLBCLs with different outcomes and targetable alterations; and an ABC/GCB-independent group with biallelic inactivation of TP53, CDKN2A loss, and associated genomic instability. The genetic features of the newly characterized subsets, their mutational signatures, and the temporal ordering of identified alterations provide new insights into DLBCL pathogenesis. The coordinate genetic signatures also predict outcome independent of the clinical International Prognostic Index and suggest new combination treatment strategies. More broadly, our results provide a roadmap for an actionable DLBCL classification.

Book ChapterDOI
01 Dec 2018
TL;DR: Preliminary performance data on a subset of TPC-H are presented, showing that the system the authors are building, C-Store, is substantially faster than popular commercial products.
Abstract: This paper presents the design of a read-optimized relational DBMS that contrasts sharply with most current systems, which are write-optimized. Among the many differences in its design are: storage of data by column rather than by row, careful coding and packing of objects into storage including main memory during query processing, storing an overlapping collection of column-oriented projections, rather than the current fare of tables and indexes, a non-traditional implementation of transactions which includes high availability and snapshot isolation for read-only transactions, and the extensive use of bitmap indexes to complement B-tree structures. We present preliminary performance data on a subset of TPC-H and show that the system we are building, C-Store, is substantially faster than popular commercial products. Hence, the architecture looks very encouraging.
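The core storage argument, that a read-mostly scan should touch only the columns it needs, can be illustrated with a toy example (pure illustration in Python; this is not C-Store's on-disk format, compression, or projection machinery):

```python
# Row store: one record object per tuple; a scan drags every attribute through memory.
rows = [
    {"id": 1, "price": 9.5, "region": "EU"},
    {"id": 2, "price": 3.0, "region": "US"},
    {"id": 3, "price": 7.2, "region": "EU"},
]

# Column store: one contiguous array per attribute.
columns = {
    "id":     [1, 2, 3],
    "price":  [9.5, 3.0, 7.2],
    "region": ["EU", "US", "EU"],
}

# SELECT sum(price) WHERE region = 'EU' reads only the two relevant columns,
# never materializing the unused attributes of each row.
total = sum(p for p, r in zip(columns["price"], columns["region"]) if r == "EU")
print(total)  # 16.7
```

Column-at-a-time layout is also what makes the dense packing and bitmap indexing described in the abstract effective, since each column's values are homogeneous and stored contiguously.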