scispace - formally typeset

Showing papers by "University of Illinois at Urbana–Champaign published in 2018"


BookDOI
08 Mar 2018
TL;DR: In this article, the authors describe how phase transitions occur in principle and in practice, covering the role of models, the Ising model, Landau theory, scaling, and the renormalisation group.
Abstract: Contents:
* Introduction: Scaling and Dimensional Analysis; Power Laws in Statistical Physics; Some Important Questions; Historical Development; Exercises
* How Phase Transitions Occur In Principle: Review of Statistical Mechanics; The Thermodynamic Limit; Phase Boundaries and Phase Transitions; The Role of Models; The Ising Model; Analytic Properties of the Ising Model; Symmetry Properties of the Ising Model; Existence of Phase Transitions; Spontaneous Symmetry Breaking; Ergodicity Breaking; Fluids; Lattice Gases; Equivalence in Statistical Mechanics; Miscellaneous Remarks; Exercises
* How Phase Transitions Occur In Practice: Ad Hoc Solution Methods; The Transfer Matrix; Phase Transitions; Thermodynamic Properties; Spatial Correlations; Low Temperature Expansion; Mean Field Theory; Exercises
* Critical Phenomena in Fluids: Thermodynamics; Two-Phase Coexistence; Vicinity of the Critical Point; Van der Waals Equation; Spatial Correlations; Measurement of Critical Exponents; Exercises
* Landau Theory: Order Parameters; Common Features of Mean Field Theories; Phenomenological Landau Theory; Continuous Phase Transitions; Inhomogeneous Systems; Correlation Functions; Exercises
* Fluctuations and the Breakdown of Landau Theory: Breakdown of Microscopic Landau Theory; Breakdown of Phenomenological Landau Theory; The Gaussian Approximation; Critical Exponents; Exercises
* Scaling in Static, Dynamic and Non-Equilibrium Phenomena: The Static-Scaling Hypothesis; Other Forms of the Scaling Hypothesis; Dynamic Critical Phenomena; Scaling in the Approach to Equilibrium; Summary
* The Renormalisation Group: Block Spins; Basic Ideas of the Renormalisation Group; Fixed Points; Origin of Scaling; RG in Differential Form; RG for the Two-Dimensional Ising Model; First-Order Transitions and Non-Critical Properties; RG for the Correlation Function; Crossover Phenomena; Corrections to Scaling; Finite Size Scaling
* Anomalous Dimensions Far From Equilibrium: Introduction; Similarity Solutions; Anomalous Dimensions in Similarity Solutions; Renormalisation; Perturbation Theory for Barenblatt's Equation; Fixed Points; Conclusion
* Continuous Symmetry: Correlation in the Ordered Phase; Kosterlitz-Thouless Transition
* Critical Phenomena Near Four Dimensions: Basic Idea of the Epsilon Expansion; RG for the Gaussian Model; RG Beyond the Gaussian Approximation; Feynman Diagrams; The RG Recursion Relations; Conclusion

2,245 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a Learning without Forgetting method that uses only new-task data to train the network while preserving its original capabilities; the method performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques.
Abstract: When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises when we add new capabilities to a Convolutional Neural Network (CNN) but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques and performs similarly to multitask learning that uses the original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.
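The training signal described above can be sketched as a two-term loss: ordinary cross-entropy on the new task plus a knowledge-distillation term that keeps old-task outputs close to the original network's recorded responses. A minimal NumPy sketch, where the function names, the temperature `T`, and the weight `lam` are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, new_labels, old_logits, old_targets, T=2.0, lam=1.0):
    """Combined Learning-without-Forgetting style objective for one batch.

    new_logits:  network outputs for the new task's classes
    new_labels:  integer labels for the new task
    old_logits:  network outputs for the old task's classes
    old_targets: the *original* network's outputs on the same images,
                 recorded before training starts (no old-task data needed)
    """
    # Standard cross-entropy on the new task.
    p_new = softmax(new_logits)
    ce = -np.mean(np.log(p_new[np.arange(len(new_labels)), new_labels] + 1e-12))
    # Distillation term: temperature-softened cross-entropy against the
    # recorded responses keeps old-task behaviour from drifting.
    p_old = softmax(old_logits, T)
    q_old = softmax(old_targets, T)
    kd = -np.mean(np.sum(q_old * np.log(p_old + 1e-12), axis=-1))
    return ce + lam * kd

rng = np.random.default_rng(0)
logits_new = rng.normal(size=(4, 5))
labels_new = np.array([0, 1, 2, 3])
logits_old = rng.normal(size=(4, 3))
# When the old-task outputs match the recorded targets, the kd term is minimal.
loss = lwf_loss(logits_new, labels_new, logits_old, logits_old)
```

In practice both terms are backpropagated through a shared backbone; only the recorded outputs (`old_targets`), not the old training data, are needed.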

1,864 citations


Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Fausto Acernese3  +1235 moreInstitutions (132)
TL;DR: This analysis expands upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars.
Abstract: On 17 August 2017, the LIGO and Virgo observatories made the first direct detection of gravitational waves from the coalescence of a neutron star binary system. The detection of this gravitational-wave signal, GW170817, offers a novel opportunity to directly probe the properties of matter at the extreme conditions found in the interior of these stars. The initial, minimal-assumption analysis of the LIGO and Virgo data placed constraints on the tidal effects of the coalescing bodies, which were then translated to constraints on neutron star radii. Here, we expand upon previous analyses by working under the hypothesis that both bodies were neutron stars that are described by the same equation of state and have spins within the range observed in Galactic binary neutron stars. Our analysis employs two methods: the use of equation-of-state-insensitive relations between various macroscopic properties of the neutron stars and the use of an efficient parametrization of the defining function p(ρ) of the equation of state itself. From the LIGO and Virgo data alone and the first method, we measure the two neutron star radii as R1 = 10.8 (+2.0/−1.7) km for the heavier star and R2 = 10.7 (+2.1/−1.5) km for the lighter star at the 90% credible level. If we additionally require that the equation of state supports neutron stars with masses larger than 1.97 M⊙ as required from electromagnetic observations and employ the equation-of-state parametrization, we further constrain R1 = 11.9 ± 1.4 km and R2 = 11.9 ± 1.4 km at the 90% credible level. Finally, we obtain constraints on p(ρ) at supranuclear densities, with the pressure at twice nuclear saturation density measured at 3.5 (+2.7/−1.7) × 10^34 dyn cm^−2 at the 90% level.

1,595 citations


Journal ArticleDOI
TL;DR: A comprehensive review of the literature in graph embedding can be found in this paper, where the authors introduce the formal definition of graph embeddings as well as the related concepts.
Abstract: Graph is an important data representation which appears in a wide diversity of real-world scenarios. Effective graph analytics provides users a deeper understanding of what is behind the data and thus can benefit many useful applications such as node classification, node recommendation, link prediction, etc. However, most graph analytics methods suffer from high computation and space costs. Graph embedding is an effective yet efficient way to solve the graph analytics problem. It converts the graph data into a low-dimensional space in which the graph structural information and graph properties are maximally preserved. In this survey, we conduct a comprehensive review of the literature in graph embedding. We first introduce the formal definition of graph embedding as well as the related concepts. After that, we propose two taxonomies of graph embedding which correspond to what challenges exist in different graph embedding problem settings and how the existing work addresses these challenges in their solutions. Finally, we summarize the applications that graph embedding enables and suggest four promising future research directions in terms of computation efficiency, problem settings, techniques, and application scenarios.
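As a minimal illustration of converting a graph into a low-dimensional space, the sketch below embeds nodes via a truncated eigendecomposition of the adjacency matrix, one of the simplest matrix-factorization approaches a survey of this kind covers. The toy graph and the function name are hypothetical:

```python
import numpy as np

def embed_graph(adj, dim):
    """Embed nodes of an undirected graph into `dim` dimensions using the
    dominant part of the adjacency spectrum. Nodes with similar
    neighbourhoods map to nearby points in the low-dimensional space."""
    vals, vecs = np.linalg.eigh(adj.astype(float))   # symmetric -> eigh
    order = np.argsort(np.abs(vals))[::-1][:dim]     # keep dominant spectrum
    return vecs[:, order] * np.abs(vals[order]) ** 0.5

# Two triangles joined by a single edge: nodes 0-2 and 3-5 form communities.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
X = embed_graph(A, 2)
# Nodes 0 and 1 (same triangle) land closer together than nodes 0 and 4.
```

Methods in the survey's taxonomy differ mainly in which matrix is factorized (adjacency, Laplacian, proximity) or in replacing factorization with random walks or deep encoders.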

1,502 citations


Journal ArticleDOI
Corinne Le Quéré1, Robbie M. Andrew, Pierre Friedlingstein2, Stephen Sitch2, Judith Hauck3, Julia Pongratz4, Julia Pongratz5, Penelope A. Pickers1, Jan Ivar Korsbakken, Glen P. Peters, Josep G. Canadell6, Almut Arneth7, Vivek K. Arora, Leticia Barbero8, Leticia Barbero9, Ana Bastos4, Laurent Bopp10, Frédéric Chevallier11, Louise Chini12, Philippe Ciais11, Scott C. Doney13, Thanos Gkritzalis14, Daniel S. Goll11, Ian Harris1, Vanessa Haverd6, Forrest M. Hoffman15, Mario Hoppema3, Richard A. Houghton16, George C. Hurtt12, Tatiana Ilyina5, Atul K. Jain17, Truls Johannessen18, Chris D. Jones19, Etsushi Kato, Ralph F. Keeling20, Kees Klein Goldewijk21, Kees Klein Goldewijk22, Peter Landschützer5, Nathalie Lefèvre23, Sebastian Lienert24, Zhu Liu25, Zhu Liu1, Danica Lombardozzi26, Nicolas Metzl23, David R. Munro27, Julia E. M. S. Nabel5, Shin-Ichiro Nakaoka28, Craig Neill29, Craig Neill30, Are Olsen18, T. Ono, Prabir K. Patra31, Anna Peregon11, Wouter Peters32, Wouter Peters33, Philippe Peylin11, Benjamin Pfeil34, Benjamin Pfeil18, Denis Pierrot8, Denis Pierrot9, Benjamin Poulter35, Gregor Rehder36, Laure Resplandy37, Eddy Robertson19, Matthias Rocher11, Christian Rödenbeck5, Ute Schuster2, Jörg Schwinger34, Roland Séférian11, Ingunn Skjelvan34, Tobias Steinhoff38, Adrienne J. Sutton39, Pieter P. Tans39, Hanqin Tian40, Bronte Tilbrook30, Bronte Tilbrook29, Francesco N. Tubiello41, Ingrid T. van der Laan-Luijkx32, Guido R. van der Werf42, Nicolas Viovy11, Anthony P. Walker15, Andy Wiltshire19, Rebecca Wright1, Sönke Zaehle5, Bo Zheng11 
University of East Anglia1, University of Exeter2, Alfred Wegener Institute for Polar and Marine Research3, Ludwig Maximilian University of Munich4, Max Planck Society5, Commonwealth Scientific and Industrial Research Organisation6, Karlsruhe Institute of Technology7, Cooperative Institute for Marine and Atmospheric Studies8, Atlantic Oceanographic and Meteorological Laboratory9, École Normale Supérieure10, Centre national de la recherche scientifique11, University of Maryland, College Park12, University of Virginia13, Flanders Marine Institute14, Oak Ridge National Laboratory15, Woods Hole Research Center16, University of Illinois at Urbana–Champaign17, Geophysical Institute, University of Bergen18, Met Office19, University of California, San Diego20, Netherlands Environmental Assessment Agency21, Utrecht University22, University of Paris23, Oeschger Centre for Climate Change Research24, Tsinghua University25, National Center for Atmospheric Research26, Institute of Arctic and Alpine Research27, National Institute for Environmental Studies28, Cooperative Research Centre29, Hobart Corporation30, Japan Agency for Marine-Earth Science and Technology31, Wageningen University and Research Centre32, University of Groningen33, Bjerknes Centre for Climate Research34, Goddard Space Flight Center35, Leibniz Institute for Baltic Sea Research36, Princeton University37, Leibniz Institute of Marine Sciences38, National Oceanic and Atmospheric Administration39, Auburn University40, Food and Agriculture Organization41, VU University Amsterdam42
TL;DR: In this article, the authors describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties, including emissions from land use and land-use change estimated from land-use data and bookkeeping models.
Abstract: Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere – the "global carbon budget" – is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and methodology to quantify the five major components of the global carbon budget and their uncertainties. Fossil CO2 emissions (EFF) are based on energy statistics and cement production data, while emissions from land use and land-use change (ELUC), mainly deforestation, are based on land use and land-use change data and bookkeeping models. Atmospheric CO2 concentration is measured directly and its growth rate (GATM) is computed from the annual changes in concentration. The ocean CO2 sink (SOCEAN) and terrestrial CO2 sink (SLAND) are estimated with global process models constrained by observations. The resulting carbon budget imbalance (BIM), the difference between the estimated total emissions and the estimated changes in the atmosphere, ocean, and terrestrial biosphere, is a measure of imperfect data and understanding of the contemporary carbon cycle. All uncertainties are reported as ±1σ. For the last decade available (2008–2017), EFF was 9.4±0.5 GtC yr−1, ELUC 1.5±0.7 GtC yr−1, GATM 4.7±0.02 GtC yr−1, SOCEAN 2.4±0.5 GtC yr−1, and SLAND 3.2±0.8 GtC yr−1, with a budget imbalance BIM of 0.5 GtC yr−1 indicating overestimated emissions and/or underestimated sinks. For the year 2017 alone, the growth in EFF was about 1.6 % and emissions increased to 9.9±0.5 GtC yr−1. Also for 2017, ELUC was 1.4±0.7 GtC yr−1, GATM was 4.6±0.2 GtC yr−1, SOCEAN was 2.5±0.5 GtC yr−1, and SLAND was 3.8±0.8 GtC yr−1, with a BIM of 0.3 GtC. The global atmospheric CO2 concentration reached 405.0±0.1 ppm averaged over 2017.
For 2018, preliminary data for the first 6–9 months indicate a renewed growth in EFF of +2.7 % (range of 1.8 % to 3.7 %) based on national emission projections for China, the US, the EU, and India and projections of gross domestic product corrected for recent changes in the carbon intensity of the economy for the rest of the world. The analysis presented here shows that the mean and trend in the five components of the global carbon budget are consistently estimated over the period 1959–2017, but discrepancies of up to 1 GtC yr−1 persist in the representation of semi-decadal variability in CO2 fluxes. A detailed comparison among individual estimates and the introduction of a broad range of observations show (1) no consensus in the mean and trend in land-use change emissions, (2) a persistent low agreement among the different methods on the magnitude of the land CO2 flux in the northern extra-tropics, and (3) an apparent underestimation of the CO2 variability by ocean models, originating outside the tropics. This living data update documents changes in the methods and data sets used in this new global carbon budget and the progress in understanding the global carbon cycle compared with previous publications of this data set (Le Quéré et al., 2018, 2016, 2015a, b, 2014, 2013). All results presented here can be downloaded from https://doi.org/10.18160/GCP-2018.
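The budget identity behind these numbers can be checked with simple arithmetic: the imbalance is total emissions minus the atmospheric growth and the two sinks. Using the rounded decadal values quoted above gives a back-of-the-envelope check, not the paper's computation, which uses unrounded estimates and reports 0.5 GtC yr−1:

```python
# Rounded 2008-2017 decadal means quoted in the abstract, in GtC/yr.
e_ff, e_luc = 9.4, 1.5                   # fossil and land-use-change emissions
g_atm, s_ocean, s_land = 4.7, 2.4, 3.2   # atmospheric growth, ocean and land sinks

# Budget imbalance: total emissions minus all accounted destinations.
b_im = (e_ff + e_luc) - (g_atm + s_ocean + s_land)
print(f"B_IM = {b_im:.1f} GtC/yr")
# Rounded inputs give 0.6; the paper's unrounded estimates give 0.5.
```

A positive imbalance means estimated sources exceed estimated changes in the three reservoirs, i.e. overestimated emissions and/or underestimated sinks.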

1,458 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, Yu et al. propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.
Abstract: Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.
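The "borrow textures from surrounding regions" idea that motivates the model can be illustrated with a toy, non-learned patch match: fill a rectangular hole by copying the known patch whose surrounding context best matches the context around the hole. This is only a sketch of the classical borrowing baseline, not the paper's contextual-attention layer, and all names are illustrative:

```python
import numpy as np

def borrow_fill(img, mask, pad=2):
    """Fill the rectangular masked hole with the known patch whose
    `pad`-wide surrounding frame best matches the frame around the hole:
    information is explicitly copied from a distant spatial location."""
    H, W = img.shape
    ys, xs = np.where(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    hh, ww = y1 - y0, x1 - x0

    def frame(a, i, j):
        # Patch plus context frame, with the patch interior zeroed out
        # so that only the surrounding context is compared.
        block = a[i - pad:i + hh + pad, j - pad:j + ww + pad].copy()
        block[pad:pad + hh, pad:pad + ww] = 0.0
        return block

    target = frame(np.where(mask, 0.0, img), y0, x0)
    best_patch, best_err = None, np.inf
    for i in range(pad, H - hh - pad + 1):
        for j in range(pad, W - ww - pad + 1):
            if mask[i - pad:i + hh + pad, j - pad:j + ww + pad].any():
                continue                     # candidate must be fully known
            err = ((frame(img, i, j) - target) ** 2).sum()
            if err < best_err:
                best_err, best_patch = err, img[i:i + hh, j:j + ww]
    out = img.copy()
    out[y0:y1, x0:x1] = best_patch
    return out

# Demo: a periodic texture with a hole is recovered exactly, because a
# distant patch with matching context carries the missing content.
rng = np.random.default_rng(1)
motif = rng.uniform(size=(4, 4))
truth = np.tile(motif, (6, 6))              # 24x24 periodic texture
img = truth.copy()
mask = np.zeros_like(img, dtype=bool)
mask[8:12, 8:12] = True
img[mask] = 0.0                             # corrupt the hole
filled = borrow_fill(img, mask)
```

The paper's contribution is to make this borrowing differentiable and learned, so it can be combined with synthesis of genuinely novel structure.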

1,397 citations


Proceedings ArticleDOI
12 Mar 2018
TL;DR: Dense upsampling convolution (DUC) is designed to generate pixel-level predictions, capturing and decoding detailed information that is generally missing in bilinear upsampling, and a hybrid dilated convolution (HDC) framework is proposed for the encoding phase.
Abstract: Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset and achieve a state-of-the-art result of 80.1% mIOU on the test set at the time of submission. We have also achieved state-of-the-art results on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at https://github.com/TuSimple/TuSimple-DUC.
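The "gridding issue" and the HDC fix are easy to verify directly: with kernel size 3, stacking dilated layers lets an output pixel see input offsets that are sums of one tap offset per layer. A small sketch in 1-D, where the mixed rates [1, 2, 5] follow the HDC idea of varying dilation rates (the exact rates used in the paper may differ):

```python
import itertools

def reachable_offsets(rates, k=3):
    """Input offsets that can influence one output position after a stack
    of 1-D dilated convolutions with kernel size k and the given dilation
    rates: each offset is a sum of one tap offset per layer."""
    taps = [[d * t for t in range(-(k // 2), k // 2 + 1)] for d in rates]
    return {sum(combo) for combo in itertools.product(*taps)}

plain = reachable_offsets([2, 2, 2])   # repeated dilation rate
hdc   = reachable_offsets([1, 2, 5])   # HDC-style mixed rates
```

Repeated rate-2 layers reach only even offsets, so odd input positions inside the receptive field are never used (the gridding artifact), while the mixed rates cover every offset from −8 to 8.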

1,358 citations




Journal ArticleDOI
TL;DR: A deep learning algorithm similar in spirit to Galerkin methods is proposed, using a deep neural network instead of linear combinations of basis functions; it is implemented for American options in up to 100 dimensions.
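The Galerkin analogy can be made concrete as a loss functional: instead of expanding the solution in a fixed basis, a neural network f(t, x; θ) is trained to drive the PDE residual to zero on randomly sampled points. A schematic sketch in generic notation (not taken from the paper), for a PDE with differential operator L on a domain Ω over times [0, T], boundary data g, and initial data u0:

```latex
J(\theta) =
    \big\| \partial_t f(t,x;\theta) + \mathcal{L} f(t,x;\theta) \big\|^2_{[0,T]\times\Omega}
  + \big\| f(t,x;\theta) - g(t,x) \big\|^2_{[0,T]\times\partial\Omega}
  + \big\| f(0,x;\theta) - u_0(x) \big\|^2_{\Omega}
```

Each squared norm is estimated by Monte Carlo sampling of points in its domain, θ is updated by stochastic gradient descent, and free-boundary conditions such as those of American options can be handled with an analogous penalty term.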

1,290 citations


Journal ArticleDOI
TL;DR: Two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences are presented: the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including estimates of genome completeness and contamination.
Abstract: We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS) standard. The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), covering, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.
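As a rough sketch of how the two headline MIMAG metrics map to draft-quality tiers, the helper below paraphrases the commonly cited thresholds (>90% complete with <5% contamination for high quality, ≥50% complete for medium). The actual checklist includes further criteria, such as rRNA/tRNA gene presence for the high-quality tier, so treat this as an approximation:

```python
def mimag_tier(completeness, contamination):
    """Approximate MIMAG draft-quality tier from percent completeness and
    percent contamination only. The full standard adds further criteria,
    e.g. rRNA/tRNA genes for the high-quality tier."""
    if contamination >= 10:
        return "does not meet MIMAG draft criteria"
    if completeness > 90 and contamination < 5:
        return "high-quality draft"
    if completeness >= 50:
        return "medium-quality draft"
    return "low-quality draft"
```

Reporting these two numbers alongside assembly statistics is what lets genomes from different studies be compared on equal footing.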

1,171 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: CSRNet as discussed by the authors is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger receptive fields and to replace pooling operations.
Abstract: We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven, deep learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger receptive fields and to replace pooling operations. CSRNet is easy to train because of its pure convolutional structure. We demonstrate CSRNet on four datasets (ShanghaiTech dataset, the UCF_CC_50 dataset, the WorldEXPO'10 dataset, and the UCSD dataset) and deliver state-of-the-art performance. On the ShanghaiTech Part_B dataset, CSRNet achieves 47.3% lower Mean Absolute Error (MAE) than the previous state-of-the-art method. We also extend the targeted applications to counting other objects, such as vehicles in the TRANCOS dataset. Results show that CSRNet significantly improves the output quality with 15.4% lower MAE than the previous state-of-the-art approach.
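The density-map formulation that networks like CSRNet regress can be sketched in a few lines: each annotated head contributes a unit-mass Gaussian, so the integral of the map equals the count. The σ value and toy coordinates below are illustrative, not taken from the paper:

```python
import numpy as np

def density_map(points, shape, sigma=4.0):
    """Ground-truth style density map: one unit-mass Gaussian per annotated
    head, so the map sums to the object count - the quantity a density-map
    counting network is trained to reproduce."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    dm = np.zeros(shape)
    for (y, x) in points:
        g = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        dm += g / g.sum()          # normalise each blob to mass 1
    return dm

heads = [(20, 30), (50, 50), (80, 10)]   # hypothetical head annotations
dm = density_map(heads, (100, 100))
count = dm.sum()                          # predicted count = map integral
```

The MAE figures quoted above compare this integral against the annotated count over a test set.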

Journal ArticleDOI
TL;DR: A review of the studies that developed data-driven building energy consumption prediction models, with a particular focus on reviewing the scopes of prediction, the data properties and the data preprocessing methods used, the machine learning algorithms utilized for prediction, and the performance measures used for evaluation is provided in this paper.
Abstract: Energy is the lifeblood of modern societies. In the past decades, the world's energy consumption and associated CO 2 emissions increased rapidly due to the increases in population and comfort demands of people. Building energy consumption prediction is essential for energy planning, management, and conservation. Data-driven models provide a practical approach to energy consumption prediction. This paper offers a review of the studies that developed data-driven building energy consumption prediction models, with a particular focus on reviewing the scopes of prediction, the data properties and the data preprocessing methods used, the machine learning algorithms utilized for prediction, and the performance measures used for evaluation. Based on this review, existing research gaps are identified and future research directions in the area of data-driven building energy consumption prediction are highlighted.
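A minimal end-to-end instance of the workflow this review surveys: synthetic building features, a train/test split, the simplest data-driven model (ordinary least squares), and two of the performance measures commonly reported (MAE and RMSE). All data and feature names are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features: outdoor temperature, occupancy, hour-of-day (scaled).
X = rng.uniform(0, 1, size=(200, 3))
true_w = np.array([12.0, 5.0, 3.0])
y = X @ true_w + 2.0 + rng.normal(0, 0.5, size=200)   # synthetic kWh demand

# Train/test split, then ordinary least squares as the simplest
# data-driven prediction model in the review's taxonomy.
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]
A = np.c_[Xtr, np.ones(len(Xtr))]                     # add intercept column
w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
pred = np.c_[Xte, np.ones(len(Xte))] @ w

# Two performance measures commonly used for evaluation.
mae = np.mean(np.abs(pred - yte))
rmse = np.sqrt(np.mean((pred - yte) ** 2))
```

More expressive models (trees, support vector regression, neural networks) slot into the same split-train-evaluate pipeline, which is why the review organizes studies by data properties, algorithm, and performance measure.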

Journal ArticleDOI
TL;DR: It is shown that AF4 can serve as an improved analytical tool for isolating extracellular vesicles and addressing the complexities of heterogeneous nanoparticle subpopulations, and three nanoparticle subsets demonstrated diverse organ biodistribution patterns, suggesting distinct biological functions.
Abstract: The heterogeneity of exosomal populations has hindered our understanding of their biogenesis, molecular composition, biodistribution and functions. By employing asymmetric flow field-flow fractionation (AF4), we identified two exosome subpopulations (large exosome vesicles, Exo-L, 90–120 nm; small exosome vesicles, Exo-S, 60–80 nm) and discovered an abundant population of non-membranous nanoparticles termed ‘exomeres’ (~35 nm). Exomere proteomic profiling revealed an enrichment in metabolic enzymes and hypoxia, microtubule and coagulation proteins as well as specific pathways, such as glycolysis and mTOR signalling. Exo-S and Exo-L contained proteins involved in endosomal function and secretion pathways, and mitotic spindle and IL-2/STAT5 signalling pathways, respectively. Exo-S, Exo-L and exomeres each had unique N-glycosylation, protein, lipid, DNA and RNA profiles and biophysical properties. These three nanoparticle subsets demonstrated diverse organ biodistribution patterns, suggesting distinct biological functions. This study demonstrates that AF4 can serve as an improved analytical tool for isolating extracellular vesicles and addressing the complexities of heterogeneous nanoparticle subpopulations.

Journal ArticleDOI
Bela Abolfathi1, D. S. Aguado2, Gabriela Aguilar3, Carlos Allende Prieto2  +361 moreInstitutions (94)
TL;DR: This paper describes the second data release from SDSS-IV, the fourth generation of the Sloan Digital Sky Survey, in operation since July 2014, and the 14th release from SDSS overall (making this Data Release Fourteen, or DR14).
Abstract: The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since 2014 July. This paper describes the second data release from this phase, and the 14th from SDSS overall (making this Data Release Fourteen or DR14). This release makes the data taken by SDSS-IV in its first two years of operation (2014-2016 July) public. Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey; the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine-learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from the SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS web site (www.sdss.org) has been updated for this release and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020 and will be followed by SDSS-V.

Journal ArticleDOI
Ali H. Mokdad1, Katherine Ballestros1, Michelle Echko1, Scott D Glenn1, Helen E Olsen1, Erin C Mullany1, Alexander Lee1, Abdur Rahman Khan2, Alireza Ahmadi3, Alireza Ahmadi4, Alize J. Ferrari5, Alize J. Ferrari1, Alize J. Ferrari6, Amir Kasaeian7, Andrea Werdecker, Austin Carter1, Ben Zipkin1, Benn Sartorius8, Benn Sartorius9, Berrin Serdar10, Bryan L. Sykes11, Christopher Troeger1, Christina Fitzmaurice12, Christina Fitzmaurice1, Colin D. Rehm13, Damian Santomauro6, Damian Santomauro1, Damian Santomauro5, Daniel Kim14, Danny V. Colombara1, David C. Schwebel15, Derrick Tsoi1, Dhaval Kolte16, Elaine O. Nsoesie1, Emma Nichols1, Eyal Oren17, Fiona J Charlson1, Fiona J Charlson5, Fiona J Charlson6, George C Patton18, Gregory A. Roth1, H. Dean Hosgood19, Harvey Whiteford1, Harvey Whiteford5, Harvey Whiteford6, Hmwe H Kyu1, Holly E. Erskine6, Holly E. Erskine5, Holly E. Erskine1, Hsiang Huang20, Ira Martopullo1, Jasvinder A. Singh15, Jean B. Nachega21, Jean B. Nachega22, Jean B. Nachega23, Juan Sanabria24, Juan Sanabria25, Kaja Abbas26, Kanyin Ong1, Karen M. Tabb27, Kristopher J. Krohn1, Leslie Cornaby1, Louisa Degenhardt1, Louisa Degenhardt28, Mark Moses1, Maryam S. Farvid29, Max Griswold1, Michael H. Criqui30, Michelle L. Bell31, Minh Nguyen1, Mitch T Wallin32, Mitch T Wallin33, Mojde Mirarefin1, Mostafa Qorbani, Mustafa Z. Younis34, Nancy Fullman1, Patrick Liu1, Paul S Briant1, Philimon Gona35, Rasmus Havmoller3, Ricky Leung36, Ruth W Kimokoti37, Shahrzad Bazargan-Hejazi38, Shahrzad Bazargan-Hejazi39, Simon I. Hay1, Simon I. Hay40, Simon Yadgir1, Stan Biryukov1, Stein Emil Vollset1, Stein Emil Vollset41, Tahiya Alam1, Tahvi Frank1, Talha Farid2, Ted R. Miller42, Ted R. Miller43, Theo Vos1, Till Bärnighausen44, Till Bärnighausen29, Tsegaye Telwelde Gebrehiwot45, Yuichiro Yano46, Ziyad Al-Aly47, Alem Mehari48, Alexis J. Handal49, Amit Kandel50, Ben Anderson51, Brian J. Biroscak52, Brian J. Biroscak31, Dariush Mozaffarian53, E. Ray Dorsey54, Eric L. 
Ding29, Eun-Kee Park55, Gregory R. Wagner29, Guoqing Hu56, Honglei Chen57, Jacob E. Sunshine51, Jagdish Khubchandani58, Janet L Leasher59, Janni Leung51, Janni Leung5, Joshua A. Salomon29, Jürgen Unützer51, Leah E. Cahill29, Leah E. Cahill60, Leslie T. Cooper61, Masako Horino, Michael Brauer1, Michael Brauer62, Nicholas J K Breitborde63, Peter J. Hotez64, Roman Topor-Madry65, Roman Topor-Madry66, Samir Soneji67, Saverio Stranges68, Spencer L. James1, Stephen M. Amrock69, Sudha Jayaraman70, Tejas V. Patel, Tomi Akinyemiju15, Vegard Skirbekk41, Vegard Skirbekk71, Yohannes Kinfu72, Zulfiqar A Bhutta73, Jost B. Jonas44, Christopher J L Murray1 
Institute for Health Metrics and Evaluation1, University of Louisville2, Karolinska Institutet3, Kermanshah University of Medical Sciences4, University of Queensland5, Centre for Mental Health6, Tehran University of Medical Sciences7, University of KwaZulu-Natal8, South African Medical Research Council9, University of Colorado Boulder10, University of California, Irvine11, Fred Hutchinson Cancer Research Center12, Montefiore Medical Center13, Northeastern University14, University of Alabama at Birmingham15, Brown University16, San Diego State University17, University of Melbourne18, Albert Einstein College of Medicine19, Cambridge Health Alliance20, Johns Hopkins University21, University of Pittsburgh22, University of Cape Town23, Case Western Reserve University24, Marshall University25, University of London26, University of Illinois at Urbana–Champaign27, National Drug and Alcohol Research Centre28, Harvard University29, University of California, San Diego30, Yale University31, Veterans Health Administration32, Georgetown University33, Jackson State University34, University of Massachusetts Boston35, State University of New York System36, Simmons College37, University of California, Los Angeles38, Charles R. Drew University of Medicine and Science39, University of Oxford40, Norwegian Institute of Public Health41, Curtin University42, Pacific Institute43, Heidelberg University44, Jimma University45, Northwestern University46, Washington University in St. 
Louis47, Howard University48, University of New Mexico49, University at Buffalo50, University of Washington51, University of South Florida52, Tufts University53, University of Rochester Medical Center54, Kosin University55, Central South University56, Michigan State University57, Ball State University58, Nova Southeastern University59, Dalhousie University60, Mayo Clinic61, University of British Columbia62, Ohio State University63, Baylor University64, Jagiellonian University Medical College65, Wrocław Medical University66, Dartmouth College67, University of Western Ontario68, Oregon Health & Science University69, Virginia Commonwealth University70, Columbia University71, University of Canberra72, Aga Khan University73
10 Apr 2018-JAMA
TL;DR: There are wide differences in the burden of disease at the state level and specific diseases and risk factors, such as drug use disorders, high BMI, poor diet, high fasting plasma glucose level, and alcohol use disorders are increasing and warrant increased attention.
Abstract: Introduction Several studies have measured health outcomes in the United States, but none have provided a comprehensive assessment of patterns of health by state. Objective To use the results of the Global Burden of Disease Study (GBD) to report trends in the burden of diseases, injuries, and risk factors at the state level from 1990 to 2016. Design and Setting A systematic analysis of published studies and available data sources estimates the burden of disease by age, sex, geography, and year. Main Outcomes and Measures Prevalence, incidence, mortality, life expectancy, healthy life expectancy (HALE), years of life lost (YLLs) due to premature mortality, years lived with disability (YLDs), and disability-adjusted life-years (DALYs) for 333 causes and 84 risk factors with 95% uncertainty intervals (UIs) were computed. Results Between 1990 and 2016, overall death rates in the United States declined from 745.2 (95% UI, 740.6 to 749.8) per 100 000 persons to 578.0 (95% UI, 569.4 to 587.1) per 100 000 persons. The probability of death among adults aged 20 to 55 years declined in 31 states and Washington, DC from 1990 to 2016. In 2016, Hawaii had the highest life expectancy at birth (81.3 years) and Mississippi had the lowest (74.7 years), a 6.6-year difference. Minnesota had the highest HALE at birth (70.3 years), and West Virginia had the lowest (63.8 years), a 6.5-year difference. The leading causes of DALYs in the United States for 1990 and 2016 were ischemic heart disease and lung cancer, while the third leading cause in 1990 was low back pain, and the third leading cause in 2016 was chronic obstructive pulmonary disease. Opioid use disorders moved from the 11th leading cause of DALYs in 1990 to the 7th leading cause in 2016, representing a 74.5% (95% UI, 42.8% to 93.9%) change. 
In 2016, each of the following 6 risks individually accounted for more than 5% of risk-attributable DALYs: tobacco consumption, high body mass index (BMI), poor diet, alcohol and drug use, high fasting plasma glucose, and high blood pressure. Across all US states, the top risk factors in terms of attributable DALYs were due to 1 of the 3 following causes: tobacco consumption (32 states), high BMI (10 states), or alcohol and drug use (8 states). Conclusions and Relevance There are wide differences in the burden of disease at the state level. Specific diseases and risk factors, such as drug use disorders, high BMI, poor diet, high fasting plasma glucose level, and alcohol use disorders are increasing and warrant increased attention. These data can be used to inform national health priorities for research, clinical care, and policy.
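The summary measures relate by the standard GBD identity DALY = YLL + YLD; a minimal sketch with purely illustrative (non-GBD) numbers:

```python
def yll(age_at_death, ref_life_expectancy):
    """Years of life lost: remaining reference life expectancy at death."""
    return max(ref_life_expectancy - age_at_death, 0.0)

def yld(years_lived, disability_weight):
    """Years lived with disability, weighted by severity in [0, 1]."""
    return years_lived * disability_weight

def daly(ylls, ylds):
    """Disability-adjusted life-years: fatal plus non-fatal burden."""
    return ylls + ylds

# Illustrative (non-GBD) numbers: death at age 40 against an 86-year
# reference life table, after 10 years lived at disability weight 0.2.
print(daly(yll(40, 86), yld(10, 0.2)))  # → 48.0
```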

Proceedings Article
12 Feb 2018
TL;DR: The experiments show that the best discovered activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, which is named Swish, tends to work better than ReLU on deeper models across a number of challenging datasets.
Abstract: The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various hand-designed alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose to leverage automatic search techniques to discover new activation functions. Using a combination of exhaustive and reinforcement learning-based search, we discover multiple novel activation functions. We verify the effectiveness of the searches by conducting an empirical evaluation with the best discovered activation function. Our experiments show that the best discovered activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, which we name Swish, tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNet-A and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network.
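Swish is trivial to implement; a minimal NumPy sketch (not the authors' code):

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: f(x) = x * sigmoid(beta * x).

    With beta = 1 this is also known as SiLU; as beta grows it
    approaches ReLU, and with beta = 0 it is the linear map x / 2.
    """
    return x * (1.0 / (1.0 + np.exp(-beta * x)))

x = np.array([-2.0, 0.0, 2.0])
print(swish(x))          # smooth and non-monotonic near the origin
print(swish(x, beta=0))  # equals x / 2
```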

Journal ArticleDOI
TL;DR: Deep learning has emerged as a powerful machine learning technique that learns multiple layers of representations or features of the data and produces state-of-the-art prediction results, and has also become popular in sentiment analysis in recent years.
Abstract: Deep learning has emerged as a powerful machine learning technique that learns multiple layers of representations or features of the data and produces state-of-the-art prediction results. Along with its success in many other application domains, deep learning has also become popular in sentiment analysis in recent years. This paper first gives an overview of deep learning and then provides a comprehensive survey of its current applications in sentiment analysis.

Posted Content
TL;DR: The proposed gated convolution solves the issue of vanilla convolution, which treats all input pixels as valid, and generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers.
Abstract: We present a generative image inpainting system to complete images with free-form mask and guidance. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shape, global and local GANs designed for a single rectangular mask are not applicable. Thus, we also present a patch-based GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminator on dense image patches. SN-PatchGAN is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps user quickly remove distracting objects, modify image layouts, clear watermarks and edit faces. Code, demo and models are available at: this https URL
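The gating mechanism can be sketched with 1x1 kernels (the paper uses full spatial convolutions; the shapes and the tanh feature activation here are illustrative assumptions):

```python
import numpy as np

def gated_conv(x, w_feat, w_gate, phi=np.tanh):
    """Gated convolution sketch, restricted to 1x1 kernels for brevity.

    Two parallel convolutions over the same input: one produces
    features, the other a soft gate in (0, 1) applied per channel and
    per spatial location -- a learnable generalization of the hard 0/1
    validity mask used by partial convolution.
    Shapes: x is (H, W, C_in); w_feat and w_gate are (C_in, C_out).
    """
    feat = x @ w_feat                            # feature branch
    gate = 1.0 / (1.0 + np.exp(-(x @ w_gate)))   # soft mask in (0, 1)
    return phi(feat) * gate

rng = np.random.default_rng(0)
out = gated_conv(rng.normal(size=(8, 8, 3)),
                 rng.normal(size=(3, 16)),
                 rng.normal(size=(3, 16)))
print(out.shape)  # (8, 8, 16)
```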

Journal ArticleDOI
TL;DR: This Review presents the main principles of operation and representative basic and clinical science applications of quantitative phase imaging, and aims to provide a critical and objective overview of this dynamic research field.
Abstract: Quantitative phase imaging (QPI) has emerged as a valuable method for investigating cells and tissues. QPI operates on unlabelled specimens and, as such, is complementary to established fluorescence microscopy, exhibiting lower phototoxicity and no photobleaching. As the images represent quantitative maps of optical path length delays introduced by the specimen, QPI provides an objective measure of morphology and dynamics, free of variability due to contrast agents. Owing to the tremendous progress witnessed especially in the past 10–15 years, a number of technologies have become sufficiently reliable and translated to biomedical laboratories. Commercialization efforts are under way and, as a result, the QPI field is now transitioning from a technology-development-driven to an application-focused field. In this Review, we aim to provide a critical and objective overview of this dynamic research field by presenting the scientific context, main principles of operation and current biomedical applications.
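The "optical path length delay" that QPI maps can be illustrated with the textbook phase relation (the numbers below are illustrative, not from the Review):

```python
import numpy as np

def phase_delay(thickness_um, n_specimen, n_medium, wavelength_um):
    """Phase shift from an optical path length difference:
    phi = (2 * pi / lambda) * (n_specimen - n_medium) * thickness.
    This is the standard thin-specimen relation, stated here for a
    uniform refractive index rather than the depth integral."""
    return 2 * np.pi * (n_specimen - n_medium) * thickness_um / wavelength_um

# Illustrative values: a 5 um cell (n ~ 1.38) in water (n = 1.33),
# imaged at a 550 nm wavelength.
print(phase_delay(5.0, 1.38, 1.33, 0.55))  # ~2.86 rad
```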

Posted Content
TL;DR: This work proposes a Criss-Cross Network (CCNet) for obtaining contextual information in a more effective and efficient way and achieves the mIoU score of 81.4 and 45.22 on Cityscapes test set and ADE20K validation set, respectively, which are the new state-of-the-art results.
Abstract: Contextual information is vital in visual understanding problems, such as semantic segmentation and object detection. We propose a Criss-Cross Network (CCNet) for obtaining full-image contextual information in a very effective and efficient way. Concretely, for each pixel, a novel criss-cross attention module harvests the contextual information of all the pixels on its criss-cross path. By taking a further recurrent operation, each pixel can finally capture the full-image dependencies. Besides, a category consistent loss is proposed to enforce the criss-cross attention module to produce more discriminative features. Overall, CCNet has the following merits: 1) GPU memory friendly. Compared with the non-local block, the proposed recurrent criss-cross attention module requires 11x less GPU memory. 2) High computational efficiency. The recurrent criss-cross attention reduces the FLOPs of the non-local block by about 85%. 3) State-of-the-art performance. We conduct extensive experiments on the semantic segmentation benchmarks Cityscapes and ADE20K, the human parsing benchmark LIP, the instance segmentation benchmark COCO, and the video segmentation benchmark CamVid. In particular, our CCNet achieves mIoU scores of 81.9%, 45.76% and 55.47% on the Cityscapes test set, the ADE20K validation set and the LIP validation set respectively, which are new state-of-the-art results. The source codes are available at \url{this https URL}.
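The efficiency argument can be made concrete by counting attention affinities; a back-of-the-envelope sketch (the 64x64 feature-map size is an arbitrary illustration, and the paper's 11x memory and 85% FLOPs figures also reflect implementation details not modeled here):

```python
def attention_costs(h, w):
    """Count pairwise affinity entries computed per image.

    Non-local attention relates every pixel to every other pixel,
    giving (H*W)^2 entries. Criss-cross attention relates each pixel
    only to the H + W - 1 pixels sharing its row or column.
    """
    n = h * w
    return n * n, n * (h + w - 1)

full, criss = attention_costs(64, 64)
print(full // criss)  # non-local computes ~32x more affinities here
```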

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, M. R. Abernathy3  +1135 moreInstitutions (139)
TL;DR: In this article, the authors present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves.
Abstract: We present possible observing scenarios for the Advanced LIGO, Advanced Virgo and KAGRA gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves. We estimate the sensitivity of the network to transient gravitational-wave signals, and study the capability of the network to determine the sky location of the source. We report our findings for gravitational-wave transients, with particular focus on gravitational-wave signals from the inspiral of binary neutron star systems, which are the most promising targets for multi-messenger astronomy. The ability to localize the sources of the detected signals depends on the geographical distribution of the detectors and their relative sensitivity, and 90% credible regions can be as large as thousands of square degrees when only two sensitive detectors are operational. Determining the sky position of a significant fraction of detected signals to areas of 5–20 deg² requires at least three detectors of sensitivity within a factor of ∼2 of each other and with a broad frequency bandwidth. When all detectors, including KAGRA and the third LIGO detector in India, reach design sensitivity, a significant fraction of gravitational-wave signals will be localized to a few square degrees by gravitational-wave observations alone.

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting is presented, which exploits redundancies in large deep networks to free up parameters that can then be employed to learn new tasks.
Abstract: This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially "pack" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task.
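The pruning step that "packs" tasks can be sketched as follows (a sketch of the idea, not the authors' code; in the actual method the released weights are zeroed and the kept ones retrained before moving on):

```python
import numpy as np

def prune_and_freeze(weights, free_mask, keep_frac=0.5):
    """One round of PackNet-style iterative pruning.

    Among the currently free weights, freeze the largest-magnitude
    fraction for the task just trained and release the rest for
    future tasks. Ties at the threshold may keep slightly more than
    keep_frac. Returns (mask frozen for this task, mask still free).
    """
    free_vals = np.sort(np.abs(weights[free_mask]))[::-1]
    k = max(int(len(free_vals) * keep_frac), 1)
    thresh = free_vals[k - 1]
    task_mask = free_mask & (np.abs(weights) >= thresh)
    new_free_mask = free_mask & ~task_mask
    return task_mask, new_free_mask

w = np.array([0.1, -0.9, 0.5, 0.2])
task, free = prune_and_freeze(w, np.ones(4, dtype=bool))
print(task, free)  # the two largest-magnitude weights are frozen
```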

Journal ArticleDOI
TL;DR: UWBG semiconductor materials, such as high Al-content AlGaN, diamond and Ga2O3, have advanced in maturity to the point where realizing some of their tantalizing advantages is a relatively near-term possibility.
Abstract: J. Y. Tsao,* S. Chowdhury, M. A. Hollis,* D. Jena, N. M. Johnson, K. A. Jones, R. J. Kaplar,* S. Rajan, C. G. Van de Walle, E. Bellotti, C. L. Chua, R. Collazo, M. E. Coltrin, J. A. Cooper, K. R. Evans, S. Graham, T. A. Grotjohn, E. R. Heller, M. Higashiwaki, M. S. Islam, P. W. Juodawlkis, M. A. Khan, A. D. Koehler, J. H. Leach, U. K. Mishra, R. J. Nemanich, R. C. N. Pilawa-Podgurski, J. B. Shealy, Z. Sitar, M. J. Tadjer, A. F. Witulski, M. Wraback, and J. A. Simmons

Journal ArticleDOI
Adam P. Arkin1, Adam P. Arkin2, Robert W. Cottingham3, Christopher S. Henry4, Nomi L. Harris1, Rick Stevens5, Sergei Maslov6, Paramvir S. Dehal1, Doreen Ware7, Fernando Perez, Shane Canon1, Michael W. Sneddon1, Matthew L. Henderson1, William J. Riehl1, Dan Murphy-Olson4, Stephen Y. Chan1, Roy T. Kamimura1, Sunita Kumari7, Meghan M Drake3, Thomas Brettin4, Elizabeth M. Glass4, Dylan Chivian1, Dan Gunter1, David J. Weston3, Benjamin H. Allen3, Jason K. Baumohl1, Aaron A. Best8, Benjamin P. Bowen1, Steven E. Brenner2, Christopher Bun4, John-Marc Chandonia1, Jer Ming Chia7, R. L. Colasanti4, Neal Conrad4, James J. Davis4, Brian H. Davison3, Matthew DeJongh8, Scott Devoid4, Emily M. Dietrich4, Inna Dubchak1, Janaka N. Edirisinghe5, Janaka N. Edirisinghe4, Gang Fang9, José P. Faria4, Paul M. Frybarger4, Wolfgang Gerlach4, Mark Gerstein9, Annette Greiner1, James Gurtowski7, Holly L. Haun3, Fei He6, Rashmi Jain1, Rashmi Jain10, Marcin P. Joachimiak1, Kevin P. Keegan4, Shinnosuke Kondo8, Vivek Kumar7, Miriam Land3, Folker Meyer4, Mark Mills3, Pavel S. Novichkov1, Taeyun Oh1, Taeyun Oh10, Gary J. Olsen11, Robert Olson4, Bruce Parrello4, Shiran Pasternak7, Erik Pearson1, Sarah S. Poon1, Gavin Price1, Srividya Ramakrishnan7, Priya Ranjan3, Priya Ranjan12, Pamela C. Ronald10, Pamela C. Ronald1, Michael C. Schatz7, Samuel M. D. Seaver4, Maulik Shukla4, Roman A. Sutormin1, Mustafa H Syed3, James Thomason7, Nathan L. Tintle8, Daifeng Wang9, Fangfang Xia4, Hyunseung Yoo4, Shinjae Yoo6, Dantong Yu6 
TL;DR: Author(s): Arkin, Adam P; Cottingham, Robert W; Henry, Christopher S; Harris, Nomi L; Stevens, Rick L; Maslov, Sergei; Dehal, Paramvir; Ware, Doreen; Perez, Fernando; Canon, Shane; Sneddon, Michael W; Henderson, Matthew L; Riehl, William J; Murphy-Olson, Dan; Chan, Stephen Y; Kamimura, Roy T.
Abstract: Author(s): Arkin, Adam P; Cottingham, Robert W; Henry, Christopher S; Harris, Nomi L; Stevens, Rick L; Maslov, Sergei; Dehal, Paramvir; Ware, Doreen; Perez, Fernando; Canon, Shane; Sneddon, Michael W; Henderson, Matthew L; Riehl, William J; Murphy-Olson, Dan; Chan, Stephen Y; Kamimura, Roy T; Kumari, Sunita; Drake, Meghan M; Brettin, Thomas S; Glass, Elizabeth M; Chivian, Dylan; Gunter, Dan; Weston, David J; Allen, Benjamin H; Baumohl, Jason; Best, Aaron A; Bowen, Ben; Brenner, Steven E; Bun, Christopher C; Chandonia, John-Marc; Chia, Jer-Ming; Colasanti, Ric; Conrad, Neal; Davis, James J; Davison, Brian H; DeJongh, Matthew; Devoid, Scott; Dietrich, Emily; Dubchak, Inna; Edirisinghe, Janaka N; Fang, Gang; Faria, Jose P; Frybarger, Paul M; Gerlach, Wolfgang; Gerstein, Mark; Greiner, Annette; Gurtowski, James; Haun, Holly L; He, Fei; Jain, Rashmi; Joachimiak, Marcin P; Keegan, Kevin P; Kondo, Shinnosuke; Kumar, Vivek; Land, Miriam L; Meyer, Folker; Mills, Marissa; Novichkov, Pavel S; Oh, Taeyun; Olsen, Gary J; Olson, Robert; Parrello, Bruce; Pasternak, Shiran; Pearson, Erik; Poon, Sarah S; Price, Gavin A; Ramakrishnan, Srividya; Ranjan, Priya; Ronald, Pamela C; Schatz, Michael C; Seaver, Samuel MD; Shukla, Maulik; Sutormin, Roman A; Syed, Mustafa H; Thomason, James; Tintle, Nathan L; Wang, Daifeng; Xia, Fangfang; Yoo, Hyunseung; Yoo, Shinjae; Yu, Dantong

Journal ArticleDOI
14 Mar 2018-Nature
TL;DR: This work demonstrates experimentally a member of this predicted class of materials—a quantized quadrupole topological insulator—produced using a gigahertz-frequency reconfigurable microwave circuit, and provides conclusive evidence of a unique form of robustness against disorder and deformation, which is characteristic of higher-order topological insulators.
Abstract: The theory of electric polarization in crystals defines the dipole moment of an insulator in terms of a Berry phase (geometric phase) associated with its electronic ground state. This concept not only solves the long-standing puzzle of how to calculate dipole moments in crystals, but also explains topological band structures in insulators and superconductors, including the quantum anomalous Hall insulator and the quantum spin Hall insulator, as well as quantized adiabatic pumping processes. A recent theoretical study has extended the Berry phase framework to also account for higher electric multipole moments, revealing the existence of higher-order topological phases that have not previously been observed. Here we demonstrate experimentally a member of this predicted class of materials, a quantized quadrupole topological insulator, produced using a gigahertz-frequency reconfigurable microwave circuit. We confirm the non-trivial topological phase using spectroscopic measurements and by identifying corner states that result from the bulk topology. In addition, we test the critical prediction that these corner states are protected by the topology of the bulk, and are not due to surface artefacts, by deforming the edges of the crystal lattice from the topological to the trivial regime. Our results provide conclusive evidence of a unique form of robustness against disorder and deformation, which is characteristic of higher-order topological insulators.

Journal ArticleDOI
TL;DR: With a deeper understanding of the fundamental challenges faced by wearable sensors and of the state of the art in wearable sensor technology, the roadmap becomes clearer for creating the next generation of innovations and breakthroughs.
Abstract: Wearable sensors have recently seen a large increase in both research and commercialization. However, success in wearable sensors has been a mix of both progress and setbacks. Most commercial progress has been in smart adaptation of existing mechanical, electrical and optical methods of measuring the body. This adaptation has involved innovations in how to miniaturize sensing technologies, how to make them conformal and flexible, and in the development of companion software that increases the value of the measured data. However, chemical sensing modalities have experienced greater challenges in commercial adoption, especially for non-invasive chemical sensors. There have also been significant challenges in making fundamental improvements to existing mechanical, electrical, and optical sensing modalities, especially in improving their specificity of detection. Many of these challenges can be understood by appreciating the body's surface (skin) as more of an information barrier than as an information source. With a deeper understanding of the fundamental challenges faced by wearable sensors and of the state of the art in wearable sensor technology, the roadmap becomes clearer for creating the next generation of innovations and breakthroughs.

Proceedings ArticleDOI
18 Jun 2018
TL;DR: A dataset of raw short-exposure low-light images with corresponding long-exposure reference images is introduced to support the development of learning-based pipelines for low-light image processing.
Abstract: Imaging in low light is challenging due to low photon count and low SNR. Short-exposure images suffer from noise, while long exposure can induce blur and is often impractical. A variety of denoising, deblurring, and enhancement techniques have been proposed, but their effectiveness is limited in extreme conditions, such as video-rate imaging at night. To support the development of learning-based pipelines for low-light image processing, we introduce a dataset of raw short-exposure low-light images, with corresponding long-exposure reference images. Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for future work.
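The front end of such a raw-processing pipeline can be sketched in NumPy; the black level of 512 and the RGGB mosaic layout are assumptions typical of 14-bit Sony raw data, not stated in the abstract:

```python
import numpy as np

def pack_bayer(raw, black_level=512, amplification=100.0):
    """Preprocessing sketch in the spirit of the paper's pipeline.

    Subtracts the sensor black level, scales by the desired exposure
    ratio between the short- and long-exposure pair (e.g. 100x or
    300x), and packs the RGGB Bayer mosaic into 4 half-resolution
    channels that a fully-convolutional network can consume.
    """
    x = np.maximum(raw.astype(np.float32) - black_level, 0) * amplification
    h, w = x.shape
    return np.stack([x[0:h:2, 0:w:2],   # R
                     x[0:h:2, 1:w:2],   # G
                     x[1:h:2, 0:w:2],   # G
                     x[1:h:2, 1:w:2]],  # B
                    axis=-1)

packed = pack_bayer(np.full((4, 4), 513, dtype=np.uint16))
print(packed.shape)  # (2, 2, 4)
```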

Journal ArticleDOI
TL;DR: An updated diagnostic algorithm for EoE was developed, with removal of the PPI trial requirement; the evidence suggests that PPIs are better classified as a treatment for esophageal eosinophilia that may be due to EoE than as a diagnostic criterion.

Posted Content
TL;DR: A pipeline for processing low-light images is developed, based on end-to-end training of a fully-convolutional network that operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data.
Abstract: Imaging in low light is challenging due to low photon count and low SNR. Short-exposure images suffer from noise, while long exposure can induce blur and is often impractical. A variety of denoising, deblurring, and enhancement techniques have been proposed, but their effectiveness is limited in extreme conditions, such as video-rate imaging at night. To support the development of learning-based pipelines for low-light image processing, we introduce a dataset of raw short-exposure low-light images, with corresponding long-exposure reference images. Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for future work. The results are shown in the supplementary video at this https URL

Journal ArticleDOI
TL;DR: A comprehensive review of recent advances in the field of oxygen reduction electrocatalysis utilizing nonprecious metal catalysts is presented and suggestions and direction for future research to develop and understand NPM catalysts with enhanced ORR activity are provided.
Abstract: A comprehensive review of recent advances in the field of oxygen reduction electrocatalysis utilizing nonprecious metal (NPM) catalysts is presented. Progress in the synthesis and characterization of pyrolyzed catalysts, based primarily on the transition metals Fe and Co with sources of N and C, is summarized. Several synthetic strategies to improve the catalytic activity for the oxygen reduction reaction (ORR) are highlighted. Recent work to explain the active-site structures and the ORR mechanism on pyrolyzed NPM catalysts is discussed. Additionally, the recent application of Cu-based catalysts for the ORR is reviewed. Suggestions and direction for future research to develop and understand NPM catalysts with enhanced ORR activity are provided.