Showing papers by "Texas A&M University" published in 2019
••
TL;DR: GO-CAM, a new framework for representing gene function that is more expressive than standard GO annotations, has been released, and users can now explore the growing repository of these models.
Abstract: The Gene Ontology resource (GO; http://geneontology.org) provides structured, computable knowledge regarding the functions of genes and gene products. Founded in 1998, GO has become widely adopted in the life sciences, and its contents are under continual improvement, both in quantity and in quality. Here, we report the major developments of the GO resource during the past two years. Each monthly release of the GO resource is now packaged and given a unique identifier (DOI), enabling GO-based analyses on a specific release to be reproduced in the future. The molecular function ontology has been refactored to better represent the overall activities of gene products, with a focus on transcription regulator activities. Quality assurance efforts have been ramped up to address potentially out-of-date or inaccurate annotations. New evidence codes for high-throughput experiments now enable users to filter out annotations obtained from these sources. GO-CAM, a new framework for representing gene function that is more expressive than standard GO annotations, has been released, and users can now explore the growing repository of these models. We also provide the ‘GO ribbon’ widget for visualizing GO annotations to a gene; the widget can be easily embedded in any web page.
2,138 citations
••
TL;DR: In this paper, an improved determination of the Hubble constant (H0) from HST observations of 70 long-period Cepheids in the Large Magellanic Cloud was presented.
Abstract: We present an improved determination of the Hubble constant (H0) from Hubble Space Telescope (HST) observations of 70 long-period Cepheids in the Large Magellanic Cloud. These were obtained with the same WFC3 photometric system used to measure Cepheids in the hosts of Type Ia supernovae. Gyroscopic control of HST was employed to reduce overheads while collecting a large sample of widely-separated Cepheids. The Cepheid Period-Luminosity relation provides a zeropoint-free link with 0.4% precision between the new 1.2% geometric distance to the LMC from Detached Eclipsing Binaries (DEBs) measured by Pietrzynski et al (2019) and the luminosity of SNe Ia. Measurements and analysis of the LMC Cepheids were completed prior to knowledge of the new LMC distance. Combined with a refined calibration of the count-rate linearity of WFC3-IR with 0.1% precision (Riess et al 2019), these three improved elements together reduce the full uncertainty in the LMC geometric calibration of the Cepheid distance ladder from 2.5% to 1.3%. Using only the LMC DEBs to calibrate the ladder we find H0=74.22 +/- 1.82 km/s/Mpc including systematic uncertainties, 3% higher than before for this particular anchor. Combining the LMC DEBs, masers in NGC 4258 and Milky Way parallaxes yields our best estimate: H0 = 74.03 +/- 1.42 km/s/Mpc, including systematics, an uncertainty of 1.91%, which is 15% lower than our best previous result. Removing any one of these anchors changes H0 by < 0.7%. The difference between H0 measured locally and the value inferred from Planck CMB+LCDM is 6.6+/-1.5 km/s/Mpc or 4.4 sigma (P=99.999% for Gaussian errors) in significance, raising the discrepancy beyond a plausible level of chance. We summarize independent tests which show this discrepancy is not readily attributable to an error in any one source or measurement, increasing the odds that it results from a cosmological feature beyond LambdaCDM.
1,924 citations
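As a quick arithmetic check on the figures quoted in the abstract above, the 4.4 sigma significance follows directly from the stated local-versus-Planck difference of 6.6 +/- 1.5 km/s/Mpc, and the quoted Gaussian probability can be recovered with the error function (a sketch using only the numbers given in the abstract):

```python
import math

# Values quoted in the abstract above.
diff     = 6.6   # local minus Planck H0 difference, km/s/Mpc
diff_err = 1.5   # its 1-sigma uncertainty

# Significance of the discrepancy in standard deviations.
n_sigma = diff / diff_err   # 4.4, as quoted

# Two-sided Gaussian probability that a deviation this large is NOT chance.
p_gauss = math.erf(n_sigma / math.sqrt(2))   # ~0.99999, i.e. P = 99.999%

print(f"{n_sigma:.1f} sigma, P = {p_gauss:.5%}")
```

Both numbers reproduce the abstract's "4.4 sigma (P=99.999% for Gaussian errors)" under the stated Gaussian-error assumption.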
••
TL;DR: In this article, the authors present a comprehensive study and evaluation of existing single image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called Realistic Single-Image DEhazing (RESIDE).
Abstract: We present a comprehensive study and evaluation of existing single-image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called REalistic Single-Image DEhazing (RESIDE). RESIDE highlights diverse data sources and image contents, and is divided into five subsets, each serving different training or evaluation purposes. We further provide a rich variety of criteria for dehazing algorithm evaluation, ranging from full-reference metrics to no-reference metrics and to subjective evaluation, and the novel task-driven evaluation. Experiments on RESIDE shed light on the comparisons and limitations of the state-of-the-art dehazing algorithms, and suggest promising future directions.
922 citations
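The abstract above mentions full-reference evaluation metrics; PSNR is the most common of these, and a minimal version is easy to sketch (illustrative only; RESIDE's actual protocol also covers SSIM, no-reference metrics, subjective study, and task-driven evaluation):

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio between a ground-truth haze-free image
    and a dehazed output, both given as flat lists of pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 2x2 grayscale example: small pixel errors give a high (good) PSNR.
gt      = [100, 150, 200, 250]
dehazed = [102, 149, 198, 251]
print(round(psnr(gt, dehazed), 2))
```

Higher is better; a perfect restoration scores infinite PSNR, which is why full-reference metrics only apply to the synthetic subsets where ground truth exists.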
••
TL;DR: The application and evolution of RE-AIM are described, along with lessons learned from its use; current foci include an increased emphasis on cost and on adaptations to programs, and expanded use of qualitative methods to understand “how” and “why” results came about.
Abstract: The RE-AIM planning and evaluation framework was conceptualized two decades ago. As one of the most frequently applied implementation frameworks, RE-AIM has now been cited in over 2,800 publications. This paper describes the application and evolution of RE-AIM as well as lessons learned from its use. RE-AIM has been applied most often in public health and health behavior change research, but increasingly in more diverse content areas and within clinical, community, and corporate settings. We discuss challenges of using RE-AIM while encouraging a more pragmatic use of key dimensions rather than comprehensive applications of all elements. Current foci of RE-AIM include increasing the emphasis on cost and adaptations to programs and expanding the use of qualitative methods to understand "how" and "why" results came about. The framework will continue to evolve to focus on contextual and explanatory factors related to RE-AIM outcomes, package RE-AIM for use by non-researchers, and integrate RE-AIM with other pragmatic and reporting frameworks.
819 citations
••
TL;DR: In this paper, the authors provide a survey covering existing techniques to increase interpretability of machine learning models and discuss crucial issues that the community should consider in future work such as designing user-friendly explanations and developing comprehensive evaluation metrics to further push forward the area of interpretable machine learning.
Abstract: Interpretable machine learning tackles the important problem that humans cannot understand the behaviors of complex machine learning models and how these models arrive at a particular decision. Although many approaches have been proposed, a comprehensive understanding of the achievements and challenges is still lacking. We provide a survey covering existing techniques to increase the interpretability of machine learning models. We also discuss crucial issues that the community should consider in future work such as designing user-friendly explanations and developing comprehensive evaluation metrics to further push forward the area of interpretable machine learning.
759 citations
••
TL;DR: This review summarizes the latest advances in this emerging field of "bio-integrated" technologies in a comprehensive manner that connects fundamental developments in chemistry, material science, and engineering with sensing technologies that have the potential for widespread deployment and societal benefit in human health care.
Abstract: Bio-integrated wearable systems can measure a broad range of biophysical, biochemical, and environmental signals to provide critical insights into overall health status and to quantify human performance. Recent advances in material science, chemical analysis techniques, device designs, and assembly methods form the foundations for a uniquely differentiated type of wearable technology, characterized by noninvasive, intimate integration with the soft, curved, time-dynamic surfaces of the body. This review summarizes the latest advances in this emerging field of “bio-integrated” technologies in a comprehensive manner that connects fundamental developments in chemistry, material science, and engineering with sensing technologies that have the potential for widespread deployment and societal benefit in human health care. An introduction to the chemistries and materials for the active components of these systems contextualizes essential design considerations for sensors and associated platforms that appear in f...
727 citations
••
TL;DR: PRS-CS, a polygenic prediction method that infers posterior effect sizes of single nucleotide polymorphisms (SNPs) using genome-wide association summary statistics and an external linkage disequilibrium (LD) reference panel, is presented.
Abstract: Polygenic risk scores (PRS) have shown promise in predicting human complex traits and diseases. Here, we present PRS-CS, a polygenic prediction method that infers posterior effect sizes of single nucleotide polymorphisms (SNPs) using genome-wide association summary statistics and an external linkage disequilibrium (LD) reference panel. PRS-CS utilizes a high-dimensional Bayesian regression framework, and is distinct from previous work by placing a continuous shrinkage (CS) prior on SNP effect sizes, which is robust to varying genetic architectures, provides substantial computational advantages, and enables multivariate modeling of local LD patterns. Simulation studies using data from the UK Biobank show that PRS-CS outperforms existing methods across a wide range of genetic architectures, especially when the training sample size is large. We apply PRS-CS to predict six common complex diseases and six quantitative traits in the Partners HealthCare Biobank, and further demonstrate the improvement of PRS-CS in prediction accuracy over alternative methods.
681 citations
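Once PRS-CS has produced posterior SNP effect sizes, the polygenic score itself is the standard weighted sum of allele dosages. A minimal sketch with hypothetical effect sizes and genotypes (this is not the PRS-CS software, whose contribution is the Bayesian continuous-shrinkage step that produces the effect sizes):

```python
# Hypothetical posterior effect sizes for 4 SNPs (as PRS-CS would output)
# and genotype dosages (0, 1, or 2 copies of the effect allele) per person.
effect_sizes = [0.12, -0.05, 0.30, 0.08]

genotypes = {
    "person_A": [2, 0, 1, 1],
    "person_B": [0, 2, 0, 2],
}

def polygenic_score(dosages, betas):
    """Standard PRS: sum over SNPs of dosage * posterior effect size."""
    return sum(g * b for g, b in zip(dosages, betas))

scores = {pid: round(polygenic_score(g, effect_sizes), 3)
          for pid, g in genotypes.items()}
print(scores)
```

The quality of the shrinkage prior matters precisely because these raw weighted sums inherit any noise in the per-SNP effect estimates.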
••
TL;DR: This review focuses on the aging-related structural changes and mechanisms at cellular and subcellular levels underlying changes in the individual motor unit: specifically, the perikaryon of the α-motoneuron, its neuromuscular junction(s), and the muscle fibers that it innervates.
Abstract: Sarcopenia is a loss of muscle mass and function in the elderly that reduces mobility, diminishes quality of life, and can lead to fall-related injuries, which require costly hospitalization and ex...
630 citations
••
10 Aug 2019
TL;DR: It is demonstrated that DeblurGAN-V2 has very competitive performance on several popular benchmarks, in terms of deblurring quality (both objective and subjective), as well as efficiency, and is effective for general image restoration tasks too.
Abstract: We present a new end-to-end generative adversarial network (GAN) for single image motion deblurring, named DeblurGAN-V2, which considerably boosts state-of-the-art deblurring performance while being much more flexible and efficient. DeblurGAN-V2 is based on a relativistic conditional GAN with a double-scale discriminator. For the first time, we introduce the Feature Pyramid Network into deblurring, as a core building block in the generator of DeblurGAN-V2. It can flexibly work with a wide range of backbones, to navigate the balance between performance and efficiency. The plug-in of sophisticated backbones (e.g. Inception ResNet v2) can lead to solid state-of-the-art performance. Meanwhile, with light-weight backbones (e.g. MobileNet and its variants), DeblurGAN-V2 becomes 10-100 times faster than the nearest competitors, while maintaining close to state-of-the-art results, implying the option of real-time video deblurring. We demonstrate that DeblurGAN-V2 has very competitive performance on several popular benchmarks, in terms of deblurring quality (both objective and subjective), as well as efficiency. In addition, we show the architecture to be effective for general image restoration tasks too. Our models and codes will be made available upon acceptance.
592 citations
••
TL;DR: It is highlighted that improved understanding of the emission sources, of the physical/chemical processes during haze evolution, and of the interactions with meteorological/climatic changes is necessary to unravel the causes, mechanisms, and trends of haze pollution.
Abstract: Regional severe haze represents an enormous environmental problem in China, influencing air quality, human health, ecosystems, weather, and climate. These extremes are characterized by exceedingly high concentrations of fine particulate matter (smaller than 2.5 µm, or PM2.5) and occur with extensive temporal (on a daily, weekly, to monthly timescale) and spatial (over a million square kilometers) coverage. Although significant advances have been made in field measurements, model simulations, and laboratory experiments for fine PM over recent years, the causes of severe haze formation have yet to be systematically and comprehensively evaluated. This review provides a synthetic synopsis of recent advances in understanding the fundamental mechanisms of severe haze formation in northern China, focusing on emission sources, chemical formation and transformation, and meteorological and climatic conditions. In particular, we highlight the synergetic effects of interactions between anthropogenic emissions and atmospheric processes. Current challenges and future research directions to improve the understanding of severe haze pollution, as well as plausible regulatory implications on a scientific basis, are also discussed.
586 citations
••
25 Jul 2019
TL;DR: In this article, the authors propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search, which keeps the functionality of a neural network while changing its neural architecture, enabling more efficient training during the search.
Abstract: Neural architecture search (NAS) has been proposed to automatically tune deep neural networks, but existing search algorithms, e.g., NASNet, PNAS, usually suffer from expensive computational cost. Network morphism, which keeps the functionality of a neural network while changing its neural architecture, could be helpful for NAS by enabling more efficient training during the search. In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search. The framework develops a neural network kernel and a tree-structured acquisition function optimization algorithm to efficiently explore the search space. Extensive experiments on real-world benchmark datasets have been done to demonstrate the superior performance of the developed framework over the state-of-the-art methods. Moreover, we build an open-source AutoML system based on our method, namely Auto-Keras. The code and documentation are available at https://autokeras.com. The system runs in parallel on CPU and GPU, with an adaptive search strategy for different GPU memory limits.
••
Chinese Academy of Sciences; University of California, Los Angeles; University of Gothenburg; Ohio State University; University of Maryland, College Park; Fudan University; University of California, Santa Barbara; Peking University; Japan Agency for Marine-Earth Science and Technology; University of Tsukuba; Nanjing University; San Diego State University; University of Twente; National Center for Atmospheric Research; Texas A&M University; Pacific Northwest National Laboratory; Chengdu University of Information Technology
TL;DR: The Third Pole (TP) is experiencing rapid warming and is currently in its warmest period of the past 2,000 years; this paper reviews the latest developments in multidisciplinary TP research.
Abstract: The Third Pole (TP) is experiencing rapid warming and is currently in its warmest period in the past 2,000 years. This paper reviews the latest development in multidisciplinary TP research ...
••
TL;DR: Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented and constraints are placed on various two Higgs doublet models.
Abstract: Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at √s = 13 TeV, corresponding to an integrated luminosity of 35.9 fb⁻¹. The combination is based on analyses targeting the five main Higgs boson production mechanisms (gluon fusion, vector boson fusion, and associated production with a W or Z boson, or a top quark-antiquark pair) and the following decay modes: H → γγ, ZZ, WW, ττ, bb, and μμ. Searches for invisible Higgs boson decays are also considered. The best-fit ratio of the signal yield to the standard model expectation is measured to be μ = 1.17 ± 0.10, assuming a Higgs boson mass of 125.09 GeV. Additional results are given for various assumptions on the scaling behavior of the production and decay modes, including generic parametrizations based on ratios of cross sections and branching fractions or couplings. The results are compatible with the standard model predictions in all parametrizations considered. In addition, constraints are placed on various two Higgs doublet models.
••
Veterans Health Administration; University of California, San Diego; University of Montana; J. Craig Venter Institute; Wellcome Trust Sanger Institute; Texas A&M University; University of Illinois at Urbana–Champaign; University of Pittsburgh; Universidad Autónoma de Nuevo León; Columbia University; University of Alberta; Cornell University; Autonomous University of Barcelona; King's College London; French Institute of Health and Medical Research; University of Wisconsin-Madison; Yale University; Université catholique de Louvain
TL;DR: Cytolytic E. faecalis is linked with more severe clinical outcomes and increased mortality in patients with alcoholic hepatitis, and it is shown that bacteriophages can specifically target cytolytic E. faecalis, providing a method for precisely editing the intestinal microbiota.
Abstract: Chronic liver disease due to alcohol-use disorder contributes markedly to the global burden of disease and mortality1–3. Alcoholic hepatitis is a severe and life-threatening form of alcohol-associated liver disease. The gut microbiota promotes ethanol-induced liver disease in mice4, but little is known about the microbial factors that are responsible for this process. Here we identify cytolysin—a two-subunit exotoxin that is secreted by Enterococcus faecalis5,6—as a cause of hepatocyte death and liver injury. Compared with non-alcoholic individuals or patients with alcohol-use disorder, patients with alcoholic hepatitis have increased faecal numbers of E. faecalis. The presence of cytolysin-positive (cytolytic) E. faecalis correlated with the severity of liver disease and with mortality in patients with alcoholic hepatitis. Using humanized mice that were colonized with bacteria from the faeces of patients with alcoholic hepatitis, we investigated the therapeutic effects of bacteriophages that target cytolytic E. faecalis. We found that these bacteriophages decrease cytolysin in the liver and abolish ethanol-induced liver disease in humanized mice. Our findings link cytolytic E. faecalis with more severe clinical outcomes and increased mortality in patients with alcoholic hepatitis. We show that bacteriophages can specifically target cytolytic E. faecalis, which provides a method for precisely editing the intestinal microbiota. A clinical trial with a larger cohort is required to validate the relevance of our findings in humans, and to test whether this therapeutic approach is effective for patients with alcoholic hepatitis. In patients with alcoholic hepatitis, cytolysin-positive Enterococcus faecalis strains are correlated with liver disease severity and increased mortality, and in mouse models these strains can be specifically targeted by bacteriophages.
••
Hobart Corporation; Ocean University of China; National Institute of Oceanography, India; University of Paris; Monash University, Clayton campus; Pohang University of Science and Technology; University of California, Irvine; Pusan National University; University of New South Wales; Chinese Academy of Sciences; Chonnam National University; Utah State University; Pacific Marine Environmental Laboratory; Monash University; University of Exeter; Geophysical Institute, University of Bergen; Bjerknes Centre for Climate Research; Nanjing University of Information Science and Technology; Complutense University of Madrid; Centre national de la recherche scientifique; Barcelona Supercomputing Center; University of California, San Diego; Beijing Normal University; Ulsan National Institute of Science and Technology; Texas A&M University
TL;DR: Advances in the understanding of pantropical interbasin climate interactions, and their implications for both climate prediction and future climate projections, are reviewed.
Abstract: The El Niño-Southern Oscillation (ENSO), which originates in the Pacific, is the strongest and best-known mode of tropical climate variability. Its reach is global, and it can force climate variations of the tropical Atlantic and Indian Oceans by perturbing the global atmospheric circulation. Less appreciated is how the tropical Atlantic and Indian Oceans affect the Pacific. Especially noteworthy is the multidecadal Atlantic warming that began in the late 1990s, because recent research suggests that it has influenced Indo-Pacific climate, the character of the ENSO cycle, and the hiatus in global surface warming. Discovery of these pantropical interactions provides a pathway forward for improving predictions of climate variability in the current climate and for refining projections of future climate under different anthropogenic forcing scenarios.
••
Cooperative Institute for Research in Environmental Sciences; Earth System Research Laboratory; Texas A&M University; Met Office; University of Melbourne; Oeschger Centre for Climate Change Research; Rovira i Virgili University; University of East Anglia; National Research Council; National Oceanography Centre, Southampton; National Center for Atmospheric Research; Spanish National Research Council; Australian National University; University of Reading; Lamont–Doherty Earth Observatory; Hokkaido University; National Institute of Water and Atmospheric Research; University of Giessen; University of Milan; University of South Carolina; University of Toronto; Nicolaus Copernicus University in Toruń; University of Southern Queensland; University of Cape Town; McGill University; Deutscher Wetterdienst; University of Lisbon; Environment Canada; Pacific Marine Environmental Laboratory; Joint Institute for the Study of the Atmosphere and Ocean
TL;DR: The Twentieth Century Reanalysis (20CR), the first ensemble of sub-daily global atmospheric conditions spanning more than 100 years, provides a best estimate of the weather at any given place and time as well as an estimate of its confidence and uncertainty; this paper describes the upgraded version 3 system (NOAA-CIRES-DOE 20CRv3), which ameliorates several issues present in the 20CRv2c release.
Abstract: Historical reanalyses that span more than a century are needed for a wide range of studies, from understanding large‐scale climate trends to diagnosing the impacts of individual historical extreme weather events. The Twentieth Century Reanalysis (20CR) Project is an effort to fill this need. It is supported by the National Oceanic and Atmospheric Administration (NOAA), the Cooperative Institute for Research in Environmental Sciences (CIRES), and the U.S. Department of Energy (DOE), and is facilitated by collaboration with the international Atmospheric Circulation Reconstructions over the Earth initiative. 20CR is the first ensemble of sub‐daily global atmospheric conditions spanning over 100 years. This provides a best estimate of the weather at any given place and time as well as an estimate of its confidence and uncertainty. While extremely useful, version 2c of this dataset (20CRv2c) has several significant issues, including inaccurate estimates of confidence and a global sea level pressure bias in the mid‐19th century. These and other issues can reduce its effectiveness for studies at many spatial and temporal scales. Therefore, the 20CR system underwent a series of developments to generate a significant new version of the reanalysis. The version 3 system (NOAA‐CIRES‐DOE 20CRv3) uses upgraded data assimilation methods including an adaptive inflation algorithm; has a newer, higher‐resolution forecast model that specifies dry air mass; and assimilates a larger set of pressure observations. These changes have improved the ensemble‐based estimates of confidence, removed spin‐up effects in the precipitation fields, and diminished the sea‐level pressure bias. Other improvements include more accurate representations of storm intensity, smaller errors, and large‐scale reductions in model bias. The 20CRv3 system is comprehensively reviewed, focusing on the aspects that have ameliorated issues in 20CRv2c. 
Despite the many improvements, some challenges remain, including a systematic bias in tropical precipitation and time‐varying biases in southern high‐latitude pressure fields.
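One of the upgrades mentioned above is an adaptive inflation algorithm in the data assimilation step. Generic multiplicative covariance inflation, the basic idea such algorithms tune, can be sketched in a few lines (the inflation factor is fixed here for illustration; 20CRv3's algorithm estimates it adaptively from the observations):

```python
def inflate(ensemble, lam):
    """Multiplicative covariance inflation: spread each ensemble member
    away from the ensemble mean by a factor lam, leaving the mean unchanged.
    Used in ensemble data assimilation to counteract underdispersion."""
    mean = sum(ensemble) / len(ensemble)
    return [mean + lam * (x - mean) for x in ensemble]

# e.g. surface pressure (hPa) from a toy 3-member ensemble.
members = [1008.0, 1010.0, 1012.0]
inflated = inflate(members, lam=1.5)
print(inflated)   # mean stays 1010.0, spread grows by 1.5x
```

Keeping the ensemble adequately dispersed is exactly what makes the "estimate of its confidence and uncertainty" in the abstract trustworthy.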
••
TL;DR: An algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
Abstract: We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration. Previous approaches either require intractably dense view sampling or provide little to no guidance for how users should sample views of a scene to reliably render high-quality novel views. Instead, we propose an algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we apply this bound to capture and render views of real world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000X fewer views. We demonstrate our approach's practicality with an augmented reality smart-phone app that guides users to capture input images of a scene and viewers that enable realtime virtual exploration on desktop and mobile platforms.
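Rendering a novel view from a multiplane image comes down to "over" alpha-compositing of the (warped) RGBA planes from back to front. A toy single-channel, single-pixel sketch of that compositing step (the plane values are invented for illustration; the real pipeline composites full warped RGBA images per plane):

```python
def composite_over(planes):
    """Back-to-front 'over' compositing of MPI planes at one pixel.
    Each plane is (color, alpha); planes are ordered back to front."""
    out = 0.0
    for color, alpha in planes:
        out = color * alpha + out * (1.0 - alpha)
    return out

# Three toy planes: opaque far background, semi-transparent mid plane,
# nearly transparent near wisp.
planes = [(0.8, 1.0), (0.2, 0.5), (1.0, 0.1)]
print(round(composite_over(planes), 3))
```

Blending adjacent local light fields, as the abstract describes, then amounts to weighted averaging of several such composited renderings.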
••
03 Aug 2019
TL;DR: An Attentive but Diverse Network (ABD-Net) is proposed, which seamlessly integrates attention modules and diversity regularizations throughout the entire network to learn features that are representative, robust, and more discriminative.
Abstract: Attention mechanisms have been found effective for person re-identification (Re-ID). However, the learned "attentive" features are often not naturally uncorrelated or "diverse", which compromises the retrieval performance based on the Euclidean distance. We advocate the complementary powers of attention and diversity for Re-ID, by proposing an Attentive but Diverse Network (ABD-Net). ABD-Net seamlessly integrates attention modules and diversity regularizations throughout the entire network to learn features that are representative, robust, and more discriminative. Specifically, we introduce a pair of complementary attention modules, focusing on channel aggregation and position awareness, respectively. Then, we plug in a novel orthogonality constraint that efficiently enforces diversity on both hidden activations and weights. Through an extensive set of ablation studies, we verify that the attentive and diverse terms each contribute to the performance boosts of ABD-Net. It consistently outperforms existing state-of-the-art methods on three popular person Re-ID benchmarks.
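A common way to realize the kind of orthogonality constraint described above is a soft Frobenius-norm penalty on the Gram matrix of the weight columns; ABD-Net's own regularizer is a more refined spectral variant, so the sketch below is illustrative of the idea rather than the paper's exact loss:

```python
def soft_orthogonality_penalty(W):
    """Frobenius-norm penalty ||W^T W - I||_F^2 for a weight matrix W
    given as a list of rows. Zero iff the columns are orthonormal, so
    minimizing it pushes learned features toward being uncorrelated."""
    n_cols = len(W[0])
    penalty = 0.0
    for i in range(n_cols):
        for j in range(n_cols):
            dot = sum(row[i] * row[j] for row in W)  # Gram entry (i, j)
            target = 1.0 if i == j else 0.0
            penalty += (dot - target) ** 2
    return penalty

orthonormal = [[1.0, 0.0], [0.0, 1.0]]   # orthonormal columns -> 0 penalty
correlated  = [[1.0, 1.0], [0.0, 0.0]]   # identical columns -> penalized
print(soft_orthogonality_penalty(orthonormal))
print(soft_orthogonality_penalty(correlated))
```

In training, a weighted version of this term is simply added to the Re-ID loss for the chosen activations and weight matrices.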
••
TL;DR: The biological interplay of the gut-brain axis is discussed, how this communication may be dysregulated in neurological diseases is explored, and new insights into modification of gut microbiota composition are highlighted.
Abstract: Development of the central nervous system (CNS) is regulated by both intrinsic and peripheral signals. Previous studies have suggested that environmental factors affect neurological activities under both physiological and pathological conditions. Although there is anatomical separation, emerging evidence has indicated the existence of bidirectional interaction between the gut microbiota (i.e., the diverse microorganisms colonizing the human intestine) and the brain. This cross-talk between gut microbiota and brain may have a crucial impact during basic neurogenerative processes, in neurodegenerative disorders, and in tumors of the CNS. In this review, we discuss the biological interplay of the gut-brain axis, and further explore how this communication may be dysregulated in neurological diseases. Further, we highlight new insights into the modification of gut microbiota composition, which may emerge as a promising therapeutic approach to treat CNS disorders.
••
TL;DR: In this paper, the authors presented MERIT Hydro, a new global flow direction map at 3-arc sec resolution (90 m at the equator) derived from the latest elevation data (MERIT DEM) and water body data sets (G1WBM, Global Surface Water Occurrence, and OpenStreetMap).
Abstract: High-resolution raster hydrography maps are a fundamental data source for many geoscience applications. Here we introduce MERIT Hydro, a new global flow direction map at 3-arc sec resolution (~90 m at the equator) derived from the latest elevation data (MERIT DEM) and water body data sets (G1WBM, Global Surface Water Occurrence, and OpenStreetMap). We developed a new algorithm to extract river networks near automatically by separating actual inland basins from dummy depressions caused by the errors in input elevation data. After a minimum amount of hand editing, the constructed hydrography map shows good agreement with existing quality-controlled river network data sets in terms of flow accumulation area and river basin shape. The location of river streamlines was realistically aligned with existing satellite-based global river channel data. Relative error in the drainage area was <0.05 for 90% of Global Runoff Data Center (GRDC) gauges, confirming the accuracy of the delineated global river networks. Discrepancies in flow accumulation area were found mostly in arid river basins containing depressions that are occasionally connected at high water levels and thus resulting in uncertain watershed boundaries. MERIT Hydro improves on existing global hydrography data sets in terms of spatial coverage (between N90 and S60) and representation of small streams, mainly due to increased availability of high-quality baseline geospatial data sets. The new flow direction and flow accumulation maps, along with accompanying supplementary layers on hydrologically adjusted elevation and channel width, will advance geoscience studies related to river hydrology at both global and local scales.
Plain Language Summary: Rivers play important roles in global hydrological and biogeochemical cycles, and many socioeconomic activities also depend on water resources in river basins.
Global‐scale frontier studies of river networks and surface waters require that all rivers on the Earth are precisely mapped at high resolution, but until now, no such map has been produced. Here we present “MERIT Hydro,” the first high‐resolution, global map of river networks developed by combining the latest global map of land surface elevation with the latest maps of water bodies that were built using satellites and open databases. Surface flow direction of each 3‐arc sec pixel (~90‐m size at the equator) is mapped across the entire globe except Antarctica, and many supplemental maps (such as flow accumulation area, river width, and a vectorized river network) are generated. MERIT Hydro thus represents a major advance in our ability to represent the global river network and is a data set that is anticipated to enhance a wide range of geoscience applications including flood risk assessment, aquatic carbon emissions, and climate modeling.
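Flow accumulation, one of the supplementary layers mentioned above, can be computed from a flow-direction map by sweeping cells from the headwaters to the outlet. A toy sketch on a four-cell network (D8 flow directions reduced to a cell-to-downstream-cell mapping; the network here is invented for illustration, and real grids like MERIT Hydro's hold billions of pixels):

```python
def flow_accumulation(downstream):
    """Upslope-area count for each cell of a flow-direction map:
    each cell drains to exactly one downstream cell (D8-style), or to
    None at an outlet. Accumulation counts the cell itself plus every
    cell that drains through it."""
    # Count how many cells flow directly into each cell.
    indegree = {c: 0 for c in downstream}
    for c, d in downstream.items():
        if d is not None:
            indegree[d] += 1
    acc = {c: 1 for c in downstream}
    # Kahn-style topological sweep from headwaters toward the outlet.
    queue = [c for c, n in indegree.items() if n == 0]
    while queue:
        c = queue.pop()
        d = downstream[c]
        if d is not None:
            acc[d] += acc[c]
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    return acc

# Toy network: two headwater cells join at 'C', which drains to outlet 'D'.
net = {"A": "C", "B": "C", "C": "D", "D": None}
print(flow_accumulation(net))   # {'A': 1, 'B': 1, 'C': 3, 'D': 4}
```

Multiplying each count by the pixel area gives the drainage area that the abstract validates against GRDC gauges.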
••
TL;DR: In this article, a search for invisible decays of a Higgs boson produced via vector boson fusion is performed using proton-proton collision data collected with the CMS detector at the LHC in 2016 at a center-of-mass energy of √s = 13 TeV, corresponding to an integrated luminosity of 35.9 fb⁻¹.
••
07 Aug 2019
TL;DR: In this article, the authors demonstrate an effective method to prevent the oxidation of colloidal Ti3C2Tx MXene nanosheets by using sodium L-ascorbate as an antioxidant.
Abstract: Although MXene nanosheets have attracted significant scientific and industrial attention, these materials are highly susceptible to oxidation, which leads to their chemical degradation and loss of functional properties in a matter of days. Here we demonstrate an effective method to prevent the oxidation of colloidal Ti3C2Tx MXene nanosheets by using sodium L-ascorbate as an antioxidant. The success of the method is evident in the stable morphology, structure, and colloidal stability of Ti3C2Tx. Even in the presence of water and oxygen, the electrical conductivity of Ti3C2Tx nanosheets treated with sodium L-ascorbate was orders of magnitude higher than that of untreated nanosheets after 21 days. This resistance to oxidation also persists in the dried state. We propose that the sodium L-ascorbate protects the edges of the nanosheets, restricting water molecules from otherwise reactive sites; this is supported by molecular dynamics simulations that show association of the ascorbate anion with the nanosheet edge.
•
TL;DR: An algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields.
Abstract: We present a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Previous approaches either require intractably dense view sampling or provide little to no guidance for how users should sample views of a scene to reliably render high-quality novel views. Instead, we propose an algorithm for view synthesis from an irregular grid of sampled views that first expands each sampled view into a local light field via a multiplane image (MPI) scene representation, then renders novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist-rate view sampling while using up to 4000x fewer views. We demonstrate our approach's practicality with an augmented reality smartphone app that guides users to capture input images of a scene and viewers that enable real-time virtual exploration on desktop and mobile platforms.
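The "local light field via an MPI" step rests on standard alpha compositing of depth planes. The sketch below composites an MPI's planes back-to-front with the "over" operator; the per-view homography warping and the blending across neighbouring MPIs are omitted, and the toy plane values are illustrative, not from the paper.

```python
import numpy as np

# Minimal MPI compositing: each depth plane stores RGB + alpha, and planes
# are combined back-to-front with the standard "over" operator. Warping the
# planes into a novel viewpoint (homographies) is omitted from this sketch.

def composite_mpi(rgb, alpha):
    """rgb: (D, H, W, 3) plane colors, alpha: (D, H, W) plane opacities,
    plane 0 = farthest. Returns the (H, W, 3) composited image."""
    out = np.zeros(rgb.shape[1:])
    for d in range(rgb.shape[0]):          # back to front
        a = alpha[d][..., None]
        out = rgb[d] * a + out * (1.0 - a)  # "over" operator
    return out

# Two planes over a 1x1 image: opaque red far plane, half-transparent blue near.
rgb = np.array([[[[1.0, 0.0, 0.0]]], [[[0.0, 0.0, 1.0]]]])
alpha = np.array([[[1.0]], [[0.5]]])
result = composite_mpi(rgb, alpha)[0, 0]
print(result)   # → [0.5 0.  0.5]
```

Rendering a novel view then amounts to warping each plane toward the new camera before running this same compositing pass.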
••
TL;DR: The synthesis and porosity of MOFs are first introduced through representative examples relevant to food safety, and the application of MOFs and MOF-based materials in food safety monitoring and in food processing (covering preservation, sanitation, and packaging) is then overviewed.
Abstract: Food safety is a prevalent concern around the world. As such, detection, removal, and control of risks and hazardous substances present from harvest to consumption will always be necessary. Metal-organic frameworks (MOFs), a class of functional materials, possess unique physical and chemical properties, demonstrating promise in food safety applications. In this review, the synthesis and porosity of MOFs are first introduced through representative examples that pertain to the field of food safety. Following that, the application of MOFs and MOF-based materials in food safety monitoring and in food processing, covering preservation, sanitation, and packaging, is overviewed. Future perspectives, as well as potential opportunities and challenges faced by MOFs in this field, are also discussed. This review aims to promote the development and progress of MOF chemistry and application research in the field of food safety, potentially leading to novel solutions.
••
Joint Genome Institute, University of Liverpool, Radboud University Nijmegen, University of Guelph, Catholic University of Leuven, University of Cape Town, Arizona State University, European Bioinformatics Institute, Cairo University, Vanderbilt University, University of South Florida, Colorado State University, University of Michigan, University of California, Davis, University of Auvergne, University of Southern California, University of Queensland, University of Arizona, Texas A&M University, National Institute of Genetics, University of Alicante, Kyoto University, Université Paris-Saclay, University of Chicago, University of Los Andes, Universidad Miguel Hernández de Elche, University of Maryland, Baltimore, University of Hawaii at Manoa, Ohio State University, École Polytechnique Fédérale de Lausanne, University of British Columbia, University of Exeter, Oregon State University, Australian Institute of Marine Science, University of California, Irvine, University of Tennessee, University of Delaware, Max Planck Society, Montana State University, J. Craig Venter Institute, University of California, San Diego
TL;DR: The MIUViG (Minimum Information about an Uncultivated Virus Genome) standard was developed within the Genomic Standards Consortium framework and includes virus origin, genome quality, genome annotation, taxonomic classification, biogeographic distribution, and in silico host prediction.
Abstract: We present an extension of the Minimum Information about any (x) Sequence (MIxS) standard for reporting sequences of uncultivated virus genomes. Minimum Information about an Uncultivated Virus Genome (MIUViG) standards were developed within the Genomic Standards Consortium framework and include virus origin, genome quality, genome annotation, taxonomic classification, biogeographic distribution and in silico host prediction. Community-wide adoption of MIUViG standards, which complement the Minimum Information about a Single Amplified Genome (MISAG) and Metagenome-Assembled Genome (MIMAG) standards for uncultivated bacteria and archaea, will improve the reporting of uncultivated virus genomes in public databases. In turn, this should enable more robust comparative studies and a systematic exploration of the global virosphere.
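For concreteness, the kind of record MIUViG-style reporting implies can be pictured as a simple structure. The keys below mirror the six categories named in the abstract, but the exact field names and all values are hypothetical illustrations, not the official MIxS/MIUViG tokens.

```python
# Hypothetical MIUViG-style record; keys mirror the categories named above,
# values are invented for illustration and are not official MIxS tokens.
miuvig_record = {
    "virus_origin": "metagenome-assembled (marine surface water)",
    "genome_quality": "high-quality draft",
    "genome_annotation": {"predicted_genes": 87, "tool": "example-annotator"},
    "taxonomic_classification": "unclassified Caudoviricetes (illustrative)",
    "biogeographic_distribution": ["North Atlantic", "Mediterranean Sea"],
    "in_silico_host_prediction": {"host": "Pelagibacter sp.",
                                  "method": "CRISPR spacer match"},
}
print(sorted(miuvig_record))
```

Structured records of this shape are what make the comparative studies mentioned above possible: databases can filter and join on the same fields across submissions.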
••
TL;DR: A miniaturized bio-optoelectronic implant using an optical stimulation interface that exploits microscale inorganic light-emitting diodes to activate opsins and a soft, high-precision biophysical sensor system that allows continuous measurements of organ function is introduced.
Abstract: The fast-growing field of bioelectronic medicine aims to develop engineered systems that can relieve clinical conditions by stimulating the peripheral nervous system [1–5]. This type of technology relies largely on electrical stimulation to provide neuromodulation of organ function or pain. One example is sacral nerve stimulation to treat overactive bladder, urinary incontinence and interstitial cystitis (also known as bladder pain syndrome) [4,6,7]. Conventional, continuous stimulation protocols, however, can cause discomfort and pain, particularly when treating symptoms that can be intermittent (for example, sudden urinary urgency) [8]. Direct physical coupling of electrodes to the nerve can lead to injury and inflammation [9–11]. Furthermore, typical therapeutic stimulators target large nerve bundles that innervate multiple structures, resulting in a lack of organ specificity. Here we introduce a miniaturized bio-optoelectronic implant that avoids these limitations by using (1) an optical stimulation interface that exploits microscale inorganic light-emitting diodes to activate opsins; (2) a soft, high-precision biophysical sensor system that allows continuous measurements of organ function; and (3) a control module and data analytics approach that enables coordinated, closed-loop operation of the system to eliminate pathological behaviours as they occur in real time. In the example reported here, a soft strain gauge yields real-time information on bladder function in a rat model. Data algorithms identify pathological behaviour, and automated, closed-loop optogenetic neuromodulation of bladder sensory afferents normalizes bladder function. This all-optical scheme for neuromodulation offers chronic stability and the potential to stimulate specific cell types. A closed-loop implantable bioelectronic device that can modulate peripheral neuronal activity is used to improve bladder function in a rat model of cystitis.
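The closed-loop logic of sense, detect, stimulate can be sketched as a toy controller. The mean-over-threshold detector with hysteresis below is a stand-in of my own for the paper's data-analytics module; the real system reads a soft strain gauge and drives micro-LEDs to activate opsins.

```python
# Toy closed-loop controller: watch a window of sensor samples and toggle
# stimulation with hysteresis. The detector here is a hypothetical stand-in
# for the paper's data-analytics module, not its actual algorithm.

def closed_loop_step(window, threshold, stim_on):
    """Return the new stimulation state given recent sensor samples."""
    level = sum(window) / len(window)
    if level > threshold:
        return True                  # pathological activity -> stimulate
    if stim_on and level < 0.8 * threshold:
        return False                 # well below threshold -> stop stimulating
    return stim_on                   # otherwise hold the current state

# Simulated strain trace: normal, then a pathological burst, then recovery.
trace = [0.3, 0.4, 1.5, 1.6, 1.4, 0.9, 0.5, 0.4]
stim, states = False, []
for i in range(2, len(trace)):
    stim = closed_loop_step(trace[i - 2:i + 1], threshold=1.0, stim_on=stim)
    states.append(stim)
print(states)   # → [False, True, True, True, True, False]
```

The hysteresis band keeps the stimulator from chattering on and off near the threshold, one reason closed-loop schemes are gentler than the continuous stimulation protocols criticized above.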
••
TL;DR: The authors provide a reference genome assembly and show that gene expansion is involved in the regulation of frequent molting as well as in the benthic adaptation of the shrimp.
Abstract: Crustacea, the subphylum of Arthropoda which dominates the aquatic environment, is of major importance in ecology and fisheries. Here we report the genome sequence of the Pacific white shrimp Litopenaeus vannamei, covering ~1.66 Gb (scaffold N50 605.56 Kb) with 25,596 protein-coding genes and a high proportion of simple sequence repeats (>23.93%). The expansion of genes related to vision and locomotion is probably central to its benthic adaptation. Frequent molting of the shrimp may be explained by an intensified ecdysone signal pathway through gene expansion and positive selection. As an important aquaculture organism, L. vannamei has been subjected to high selection pressure during the past 30 years of breeding, and this has had a considerable impact on its genome. Decoding the L. vannamei genome not only provides an insight into the genetic underpinnings of specific biological processes, but also provides valuable information for enhancing crustacean aquaculture. The Pacific white shrimp Litopenaeus vannamei is an important aquaculture species and a promising model for crustacean biology. Here, the authors provide a reference genome assembly, and show that gene expansion is involved in the regulation of frequent molting as well as benthic adaptation of the shrimp.
•
TL;DR: Li et al. propose graph pooling and unpooling operations for graph representation learning and apply them to node classification and graph classification tasks, achieving better performance than previous methods.
Abstract: We consider the problem of representation learning for graph data. Convolutional neural networks can naturally operate on images but face significant challenges in dealing with graph data. Given that images are special cases of graphs whose nodes lie on 2D lattices, graph embedding tasks have a natural correspondence with image pixel-wise prediction tasks such as segmentation. While encoder-decoder architectures like U-Nets have been successfully applied to many image pixel-wise prediction tasks, similar methods are lacking for graph data, because pooling and up-sampling operations are not natural on graphs. To address these challenges, we propose novel graph pooling (gPool) and unpooling (gUnpool) operations in this work. The gPool layer adaptively selects some nodes to form a smaller graph based on their scalar projection values on a trainable projection vector. We further propose the gUnpool layer as the inverse operation of the gPool layer. The gUnpool layer restores the graph to its original structure using the position information of the nodes selected in the corresponding gPool layer. Based on the proposed gPool and gUnpool layers, we develop an encoder-decoder model on graphs, known as the graph U-Net. Our experimental results on node classification and graph classification tasks demonstrate that our methods achieve consistently better performance than previous models.
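The gPool selection step described above is compact enough to sketch in numpy: score nodes by scalar projection onto a trainable vector, keep the top-k, and gate their features so gradients can flow into the projection vector. This follows the paper's description but is a sketch, not the authors' implementation, and the variable names are my own.

```python
import numpy as np

# Sketch of the gPool operation: nodes are scored by their scalar projection
# onto a trainable vector p, the top-k are kept, and a tanh gate on the kept
# features lets gradients reach p during training.

def gpool(X, A, p, k):
    """X: (N, F) node features, A: (N, N) adjacency, p: (F,) projection
    vector. Returns pooled features, pooled adjacency, and kept indices."""
    y = X @ p / np.linalg.norm(p)        # scalar projection score per node
    idx = np.sort(np.argsort(y)[-k:])    # top-k nodes, original order kept
    gate = np.tanh(y[idx])[:, None]      # gating makes p trainable
    return X[idx] * gate, A[np.ix_(idx, idx)], idx

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
A = (rng.random((6, 6)) < 0.5).astype(float)
Xp, Ap, idx = gpool(X, A, rng.normal(size=4), k=3)
```

The matching gUnpool layer simply scatters the pooled features back into an all-zeros (N, F) matrix at the saved indices, which is why the positions recorded in `idx` must be kept.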
••
University of Florida, University of Montpellier, University of Göttingen, University of Melbourne, University of Auvergne, University of Sassari, International Maize and Wheat Improvement Center, Commonwealth Scientific and Industrial Research Organisation, Goddard Institute for Space Studies, Pir Mehr Ali Shah Arid Agriculture University, Washington State University, International Institute for Applied Systems Analysis, Comenius University in Bratislava, Michigan State University, University of Florence, James Hutton Institute, CGIAR, University of Leeds, European Food Safety Authority, Gembloux Agro-Bio Tech, University of Bonn, Spanish National Research Council, University of Hohenheim, University of Maryland, College Park, Texas A&M University, Aarhus University, Nanjing Agricultural University, Potsdam Institute for Climate Impact Research, University of Copenhagen, Indian Agricultural Research Institute, SupAgro, Lincoln University (New Zealand), Institut national de la recherche agronomique, Rothamsted Research, Wageningen University and Research Centre, Chinese Academy of Sciences, Beijing Normal University, China Agricultural University
TL;DR: A 32-model multi-model ensemble is tested and applied to simulate global wheat yield and quality in a changing climate; potential benefits of elevated atmospheric CO2 concentration by 2050 are likely to be negated by impacts from rising temperature and changes in rainfall, with considerable disparities between regions.
Abstract: Wheat grain protein concentration is an important determinant of wheat quality for human nutrition that is often overlooked in efforts to improve crop production. We tested and applied an ensemble of 32 wheat models to simulate global wheat yield and quality in a changing climate. Potential benefits of elevated atmospheric CO2 concentration by 2050 on global wheat grain and protein yield are likely to be negated by impacts from rising temperature and changes in rainfall, with considerable disparities between regions. Grain and protein yields are expected to be lower and more variable in most low-rainfall regions, with nitrogen availability limiting the growth stimulus from elevated CO2. Introducing genotypes adapted to warmer temperatures (and also considering changes in CO2 and rainfall) could boost global wheat yield by 7% and protein yield by 2%, but grain protein concentration would be reduced by 1.1 percentage points, a relative change of −8.6%. Climate change adaptations that benefit grain yield are not always positive for grain quality, putting additional pressure on global wheat production.
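The two protein numbers reported above are mutually consistent, and checking that is a one-line calculation: a drop of 1.1 percentage points that equals a relative change of −8.6% implies a baseline grain protein concentration of about 1.1 / 0.086 ≈ 12.8%, a plausible value for wheat.

```python
# Consistency check on the reported numbers: -1.1 percentage points at a
# relative change of -8.6% pins down the implied baseline concentration.
baseline_protein = 1.1 / 0.086
print(round(baseline_protein, 1))   # → 12.8
```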
••
TL;DR: The Fujitsu Digital Annealer is designed to solve fully connected quadratic unconstrained binary optimization (QUBO) problems; it is implemented on application-specific CMOS hardware and currently solves problems of up to 1024 variables.
Abstract: The Fujitsu Digital Annealer is designed to solve fully connected quadratic unconstrained binary optimization (QUBO) problems. It is implemented on application-specific CMOS hardware and currently solves problems of up to 1024 variables. The Digital Annealer's algorithm is currently based on simulated annealing; however, it differs from it in its utilization of an efficient parallel-trial scheme and a dynamic escape mechanism. In addition, the Digital Annealer exploits the massive parallelization that custom application-specific CMOS hardware allows. We compare the performance of the Digital Annealer to simulated annealing and parallel tempering with isoenergetic cluster moves on two-dimensional and fully connected spin-glass problems with bimodal and Gaussian couplings. These represent the respective limits of sparse versus dense problems, as well as high-degeneracy versus low-degeneracy problems. Our results show that the Digital Annealer currently exhibits a time-to-solution speedup of roughly two orders of magnitude for fully connected spin-glass problems with bimodal or Gaussian couplings, over the single-core implementations of simulated annealing and parallel tempering Monte Carlo used in this study. The Digital Annealer does not appear to exhibit a speedup for sparse two-dimensional spin-glass problems, which we explain on theoretical grounds. We also benchmarked an early implementation of the Parallel Tempering Digital Annealer. Our results suggest an improved scaling over the other algorithms for fully connected problems of average difficulty with bimodal disorder. The next generation of the Digital Annealer is expected to be able to solve fully connected problems up to 8192 variables in size. This would enable the study of fundamental physics problems and industrial applications that were previously inaccessible using standard computing hardware or special-purpose quantum annealing machines.
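The simulated-annealing baseline the Digital Annealer is compared against can be sketched directly. The minimal single-flip version below is illustrative only: the Digital Annealer differs by evaluating all candidate flips in parallel (the "parallel-trial" scheme) and by its dynamic escape mechanism, neither of which is reproduced here, and the schedule parameters are my own choices.

```python
import numpy as np

# Minimal single-flip simulated annealing for a QUBO E(x) = x^T Q x over
# binary x. Uses the exact O(n) energy change of flipping one bit rather
# than recomputing E(x) from scratch at every proposal.

def anneal_qubo(Q, sweeps=2000, beta0=0.1, beta1=5.0, seed=0):
    """Return the best binary vector found and its energy."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    best_x, best_e = x.copy(), float(x @ Q @ x)
    for t in range(sweeps):
        beta = beta0 + (beta1 - beta0) * t / (sweeps - 1)   # linear schedule
        i = rng.integers(n)
        # Energy change of flipping bit i (exact for binary QUBO):
        dE = (1 - 2 * x[i]) * (Q[i, i] + (Q[i] + Q[:, i]) @ x - 2 * Q[i, i] * x[i])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):    # Metropolis rule
            x[i] ^= 1
            e = float(x @ Q @ x)
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e

# Tiny example with a unique optimum at x = (1, 1), where E = -2 + 1 - 2 = -3.
Q = np.array([[-2.0, 1.0], [0.0, -2.0]])
x_best, e_best = anneal_qubo(Q)
```

The parallel-trial idea amounts to computing this same dE for every bit at once and picking among the accepted candidates, which is where the custom CMOS hardware earns its speedup on dense problems.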