
Journal ArticleDOI
22 Dec 2017-Science
TL;DR: In this paper, the authors present ultraviolet, optical, and infrared light curves of SSS17a extending from 10.9 hours to 18 days post-merger; the late-time light curve indicates that SSS17a produced at least ~0.05 solar masses of heavy elements, demonstrating that neutron star mergers play a role in rapid neutron capture (r-process) nucleosynthesis in the universe.
Abstract: On 17 August 2017, gravitational waves (GWs) were detected from a binary neutron star merger, GW170817, along with a coincident short gamma-ray burst, GRB 170817A. An optical transient source, Swope Supernova Survey 17a (SSS17a), was subsequently identified as the counterpart of this event. We present ultraviolet, optical, and infrared light curves of SSS17a extending from 10.9 hours to 18 days postmerger. We constrain the radioactively powered transient resulting from the ejection of neutron-rich material. The fast rise of the light curves, subsequent decay, and rapid color evolution are consistent with multiple ejecta components of differing lanthanide abundance. The late-time light curve indicates that SSS17a produced at least ~0.05 solar masses of heavy elements, demonstrating that neutron star mergers play a role in rapid neutron capture (r-process) nucleosynthesis in the universe.

582 citations


Journal ArticleDOI
TL;DR: The authors identify the issues facing water managers today and the future research needed to better inform those who strive to create a more sustainable and desirable future, especially given a changing and uncertain climate and a rapidly growing population that is driving increased social and economic development, globalization, and urbanization.
Abstract: Water distinguishes our planet compared to all the others we know about. While the global supply of available freshwater is more than adequate to meet all current and foreseeable water demands, its spatial and temporal distributions are not. There are many regions where our freshwater resources are inadequate to meet domestic, economic development and environmental needs. In such regions, the lack of adequate clean water to meet human drinking water and sanitation needs is indeed a constraint on human health and productivity and hence on economic development as well as on the maintenance of a clean environment and healthy ecosystems. All of us involved in research must find ways to remove these constraints. We face multiple challenges in doing that, especially given a changing and uncertain future climate, and a rapidly growing population that is driving increased social and economic development, globalization, and urbanization. How best to meet these challenges requires research in all aspects of water management. Since 1965, the journal Water Resources Research has played an important role in reporting and disseminating current research related to managing the quantity and quality and cost of this resource. This paper identifies the issues facing water managers today and future research needed to better inform those who strive to create a more sustainable and desirable future.

582 citations


Journal ArticleDOI
01 Mar 2018
TL;DR: In this article, a survey-based research study examines student perceptions of various engagement strategies used in online courses, based on Moore's interaction framework, and concludes that student engagement increases student satisfaction, enhances student motivation to learn, reduces the sense of isolation, and improves student performance.
Abstract: Student engagement increases student satisfaction, enhances student motivation to learn, reduces the sense of isolation, and improves student performance in online courses. This survey-based research study examines student perceptions of various engagement strategies used in online courses based on Moore’s interaction framework. One hundred fifty-five students completed a 38-item survey on learner-to-learner, learner-to-instructor, and learner-to-content engagement strategies. Learner-to-instructor engagement strategies seemed to be most valued among the three categories. Icebreaker/introduction discussions and working collaboratively using online communication tools were rated the most beneficial engagement strategies in the learner-to-learner category, whereas sending regular announcements or e-mail reminders and providing grading rubrics for all assignments were rated the most beneficial in the learner-to-instructor category. In the learner-to-content category, students indicated that working on real-world projects and having discussions with structured or guiding questions were the most beneficial. This study also analyzed differences by age, gender, and years of online learning experience in perceptions of engagement strategies. The results of the study have implications for online instructors, instructional designers, and administrators who wish to enhance engagement in online courses.

582 citations


Journal ArticleDOI
TL;DR: The Lancet Countdown tracks 41 indicators across five domains: climate change impacts, exposures, and vulnerability; adaptation, planning, and resilience for health; mitigation actions and health co-benefits; finance and economics; and public and political engagement.

582 citations


Journal ArticleDOI
12 May 2015-ACS Nano
TL;DR: This review highlights the latest advances in nanocomposite hydrogels as drug delivery vehicles; the incorporation of nanoparticles into three-dimensional polymeric structures is an innovative means of obtaining multicomponent systems with diverse functionality within a hybrid hydrogel network.
Abstract: Considerable progress in the synthesis and technology of hydrogels makes these materials attractive structures for designing controlled-release drug delivery systems. In particular, this review highlights the latest advances in nanocomposite hydrogels as drug delivery vehicles. The inclusion/incorporation of nanoparticles in three-dimensional polymeric structures is an innovative means for obtaining multicomponent systems with diverse functionality within a hybrid hydrogel network. Nanoparticle-hydrogel combinations add synergistic benefits to the new 3D structures. Nanogels as carriers for cancer therapy and injectable gels with improved self-healing properties have also been described as new nanocomposite systems.

582 citations


Posted Content
TL;DR: This paper studies a D-PSGD algorithm and provides the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent.
Abstract: Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies in the high communication cost at the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. This is because D-PSGD has comparable total computational complexity to C-PSGD but requires much less communication cost on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.
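To make the communication pattern concrete, here is a minimal NumPy sketch (ours, not the paper's code) of decentralized parallel SGD on a ring: each worker averages its model only with its two neighbors, then takes a local stochastic gradient step on its own data shard. The toy least-squares problem, sizes, and step size are illustrative assumptions.

```python
# Illustrative D-PSGD-style loop on a ring topology (toy problem, our names).
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, n_local = 8, 5, 200

# Each worker owns a private data shard drawn from the same linear model.
w_true = rng.normal(size=dim)
X = [rng.normal(size=(n_local, dim)) for _ in range(n_workers)]
y = [x @ w_true + 0.1 * rng.normal(size=n_local) for x in X]

# Ring mixing matrix: every worker averages itself with its two neighbors.
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    W[i, i] = W[i, (i - 1) % n_workers] = W[i, (i + 1) % n_workers] = 1 / 3

models = np.zeros((n_workers, dim))
lr = 0.05
for step in range(300):
    # 1) Gossip step: communicate only with ring neighbors (no central node).
    models = W @ models
    # 2) Local stochastic gradient step on a sampled mini-batch.
    for i in range(n_workers):
        idx = rng.integers(0, n_local, size=16)
        grad = X[i][idx].T @ (X[i][idx] @ models[i] - y[i][idx]) / len(idx)
        models[i] -= lr * grad

print("consensus error:", np.linalg.norm(models.mean(0) - w_true))
```

The point of the pattern is that the busiest node only ever talks to two neighbors, whereas a parameter server would receive traffic from all workers.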

582 citations


Journal ArticleDOI
TL;DR: In this paper, the performances of traditional technologies and nanotechnology for water treatment and environmental remediation were compared with the goal of providing an up-to-date reference on the state of treatment techniques for researchers, industry, and policy makers.

582 citations


Journal ArticleDOI
TL;DR: A density functional theory that accounts for van der Waals interactions in condensed matter, materials physics, chemistry, and biology is reviewed, and the value of the vdW-DF method is highlighted as a general-purpose method, not only for dispersion-bound systems but also for densely packed systems where these types of interactions are traditionally thought to be negligible.
Abstract: A density functional theory (DFT) that accounts for van der Waals (vdW) interactions in condensed matter, materials physics, chemistry, and biology is reviewed. The insights that led to the construction of the Rutgers–Chalmers van der Waals density functional (vdW-DF) are presented with the aim of giving a historical perspective, while also emphasizing more recent efforts which have sought to improve its accuracy. In addition to technical details, we discuss a range of recent applications that illustrate the necessity of including dispersion interactions in DFT. This review highlights the value of the vdW-DF method as a general-purpose method, not only for dispersion bound systems, but also in densely packed systems where these types of interactions are traditionally thought to be negligible.
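For readers unfamiliar with the method, the key structural feature of vdW-DF (our summary from the vdW-DF literature, not stated in the abstract above) is a nonlocal correlation term added on top of semilocal exchange and local correlation:

```latex
% Schematic form of the vdW-DF energy functional; the choice of exchange
% functional varies between flavours of the method.
E_{xc}^{\mathrm{vdW\text{-}DF}} = E_{x}^{\mathrm{GGA}} + E_{c}^{\mathrm{LDA}} + E_{c}^{\mathrm{nl}},
\qquad
E_{c}^{\mathrm{nl}} = \frac{1}{2}\int d^{3}r \int d^{3}r'\,
n(\mathbf{r})\,\phi(\mathbf{r},\mathbf{r}')\,n(\mathbf{r}').
```

Because the kernel φ depends on the density and its gradient at both points, the dispersion interaction emerges from the electron density itself rather than from a pairwise atomic correction, which is what makes the functional usable as a general-purpose method.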

582 citations


Posted Content
TL;DR: In this article, a grasp quality convolutional neural network (GQ-CNN) is trained from a synthetic dataset of 6.7 million point clouds, grasps and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table.
Abstract: To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table. We use the resulting dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly predicts the probability of success of grasps from depth images, where grasps are specified as the planar position, angle, and depth of a gripper relative to an RGB-D sensor. Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8sec with a success rate of 93% on eight known objects with adversarial geometry and is 3x faster than registering point clouds to a precomputed dataset of objects and indexing grasps. The Dex-Net 2.0 grasp planner also has the highest success rate on a dataset of 10 novel rigid objects and achieves 99% precision (one false positive out of 69 grasps classified as robust) on a dataset of 40 novel household objects, some of which are articulated or deformable. Code, datasets, videos, and supplementary material are available at this http URL .
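The planning loop is easy to picture in code. The sketch below is illustrative only: the tiny architecture, input sizes, and names are placeholders of ours, not Dex-Net 2.0's GQ-CNN. It shows the core idea of scoring depth-image crops aligned to candidate grasps and executing the top-ranked one.

```python
# Toy grasp-quality ranking sketch (placeholder architecture, random inputs).
import torch
import torch.nn as nn

class TinyGraspQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        # The gripper depth relative to the camera enters as an extra scalar.
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 1, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # predicted probability of success
        )

    def forward(self, depth_crop, grasp_depth):
        f = self.features(depth_crop).flatten(1)
        return self.head(torch.cat([f, grasp_depth], dim=1)).squeeze(1)

# Rank a batch of candidate grasps for one scene; random tensors stand in for
# 28x28 depth crops centered and rotated to each candidate grasp axis.
net = TinyGraspQualityNet()
crops = torch.randn(64, 1, 28, 28)
depths = torch.rand(64, 1)
with torch.no_grad():
    scores = net(crops, depths)
best = scores.argmax().item()
print("execute candidate", best, "predicted success", scores[best].item())
```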

582 citations


Journal ArticleDOI
23 Feb 2018-Science
TL;DR: It is found that global patterns of fishing have surprisingly low sensitivity to short-term economic and environmental variation and a strong response to cultural and political events such as holidays and closures.
Abstract: Although fishing is one of the most widespread activities by which humans harvest natural resources, its global footprint is poorly understood and has never been directly quantified. We processed 22 billion automatic identification system messages and tracked >70,000 industrial fishing vessels from 2012 to 2016, creating a global dynamic footprint of fishing effort with spatial and temporal resolution two to three orders of magnitude higher than for previous data sets. Our data show that industrial fishing occurs in >55% of ocean area and has a spatial extent more than four times that of agriculture. We find that global patterns of fishing have surprisingly low sensitivity to short-term economic and environmental variation and a strong response to cultural and political events such as holidays and closures.

582 citations


Proceedings ArticleDOI
09 Aug 2015
TL;DR: A comparison between the results of the approach and the systems participating in the challenge on the official test sets suggests that the model could be ranked in the first two positions in both the phrase-level subtask A and the message-level subtask B on Twitter Sentiment Analysis.
Abstract: This paper describes our deep learning system for sentiment analysis of tweets. The main contribution of this work is a new model for initializing the parameter weights of the convolutional neural network, which is crucial to train an accurate model while avoiding the need to inject any additional features. Briefly, we use an unsupervised neural language model to train initial word embeddings that are further tuned by our deep learning model on a distant supervised corpus. At a final stage, the pre-trained parameters of the network are used to initialize the model. We train the latter on the supervised training data recently made available by the official system evaluation campaign on Twitter Sentiment Analysis organized by Semeval-2015. A comparison between the results of our approach and the systems participating in the challenge on the official test sets suggests that our model could be ranked in the first two positions in both the phrase-level subtask A (among 11 teams) and the message-level subtask B (among 40 teams). This is important evidence of the practical value of our solution.
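The three-stage initialization described above can be sketched as follows (a schematic in PyTorch with placeholder data and a deliberately small model; the real system's architecture and corpora differ): unsupervised embeddings seed the network, the network is tuned on a distant-supervised corpus, and those weights initialize the final supervised model.

```python
# Schematic three-stage initialization (our sketch, not the authors' code).
import torch
import torch.nn as nn

vocab_size, emb_dim, n_classes = 5000, 50, 3

class ConvSentiment(nn.Module):
    def __init__(self, pretrained_emb=None):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        if pretrained_emb is not None:
            self.emb.weight.data.copy_(pretrained_emb)  # stage-wise transfer
        self.conv = nn.Conv1d(emb_dim, 100, kernel_size=5, padding=2)
        self.fc = nn.Linear(100, n_classes)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)        # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values
        return self.fc(x)

# Stage 1: word embeddings from an unsupervised language model (placeholder).
unsup_emb = torch.randn(vocab_size, emb_dim)

# Stage 2: tune the whole network on a large distant-supervised corpus
# (e.g. tweets weakly labeled by emoticons); one dummy step shown here.
distant_model = ConvSentiment(unsup_emb)
opt = torch.optim.Adam(distant_model.parameters(), lr=1e-3)
tokens = torch.randint(0, vocab_size, (32, 40))
noisy_labels = torch.randint(0, n_classes, (32,))
loss = nn.functional.cross_entropy(distant_model(tokens), noisy_labels)
loss.backward(); opt.step()

# Stage 3: initialize the supervised model from those weights and fine-tune
# on the (much smaller) official SemEval training data.
supervised_model = ConvSentiment()
supervised_model.load_state_dict(distant_model.state_dict())
```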

Journal ArticleDOI
TL;DR: Current state-of-the-art findings considering six potential types of biological age predictors are summarized, including epigenetic clocks, telomere length, transcriptomic predictors, proteomic predictors, metabolomics-based predictors, and composite biomarker predictors.


Posted Content
TL;DR: Wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training; the approach outperforms Deep Speech 2, the best reported character-based system in the literature, while using two orders of magnitude less labeled training data.
Abstract: We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available. Our approach achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the best reported character-based system in the literature while using two orders of magnitude less labeled training data.
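A toy version of the pre-training objective looks like this (our sketch, not the wav2vec code; the dimensions, strides, context network, and negative sampling are all simplified assumptions): a convolutional encoder produces latents z, a causal context network produces c, and the model learns to tell true future latents from distractors with a binary noise-contrastive loss.

```python
# Toy contrastive pre-training sketch in the spirit of the paper (ours).
import torch
import torch.nn as nn

class TinyWav2Vec(nn.Module):
    def __init__(self, dim=64, k=3):
        super().__init__()
        self.k = k
        self.encoder = nn.Sequential(                 # raw waveform -> z
            nn.Conv1d(1, dim, 10, stride=5), nn.ReLU(),
            nn.Conv1d(dim, dim, 8, stride=4), nn.ReLU(),
        )
        self.context = nn.Conv1d(dim, dim, 9, padding=8)  # causal context c
        self.predict = nn.Linear(dim, dim)            # step-k prediction head

    def forward(self, wav):                           # wav: (batch, 1, samples)
        z = self.encoder(wav)                         # (batch, dim, T)
        c = self.context(z)[..., : z.shape[-1]]       # trim right side -> causal
        return z, c

def contrastive_loss(z, c, predict, k=3, n_negatives=8):
    B, D, T = z.shape
    c_t = predict(c[..., : T - k].transpose(1, 2))    # predictions for z_{t+k}
    z_pos = z[..., k:].transpose(1, 2)                # true future latents
    pos = torch.sigmoid((c_t * z_pos).sum(-1)).clamp_min(1e-7).log().mean()
    neg_idx = torch.randint(0, T - k, (n_negatives,))
    z_neg = z[..., neg_idx].transpose(1, 2)           # distractor latents
    neg = torch.sigmoid(-(c_t.unsqueeze(2) * z_neg.unsqueeze(1)).sum(-1))
    neg = neg.clamp_min(1e-7).log().mean()
    return -(pos + neg)                               # binary NCE objective

model = TinyWav2Vec()
wav = torch.randn(4, 1, 16000)                        # 1 second at 16 kHz
z, c = model(wav)
loss = contrastive_loss(z, c, model.predict, k=model.k)
loss.backward()
print("contrastive loss:", loss.item())
```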

Journal ArticleDOI
31 May 2017-Nature
TL;DR: It is shown that repeat expansions create templates for multivalent base-pairing, which causes purified RNA to undergo a sol–gel transition in vitro at a similar critical repeat number as observed in the diseases.
Abstract: Expansions of short nucleotide repeats produce several neurological and neuromuscular disorders including Huntington disease, muscular dystrophy, and amyotrophic lateral sclerosis. A common pathological feature of these diseases is the accumulation of the repeat-containing transcripts into aberrant foci in the nucleus. RNA foci, as well as the disease symptoms, only manifest above a critical number of nucleotide repeats, but the molecular mechanism governing foci formation above this characteristic threshold remains unresolved. Here we show that repeat expansions create templates for multivalent base-pairing, which causes purified RNA to undergo a sol-gel transition in vitro at a similar critical repeat number as observed in the diseases. In human cells, RNA foci form by phase separation of the repeat-containing RNA and can be dissolved by agents that disrupt RNA gelation in vitro. Analogous to protein aggregation disorders, our results suggest that the sequence-specific gelation of RNAs could be a contributing factor to neurological disease.

Journal ArticleDOI
TL;DR: The manageable safety profile and promising response rates observed in this study support further investigation of nivolumab plus ipilimumab as a treatment option for this patient population.
Abstract: Importance Most patients with hepatocellular carcinoma (HCC) are diagnosed with advanced disease not eligible for potentially curative therapies; therefore, new treatment options are needed. Combining nivolumab with ipilimumab may improve clinical outcomes compared with nivolumab monotherapy. Objective To assess efficacy and safety of nivolumab plus ipilimumab in patients with advanced HCC who were previously treated with sorafenib. Design, Setting, and Participants CheckMate 040 is a multicenter, open-label, multicohort, phase 1/2 study. In the nivolumab plus ipilimumab cohort, patients were randomized between January 4 and September 26, 2016. Treatment group information was blinded after randomization. Median follow-up was 30.7 months. Data cutoff for this analysis was January 2019. Patients were recruited at 31 centers in 10 countries/territories in Asia, Europe, and North America. Eligible patients had advanced HCC (with/without hepatitis B or C) previously treated with sorafenib. A total of 148 patients were randomized (50 to arm A and 49 each to arms B and C). Interventions Patients were randomized 1:1:1 to either nivolumab 1 mg/kg plus ipilimumab 3 mg/kg, administered every 3 weeks (4 doses), followed by nivolumab 240 mg every 2 weeks (arm A); nivolumab 3 mg/kg plus ipilimumab 1 mg/kg, administered every 3 weeks (4 doses), followed by nivolumab 240 mg every 2 weeks (arm B); or nivolumab 3 mg/kg every 2 weeks plus ipilimumab 1 mg/kg every 6 weeks (arm C). Main Outcomes and Measures Coprimary end points were safety, tolerability, and objective response rate. Duration of response was also measured (investigator assessed with the Response Evaluation Criteria in Solid Tumors v1.1). Results Of 148 total participants, 120 were male (81%). Median (IQR) age was 60 (52.5-66.5). At data cutoff (January 2019), the median follow-up was 30.7 months (IQR, 29.9-34.7). Investigator-assessed objective response rate was 32% (95% CI, 20%-47%) in arm A, 27% (95% CI, 15%-41%) in arm B, and 29% (95% CI, 17%-43%) in arm C. Median (range) duration of response was not reached (8.3-33.7+) in arm A and was 15.2 months (4.2-29.9+) in arm B and 21.7 months (2.8-32.7+) in arm C. Any-grade treatment-related adverse events were reported in 46 of 49 patients (94%) in arm A, 35 of 49 patients (71%) in arm B, and 38 of 48 patients (79%) in arm C; there was 1 treatment-related death (arm A; grade 5 pneumonitis). Conclusions and Relevance In this randomized clinical trial, nivolumab plus ipilimumab had manageable safety, promising objective response rate, and durable responses. The arm A regimen (4 doses nivolumab 1 mg/kg plus ipilimumab 3 mg/kg every 3 weeks then nivolumab 240 mg every 2 weeks) received accelerated approval in the US based on the results of this study. Trial Registration ClinicalTrials.gov Identifier:NCT01658878

Proceedings ArticleDOI
18 May 2015
TL;DR: A technique is presented that searches for enquiry phrases, clusters similar posts together, and then collects related posts that do not contain these simple phrases; the clusters are then ranked by their likelihood of really containing a disputed factual claim.
Abstract: Many previous techniques identify trending topics in social media, even topics that are not pre-defined. We present a technique to identify trending rumors, which we define as topics that include disputed factual claims. Putting aside any attempt to assess whether the rumors are true or false, it is valuable to identify trending rumors as early as possible. It is extremely difficult to accurately classify whether every individual post is or is not making a disputed factual claim. We are able to identify trending rumors by recasting the problem as finding entire clusters of posts whose topic is a disputed factual claim. The key insight is that when there is a rumor, even though most posts do not raise questions about it, there may be a few that do. If we can find signature text phrases that are used by a few people to express skepticism about factual claims and are rarely used to express anything else, we can use those as detectors for rumor clusters. Indeed, we have found a few phrases that seem to be used exactly that way, including: "Is this true?", "Really?", and "What?" Relatively few posts related to any particular rumor use any of these enquiry phrases, but lots of rumor diffusion processes have some posts that do and have them quite early in the diffusion. We have developed a technique based on searching for the enquiry phrases, clustering similar posts together, and then collecting related posts that do not contain these simple phrases. We then rank the clusters by their likelihood of really containing a disputed factual claim. The detector, which searches for the very rare but very informative phrases, combined with clustering and a classifier on the clusters, yields surprisingly good performance. On a typical day of Twitter, about a third of the top 50 clusters were judged to be rumors, a high enough precision that human analysts might be willing to sift through them.
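A compressed sketch of that pipeline, with made-up posts, a hand-picked phrase list, and a naive Jaccard clustering standing in for the paper's components:

```python
# Illustrative enquiry-phrase rumor detection sketch (ours, not the paper's).
ENQUIRY_PHRASES = ("is this true", "really?", "what?", "unconfirmed")

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def detect_rumor_clusters(posts, sim=0.3):
    # 1) Flag posts containing skeptical enquiry phrases.
    flagged = [p for p in posts if any(q in p.lower() for q in ENQUIRY_PHRASES)]
    clusters = []
    for post in flagged:                      # 2) greedy single-pass clustering
        for cluster in clusters:
            if jaccard(post, cluster[0]) >= sim:
                cluster.append(post)
                break
        else:
            clusters.append([post])
    # 3) Attach related posts that discuss the same content without the phrases.
    for cluster in clusters:
        seed = cluster[0]
        cluster += [p for p in posts
                    if p not in cluster and jaccard(p, seed) >= sim]
    # 4) Rank clusters (here simply by size) before a classifier or analyst
    #    judges whether each one really contains a disputed factual claim.
    return sorted(clusters, key=len, reverse=True)

posts = [
    "Celebrity X died in a car crash, is this true?",
    "Heard Celebrity X died in a car crash today",
    "Really? Celebrity X died?",
    "Lovely weather in the park today",
]
for cluster in detect_rumor_clusters(posts):
    print(len(cluster), cluster[0])
```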

Journal ArticleDOI
TL;DR: This randomized clinical trial assesses whether treatment with stereotactic ablative radiotherapy vs observation improves oncologic outcomes in men with recurrent hormone-sensitive oligometastatic prostate cancer.
Abstract: Importance Complete metastatic ablation of oligometastatic prostate cancer may provide an alternative to early initiation of androgen deprivation therapy (ADT). Objective To determine if stereotactic ablative radiotherapy (SABR) improves oncologic outcomes in men with oligometastatic prostate cancer. Design, Setting, and Participants The Observation vs Stereotactic Ablative Radiation for Oligometastatic Prostate Cancer (ORIOLE) phase 2 randomized study accrued participants from 3 US radiation treatment facilities affiliated with a university hospital from May 2016 to March 2018 with a data cutoff date of May 20, 2019, for analysis. Of 80 men screened, 54 men with recurrent hormone-sensitive prostate cancer and 1 to 3 metastases detectable by conventional imaging who had not received ADT within 6 months of enrollment or 3 or more years total were randomized. Interventions Patients were randomized in a 2:1 ratio to receive SABR or observation. Main Outcomes and Measures The primary outcome was progression at 6 months by prostate-specific antigen level increase, progression detected by conventional imaging, symptomatic progression, ADT initiation for any reason, or death. Predefined secondary outcomes were toxic effects of SABR, local control at 6 months with SABR, progression-free survival, Brief Pain Inventory (Short Form)–measured quality of life, and concordance between conventional imaging and prostate-specific membrane antigen (PSMA)–targeted positron emission tomography in the identification of metastatic disease. Results In the 54 men randomized, the median (range) age was 68 (61-70) years for patients allocated to SABR and 68 (64-76) years for those allocated to observation. Progression at 6 months occurred in 7 of 36 patients (19%) receiving SABR and 11 of 18 patients (61%) undergoing observation (P = .005). Treatment with SABR improved median progression-free survival (not reached vs 5.8 months; hazard ratio, 0.30; 95% CI, 0.11-0.81;P = .002). Total consolidation of PSMA radiotracer-avid disease decreased the risk of new lesions at 6 months (16% vs 63%;P = .006). No toxic effects of grade 3 or greater were observed. T-cell receptor sequencing identified significant increased clonotypic expansion following SABR and correlation between baseline clonality and progression with SABR only (0.082085 vs 0.026051;P = .03). Conclusions and Relevance Treatment with SABR for oligometastatic prostate cancer improved outcomes and was enhanced by total consolidation of disease identified by PSMA-targeted positron emission tomography. SABR induced a systemic immune response, and baseline immune phenotype and tumor mutation status may predict the benefit from SABR. These results underline the importance of prospective randomized investigation of the oligometastatic state with integrated imaging and biological correlates. Trial Registration ClinicalTrials.gov Identifier:NCT02680587

Proceedings ArticleDOI
14 Jun 2020
TL;DR: A novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively, which achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations.
Abstract: We study on image super-resolution (SR), which aims to recover realistic textures from a low-resolution (LR) image. Recent progress has been made by taking high-resolution images as references (Ref), so that relevant textures can be transferred to LR images. However, existing SR approaches neglect to use attention mechanisms to transfer high-resolution (HR) textures from Ref images, which limits these approaches in challenging cases. In this paper, we propose a novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively. TTSR consists of four closely-related modules optimized for image generation tasks, including a learnable texture extractor by DNN, a relevance embedding module, a hard-attention module for texture transfer, and a soft-attention module for texture synthesis. Such a design encourages joint feature learning across LR and Ref images, in which deep feature correspondences can be discovered by attention, and thus accurate texture features can be transferred. The proposed texture transformer can be further stacked in a cross-scale way, which enables texture recovery from different levels (e.g., from 1x to 4x magnification). Extensive experiments show that TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations.
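The attention step can be sketched at the level of flattened patch embeddings (our simplification with random tensors; TTSR actually operates on learned multi-scale texture features and fuses the transferred texture with a dedicated network):

```python
# Simplified hard/soft attention texture transfer (illustrative only).
import torch
import torch.nn.functional as F

B, C, N_lr, N_ref = 2, 64, 256, 400   # batch, channels, LR patches, Ref patches

q = torch.randn(B, N_lr, C)           # queries from (upsampled) LR features
k = torch.randn(B, N_ref, C)          # keys from the Ref image features
v = torch.randn(B, N_ref, C)          # values: HR Ref texture features

# Relevance embedding: normalized inner product between query and key patches.
rel = torch.bmm(F.normalize(q, dim=-1), F.normalize(k, dim=-1).transpose(1, 2))

# Hard attention: index of the most relevant Ref patch for each LR position.
soft, hard = rel.max(dim=-1)                       # both (B, N_lr)
transferred = torch.gather(v, 1, hard.unsqueeze(-1).expand(-1, -1, C))

# Soft attention: weight the transferred texture by its relevance before it is
# fused with the LR features (fusion network omitted here).
fused_input = transferred * soft.unsqueeze(-1)
print(fused_input.shape)              # (B, N_lr, C)
```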

Journal ArticleDOI
TL;DR: The U.S. Environmental Protection Agency’s web-based CompTox Chemistry Dashboard is addressing these needs by integrating diverse types of relevant domain data through a cheminformatics layer, built upon a database of curated substances linked to chemical structures.
Abstract: Despite an abundance of online databases providing access to chemical data, there is increasing demand for high-quality, structure-curated, open data to meet the various needs of the environmental sciences and computational toxicology communities. The U.S. Environmental Protection Agency’s (EPA) web-based CompTox Chemistry Dashboard is addressing these needs by integrating diverse types of relevant domain data through a cheminformatics layer, built upon a database of curated substances linked to chemical structures. These data include physicochemical, environmental fate and transport, exposure, usage, in vivo toxicity, and in vitro bioassay data, surfaced through an integration hub with link-outs to additional EPA data and public domain online resources. Batch searching allows for direct chemical identifier (ID) mapping and downloading of multiple data streams in several different formats. This facilitates fast access to available structure, property, toxicity, and bioassay data for collections of chemicals (hundreds to thousands at a time). Advanced search capabilities are available to support, for example, non-targeted analysis and identification of chemicals using mass spectrometry. The contents of the chemistry database, presently containing ~ 760,000 substances, are available as public domain data for download. The chemistry content underpinning the Dashboard has been aggregated over the past 15 years by both manual and auto-curation techniques within EPA’s DSSTox project. DSSTox chemical content is subject to strict quality controls to enforce consistency among chemical substance-structure identifiers, as well as list curation review to ensure accurate linkages of DSSTox substances to chemical lists and associated data. The Dashboard, publicly launched in April 2016, has expanded considerably in content and user traffic over the past year. It is continuously evolving with the growth of DSSTox into high-interest or data-rich domains of interest to EPA, such as chemicals on the Toxic Substances Control Act listing, while providing the user community with a flexible and dynamic web-based platform for integration, processing, visualization and delivery of data and resources. The Dashboard provides support for a broad array of research and regulatory programs across the worldwide community of toxicologists and environmental scientists.

Journal ArticleDOI
Amina Helmi, F. van Leeuwen, Paul J. McMillan, Davide Massari, and 481 more authors (82 institutions)
TL;DR: In this paper, the outstanding quality of the second data release of the Gaia mission and its power for constraining many different aspects of the dynamics of the satellites of the Milky Way are demonstrated, and the statistical and systematic errors are shown to be relatively well understood.
Abstract: Aims: The goal of this paper is to demonstrate the outstanding quality of the second data release of the Gaia mission and its power for constraining many different aspects of the dynamics of the satellites of the Milky Way. We focus here on determining the proper motions of 75 Galactic globular clusters, nine dwarf spheroidal galaxies, one ultra-faint system, and the Large and Small Magellanic Clouds. Methods: Using data extracted from the Gaia archive, we derived the proper motions and parallaxes for these systems, as well as their uncertainties. We demonstrate that the errors, statistical and systematic, are relatively well understood. We integrated the orbits of these objects in three different Galactic potentials, and characterised their properties. We present the derived proper motions, space velocities, and characteristic orbital parameters in various tables to facilitate their use by the astronomical community. Results: Our limited and straightforward analyses have allowed us for example to (i) determine absolute and very precise proper motions for globular clusters; (ii) detect clear rotation signatures in the proper motions of at least five globular clusters; (iii) show that the satellites of the Milky Way are all on high-inclination orbits, but that they do not share a single plane of motion; (iv) derive a lower limit for the mass of the Milky Way of 9.1 (+6.2, −2.6) × 10^11 M⊙ based on the assumption that the Leo I dwarf spheroidal is bound; (v) derive a rotation curve for the Large Magellanic Cloud based solely on proper motions that is competitive with line-of-sight velocity curves, now using many orders of magnitude more sources; and (vi) unveil the dynamical effect of the bar on the motions of stars in the Large Magellanic Cloud. Conclusions: All these results highlight the incredible power of the Gaia astrometric mission, and in particular of its second data release.
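As a rough illustration of point (iv), and not the paper's actual calculation (which integrates orbits in specific Galactic potentials): if Leo I is bound, its kinetic energy cannot exceed the depth of the potential well, which in a point-mass approximation with illustrative round numbers gives a lower bound on the enclosed mass of the same order as the published value.

```latex
% Back-of-envelope bound only, assuming a point-mass potential and round
% numbers r ~ 260 kpc, v_tot ~ 200 km/s for Leo I; the paper's modelling
% yields 9.1 (+6.2, -2.6) x 10^11 M_sun.
\tfrac{1}{2} v_{\mathrm{tot}}^{2} \le \frac{G M(<r)}{r}
\;\Rightarrow\;
M(<r) \ge \frac{v_{\mathrm{tot}}^{2}\, r}{2G}
\approx \frac{(2\times10^{5}\,\mathrm{m\,s^{-1}})^{2}\,(8.0\times10^{21}\,\mathrm{m})}
             {2 \times 6.67\times10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}}}
\approx 2.4\times10^{42}\,\mathrm{kg} \approx 1.2\times10^{12}\,M_{\odot}.
```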

Journal ArticleDOI
TL;DR: It is shown, by performing molecular dynamics simulations, that a nanopore in a single-layer molybdenum disulfide can effectively reject ions and allow transport of water at a high rate.
Abstract: Efficient desalination of water continues to be a problem facing society. Advances in nanotechnology have led to the development of a variety of nanoporous membranes for water purification. Here we show, by performing molecular dynamics simulations, that a nanopore in a single-layer molybdenum disulfide can effectively reject ions and allow transport of water at a high rate. More than 88% of ions are rejected by membranes having pore areas ranging from 20 to 60 Å². Water flux is found to be two to five orders of magnitude greater than that of other known nanoporous membranes. Pore chemistry is shown to play a significant role in modulating the water flux. Pores with only molybdenum atoms on their edges lead to higher fluxes, which are ∼70% greater than that of graphene nanopores. These observations are explained by permeation coefficients, energy barriers, water density and velocity distributions in the pores.

Journal ArticleDOI
TL;DR: In this article, a robust yield gap analysis for 10 countries in sub-Saharan Africa using location-specific data and a spatial upscaling approach reveals that, in addition to yield gap closure, other more complex and uncertain components of intensification are also needed, i.e., increasing cropping intensity and sustainable expansion of irrigated production area.
Abstract: Although global food demand is expected to increase 60% by 2050 compared with 2005/2007, the rise will be much greater in sub-Saharan Africa (SSA). Indeed, SSA is the region at greatest food security risk because by 2050 its population will increase 2.5-fold and demand for cereals approximately triple, whereas current levels of cereal consumption already depend on substantial imports. At issue is whether SSA can meet this vast increase in cereal demand without greater reliance on cereal imports or major expansion of agricultural area and associated biodiversity loss and greenhouse gas emissions. Recent studies indicate that the global increase in food demand by 2050 can be met through closing the gap between current farm yield and yield potential on existing cropland. Here, however, we estimate it will not be feasible to meet future SSA cereal demand on existing production area by yield gap closure alone. Our agronomically robust yield gap analysis for 10 countries in SSA using location-specific data and a spatial upscaling approach reveals that, in addition to yield gap closure, other more complex and uncertain components of intensification are also needed, i.e., increasing cropping intensity (the number of crops grown per 12 mo on the same field) and sustainable expansion of irrigated production area. If intensification is not successful and massive cropland land expansion is to be avoided, SSA will depend much more on imports of cereals than it does today.
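The accounting behind statements like "yield gap closure alone is not enough" can be made explicit with a simple identity (our notation, not the paper's):

```latex
% Schematic production accounting (our notation). Y_a: actual yield per
% harvest, Y_w: (water-limited) yield potential, A: cropped area,
% CI: cropping intensity (harvests per year), D: cereal demand.
\text{yield gap} = Y_w - Y_a, \qquad
\text{production} \approx A \times CI \times Y_a, \qquad
\text{self-sufficiency ratio} = \frac{A \times CI \times Y_a}{D}.
```

Closing the yield gap raises Y_a toward Y_w, but with demand D roughly tripling, the paper's argument is that the product A × CI must also grow, through higher cropping intensity and expanded irrigated area, unless imports or cropland expansion make up the remainder.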

Journal ArticleDOI
TL;DR: This paper proposes a novel active learning (AL) framework, which is capable of building a competitive classifier with optimal feature representation via a limited amount of labeled training instances in an incremental learning manner and incorporates deep convolutional neural networks into AL.
Abstract: Recent successes in learning-based image classification heavily rely on a large number of annotated training samples, which may require considerable human effort. In this paper, we propose a novel active learning (AL) framework, which is capable of building a competitive classifier with optimal feature representation via a limited amount of labeled training instances in an incremental learning manner. Our approach advances the existing AL methods in two aspects. First, we incorporate deep convolutional neural networks into AL. Through the properly designed framework, the feature representation and the classifier can be simultaneously updated with progressively annotated informative samples. Second, we present a cost-effective sample selection strategy to improve the classification performance with less manual annotation. Unlike traditional methods focusing on only the uncertain samples of low prediction confidence, we especially discover the large amount of high-confidence samples from the unlabeled set for feature learning. Specifically, these high-confidence samples are automatically selected and iteratively assigned pseudolabels. We thus call our framework cost-effective AL (CEAL), standing for the two advantages. Extensive experiments demonstrate that the proposed CEAL framework can achieve promising results on two challenging image classification data sets, i.e., face recognition on the Cross-Age Celebrity Dataset (CACD) and object categorization on Caltech-256.
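The loop described above, sketched with scikit-learn in place of the CNN (our simplification; the paper jointly updates deep features, which a linear model obviously does not): query the least-confident unlabeled samples for human labels, temporarily pseudo-label the most confident ones, and retrain.

```python
# CEAL-style loop sketch: uncertainty sampling + high-confidence pseudo-labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
labeled = rng.choice(len(X), size=30, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

for rnd in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    confidence = clf.predict_proba(X[unlabeled]).max(axis=1)

    # 1) Uncertainty sampling: send the K least-confident samples to the oracle.
    query = unlabeled[np.argsort(confidence)[:20]]
    labeled = np.concatenate([labeled, query])
    unlabeled = np.setdiff1d(unlabeled, query)

    # 2) Pseudo-labeling: temporarily trust high-confidence predictions for the
    #    retraining step (they never enter the human-labeled pool).
    pseudo_mask = clf.predict_proba(X[unlabeled]).max(axis=1) > 0.95
    X_round = np.vstack([X[labeled], X[unlabeled][pseudo_mask]])
    y_round = np.concatenate([y[labeled], clf.predict(X[unlabeled][pseudo_mask])])
    clf = LogisticRegression(max_iter=1000).fit(X_round, y_round)
    print(f"round {rnd}: labeled={len(labeled)}, acc={clf.score(X, y):.3f}")
```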

Posted Content
Qiang Liu1, Dilin Wang1
TL;DR: This paper proposes a general-purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization; it iteratively transports a set of particles to match the target distribution by applying a form of functional gradient descent that minimizes the KL divergence.
Abstract: We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein's identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.
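The particle update is compact enough to write out directly. Below is a minimal NumPy sketch of the kernelized functional-gradient step for a 2-D Gaussian target (our toy setup with a fixed RBF bandwidth; the paper's experiments use real models and an adaptive bandwidth): each particle moves along phi(x) = E_{x'}[ k(x', x) grad log p(x') + grad_{x'} k(x', x) ].

```python
# Minimal Stein-variational-style particle update for a 2-D Gaussian target.
import numpy as np

rng = np.random.default_rng(1)
target_mean = np.array([2.0, -1.0])
target_cov_inv = np.linalg.inv(np.array([[1.0, 0.3], [0.3, 0.5]]))

def grad_log_p(x):                    # score of the Gaussian target, (n, d)
    return -(x - target_mean) @ target_cov_inv

def svgd_step(x, step=0.1, h=0.5):
    diff = x[:, None, :] - x[None, :, :]            # diff[j, i] = x_j - x_i
    sq = (diff ** 2).sum(-1)
    k = np.exp(-sq / (2 * h))                       # RBF kernel matrix (n, n)
    grad_k = -diff / h * k[..., None]               # d k(x_j, x_i) / d x_j
    # phi_i = (1/n) * sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (k @ grad_log_p(x) + grad_k.sum(axis=0)) / len(x)
    return x + step * phi

particles = rng.normal(size=(100, 2))               # start from N(0, I)
for _ in range(500):
    particles = svgd_step(particles)
print("particle mean:", particles.mean(0))          # approaches target_mean
```

The kernel-gradient term acts as a repulsive force between particles, which is what keeps the set spread out over the target instead of collapsing to its mode.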

Posted Content
TL;DR: This article presents a statistical language generator based on a semantically controlled Long Short-Term Memory (LSTM) structure that can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross-entropy training criterion; language variation can be easily achieved by sampling from output candidates.
Abstract: Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.

Journal ArticleDOI
TL;DR: Among patients with locoregional clear-cell renal-cell carcinoma at high risk for tumor recurrence after nephrectomy, the median duration of disease-free survival was significantly longer in the sunitinib group than in the placebo group, at a cost of a higher rate of toxic events.
Abstract: Background Sunitinib, a vascular endothelial growth factor pathway inhibitor, is an effective treatment for metastatic renal-cell carcinoma. We sought to determine the efficacy and safety of sunitinib in patients with locoregional renal-cell carcinoma at high risk for tumor recurrence after nephrectomy. Methods In this randomized, double-blind, phase 3 trial, we assigned 615 patients with locoregional, high-risk clear-cell renal-cell carcinoma to receive either sunitinib (50 mg per day) or placebo on a 4-weeks-on, 2-weeks-off schedule for 1 year or until disease recurrence, unacceptable toxicity, or consent withdrawal. The primary end point was disease-free survival, according to blinded independent central review. Secondary end points included investigator-assessed disease-free survival, overall survival, and safety. Results The median duration of disease-free survival was 6.8 years (95% confidence interval [CI], 5.8 to not reached) in the sunitinib group and 5.6 years (95% CI, 3.8 to 6.6) in the placebo group ...

Journal ArticleDOI
TL;DR: In this article, the impact of reaction rate on catalytic behavior and the operation of gas-diffusion layers for electrocatalytic CO2 reduction are discussed; the authors argue that catalysts should be tested at commercially relevant current densities (∼200 mA cm−2), at which comparatively few have so far been evaluated.
Abstract: Electrocatalytic CO2 reduction has the dual-promise of neutralizing carbon emissions in the near future, while providing a long-term pathway to create energy-dense chemicals and fuels from atmospheric CO2. The field has advanced immensely in recent years, taking significant strides towards commercial realization. Catalyst innovations have played a pivotal role in these advances, with a steady stream of new catalysts providing gains in CO2 conversion efficiencies and selectivities of both C1 and C2 products. Comparatively few of these catalysts have been tested at commercially-relevant current densities (∼200 mA cm−2) due to transport limitations in traditional testing configurations and a research focus on fundamental catalyst kinetics, which are measured at substantially lower current densities. A catalyst's selectivity and activity, however, have been shown to be highly sensitive to the local reaction environment, which changes drastically as a function of reaction rate. As a consequence of this, the surface properties of many CO2 reduction catalysts risk being optimized for the wrong operating conditions. The goal of this perspective is to communicate the substantial impact of reaction rate on catalytic behaviour and the operation of gas-diffusion layers for the CO2 reduction reaction. In brief, this work motivates high current density catalyst testing as a necessary step to properly evaluate materials for electrochemical CO2 reduction, and to accelerate the technology toward its envisioned application of neutralizing CO2 emissions on a global scale.
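To see why ∼200 mA cm−2 is the commercially relevant scale, it helps to convert current density into a chemical production rate via Faraday's law (an illustrative calculation of ours, not taken from the article):

```latex
% Molar production rate per unit electrode area for an n-electron product,
% here the 2-electron reduction of CO2 to CO at j = 200 mA cm^-2, assuming
% 100% Faradaic efficiency.
r = \frac{j}{nF}
  = \frac{0.2\ \mathrm{A\,cm^{-2}}}{2 \times 96485\ \mathrm{C\,mol^{-1}}}
  \approx 1.0\times10^{-6}\ \mathrm{mol\,s^{-1}\,cm^{-2}}
  \approx 3.7\ \mathrm{mmol\,h^{-1}\,cm^{-2}}.
```

Consuming CO2 this quickly at the electrode is far beyond what aqueous diffusion alone can supply, which is why the local reaction environment, and hence catalyst selectivity and activity, changes so strongly between low-current kinetics measurements and gas-diffusion-electrode operation.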

Proceedings Article
01 Jan 2017
TL;DR: This work discusses an implementation of PixelCNNs, a recently proposed class of powerful generative models with tractable likelihood; the implementation contains a number of modifications to the original model that both simplify its structure and improve its performance.
Abstract: PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at this https URL. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.
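For reference, the discretized logistic mixture likelihood of modification (1) takes the following form (as we recall it from the PixelCNN++ formulation; x is a sub-pixel value rescaled to [−1, 1], so each of the 256 intensity bins has half-width 1/255):

```latex
% Probability of the discrete value x under a K-component mixture of
% discretized logistics with weights pi_i, means mu_i, and scales s_i;
% sigma is the logistic sigmoid. The edge bins at x = -1 and x = +1 use the
% corresponding open-ended CDF terms instead.
P(x \mid \pi, \mu, s) = \sum_{i=1}^{K} \pi_i
\left[ \sigma\!\left(\frac{x + \tfrac{1}{255} - \mu_i}{s_i}\right)
     - \sigma\!\left(\frac{x - \tfrac{1}{255} - \mu_i}{s_i}\right) \right].
```

Because the likelihood integrates a continuous density over each intensity bin, gradients are spread across nearby intensity values instead of being concentrated on a single softmax class, which is the source of the reported training speed-up.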

Journal ArticleDOI
TL;DR: This consensus paper aims to improve awareness of the need for early detection and management of children with FH by recommending cascade screening of families using a combined phenotypic and genotypic strategy.
Abstract: Familial hypercholesterolaemia (FH) is a common genetic cause of premature coronary heart disease (CHD). Globally, one baby is born with FH every minute. If diagnosed and treated early in childhood, individuals with FH can have normal life expectancy. This consensus paper aims to improve awareness of the need for early detection and management of children with FH. Familial hypercholesterolaemia is diagnosed either on phenotypic criteria, i.e. an elevated low-density lipoprotein cholesterol (LDL-C) level plus a family history of elevated LDL-C, premature coronary artery disease and/or genetic diagnosis, or positive genetic testing. Childhood is the optimal period for discrimination between FH and non-FH using LDL-C screening. An LDL-C ≥5 mmol/L (190 mg/dL), or an LDL-C ≥4 mmol/L (160 mg/dL) with family history of premature CHD and/or high baseline cholesterol in one parent, makes the phenotypic diagnosis. If a parent has a genetic defect, the LDL-C cut-off for the child is ≥3.5 mmol/L (130 mg/dL). We recommend cascade screening of families using a combined phenotypic and genotypic strategy. In children, testing is recommended from age 5 years, or earlier if homozygous FH is suspected. A healthy lifestyle and statin treatment (from age 8 to 10 years) are the cornerstones of management of heterozygous FH. Target LDL-C is <3.5 mmol/L (130 mg/dL) if >10 years, or ideally 50% reduction from baseline if 8–10 years, especially with very high LDL-C, elevated lipoprotein(a), a family history of premature CHD or other cardiovascular risk factors, balanced against the long-term risk of treatment side effects. Identifying FH early and optimally lowering LDL-C over the lifespan reduces cumulative LDL-C burden and offers health and socioeconomic benefits. To drive policy change for timely detection and management, we call for further studies in the young. Increased awareness, early identification, and optimal treatment from childhood are critical to adding decades of healthy life for children and adolescents with FH.