Journal ArticleDOI
Xiuqiang Li1, Weichao Xu1, Mingyao Tang1, Lin Zhou1, Bin Zhu1, Shining Zhu1, Jia Zhu1 
TL;DR: The energy transfer efficiency of this foldable graphene oxide film-based device, fabricated by a scalable process, is independent of water quantity and can be achieved without optical or thermal supporting systems, thereby significantly improving the scalability and feasibility of this technology toward a complementary portable and personalized water solution.
Abstract: Because it is able to produce desalinated water directly using solar energy with a minimal carbon footprint, solar steam generation and desalination is considered one of the most important technologies for addressing increasingly pressing global water scarcity. Despite tremendous progress in the past few years, efficient solar steam generation and desalination has only been achieved for rather limited water quantities with the assistance of concentrators and thermal insulation, which is not feasible for large-scale applications. The fundamental paradox is that the conventional design of direct absorber−bulk water contact ensures efficient energy transfer and water supply but also incurs intrinsic thermal loss through the bulk water. Here, enabled by a confined 2D water path, we report an efficient (80% under one-sun illumination) and effective (a four-orders-of-magnitude decrease in salinity) solar desalination device. More strikingly, because of minimized heat loss, the high efficiency of solar desalination is independent of the water quantity and can be maintained without thermal insulation of the container. A foldable graphene oxide film, fabricated by a scalable process, serves as an efficient solar absorber (>94% absorption), vapor channel, and thermal insulator. With a unique structure fabricated by scalable processes, and high, stable efficiency achieved under normal solar illumination independent of water quantity and without any supporting systems, our device represents a concrete step for solar desalination to emerge as a complementary portable and personalized clean water solution.
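The 80% figure is the solar-vapor conversion efficiency standard in this literature; for reference, the conventional definition is given below (the symbols are the customary ones, not quoted from the paper):

```latex
% Solar-vapor conversion efficiency (conventional figure of merit):
%   \dot{m}   evaporation mass flux (kg m^-2 s^-1)
%   h_{LV}    total liquid-vapor phase-change enthalpy (J kg^-1)
%   C_{opt}   optical concentration (C_{opt} = 1 under one-sun illumination)
%   q_i       solar irradiance (about 1 kW m^-2 at one sun)
\eta = \frac{\dot{m}\, h_{LV}}{C_{\mathrm{opt}}\, q_i}
```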

888 citations


Journal ArticleDOI
TL;DR: This work demonstrates highly efficient and stable solar cells using a ternary approach, wherein two non-fullerene acceptors are combined with both a scalable and affordable donor polymer, poly(3-hexylthiophene) (P3HT), and a high-efficiency, low-bandgap polymer in a single-layer bulk-heterojunction device.
Abstract: Technological deployment of organic photovoltaic modules requires improvements in device light-conversion efficiency and stability while keeping material costs low. Here we demonstrate highly efficient and stable solar cells using a ternary approach, wherein two non-fullerene acceptors are combined with both a scalable and affordable donor polymer, poly(3-hexylthiophene) (P3HT), and a high-efficiency, low-bandgap polymer in a single-layer bulk-heterojunction device. The addition of a strongly absorbing small molecule acceptor into a P3HT-based non-fullerene blend increases the device efficiency up to 7.7 ± 0.1% without any solvent additives. The improvement is assigned to changes in microstructure that reduce charge recombination and increase the photovoltage, and to improved light harvesting across the visible region. The stability of P3HT-based devices in ambient conditions is also significantly improved relative to polymer:fullerene devices. Combined with a low-bandgap donor polymer (PBDTTT-EFT, also known as PCE10), the two mixed acceptors also lead to solar cells with 11.0 ± 0.4% efficiency and a high open-circuit voltage of 1.03 ± 0.01 V. Ternary organic blends using two non-fullerene acceptors are shown to improve the efficiency and stability of low-cost solar cells based on P3HT and of high-performance photovoltaic devices based on low-bandgap donor polymers.

887 citations


Posted Content
TL;DR: This survey presents a comprehensive review of detecting fake news on social media, including fake news characterizations based on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets, and future research directions for fake news detection on social media.
Abstract: Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research area that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers to believe false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.
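The survey frames detection as classification over news content plus auxiliary social-context signals. A toy sketch of that framing (purely illustrative; the data, feature names, and model choice here are invented, not taken from the survey):

```python
# Toy sketch: news-content text features combined with social-context
# features (e.g., engagement counts) feeding a binary classifier.
# The data, feature names, and model are illustrative only.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["shocking cure doctors hide", "city council passes budget"]
social = np.array([[5000, 0.9],   # [share_count, bot_like_ratio] (invented)
                   [40, 0.1]])
labels = np.array([1, 0])          # 1 = fake, 0 = real

content = TfidfVectorizer().fit_transform(texts)   # content-based features
X = hstack([content, csr_matrix(social)])          # + social-context features
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```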

887 citations


Posted Content
TL;DR: This paper proposes a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments.
Abstract: Large-scale pre-training methods that learn cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute-force manner, in this paper we propose a new learning method, Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected and are often mentioned in the paired text. We pre-train an Oscar model on a public corpus of 6.5 million text-image pairs and fine-tune it on downstream tasks, setting new state-of-the-art results on six well-established vision-language understanding and generation tasks.
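Oscar's input is a triple of word tokens, object tags, and region features; the tags act as anchors because they appear in both modalities. A schematic of assembling that input (the embedders and dimensions below are placeholders, not Oscar's actual code):

```python
# Schematic of Oscar's input triple: (word tokens w, object tags q,
# region features v). Embedders and sizes below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
def embed_text(tokens, dim=768):          # stand-in for a BERT embedder
    return rng.normal(size=(len(tokens), dim))
def embed_regions(n_regions, dim=768):    # stand-in for detector features
    return rng.normal(size=(n_regions, dim))

caption = ["a", "dog", "on", "a", "couch"]
object_tags = ["dog", "couch"]            # detected tags: shared anchors
w = embed_text(caption)                   # language view of the caption
q = embed_text(object_tags)               # language view of the image
v = embed_regions(n_regions=2)            # visual view of the same objects

# The transformer being pre-trained sees the concatenated sequence
# [w; q; v]; tags q align easily with both w (same vocabulary) and
# v (same detected objects), easing cross-modal alignment.
x = np.concatenate([w, q, v], axis=0)
print(x.shape)                            # (9, 768)
```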

887 citations


Journal ArticleDOI
TL;DR: Th thin films of nanosized metal-organic frameworks (MOFs) are introduced as atomically defined and nanoscopic materials that function as catalysts for the efficient and selective reduction of carbon dioxide to carbon monoxide in aqueous electrolytes.
Abstract: A key challenge in the field of electrochemical carbon dioxide reduction is the design of catalytic materials featuring high product selectivity, stability, and a composition of earth-abundant elements. In this work, we introduce thin films of nanosized metal-organic frameworks (MOFs) as atomically defined and nanoscopic materials that function as catalysts for the efficient and selective reduction of carbon dioxide to carbon monoxide in aqueous electrolytes. Detailed examination of a cobalt-porphyrin MOF, Al2(OH)2TCPP-Co (TCPP-H2 = 4,4',4″,4‴-(porphyrin-5,10,15,20-tetrayl)tetrabenzoate) revealed a selectivity for CO production in excess of 76% and stability over 7 h with a per-site turnover number (TON) of 1400. In situ spectroelectrochemical measurements provided insights into the cobalt oxidation state during the course of reaction and showed that the majority of catalytic centers in this MOF are redox-accessible where Co(II) is reduced to Co(I) during catalysis.
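The reported per-site turnover number and run time imply a turnover frequency, shown here as a simple check:

```latex
% Turnover frequency implied by the reported TON and electrolysis time:
\mathrm{TOF} = \frac{\mathrm{TON}}{t}
             = \frac{1400}{7\,\mathrm{h} \times 3600\,\mathrm{s\,h^{-1}}}
             \approx 0.056\ \mathrm{s^{-1}}
```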

887 citations


Proceedings ArticleDOI
21 Jul 2017
TL;DR: SURREAL, as mentioned in this paper, is a large-scale dataset with synthetically generated but realistic images of people rendered from 3D sequences of human motion capture data, which allows for accurate human depth estimation and human part segmentation in real RGB images.
Abstract: Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.

887 citations


Journal ArticleDOI
TL;DR: The present commentary responds to the growing interest in ultra-processed foods among policy makers, academic researchers, health professionals, journalists and consumers who need to devise policies, investigate dietary patterns, advise people, prepare media coverage, or check labels when buying food in shops or at home.
Abstract: The present commentary contains a clear and simple guide designed to identify ultra-processed foods. It responds to the growing interest in ultra-processed foods among policy makers, academic researchers, health professionals, journalists and consumers who need to devise policies, investigate dietary patterns, advise people, prepare media coverage, or check labels when buying food in shops or at home. Ultra-processed foods are defined within the NOVA classification system, which groups foods according to the extent and purpose of industrial processing. Processes enabling the manufacture of ultra-processed foods include the fractioning of whole foods into substances, chemical modifications of these substances, assembly of unmodified and modified food substances, frequent use of cosmetic additives and sophisticated packaging. Processes and ingredients used to manufacture ultra-processed foods are designed to create highly profitable (low-cost ingredients, long shelf-life, emphatic branding), convenient (ready-to-consume), hyper-palatable products liable to displace all other NOVA food groups, notably unprocessed or minimally processed foods. A practical way to identify an ultra-processed product is to check to see if its list of ingredients contains at least one item characteristic of the NOVA ultra-processed food group, which is to say, either food substances never or rarely used in kitchens (such as high-fructose corn syrup, hydrogenated or interesterified oils, and hydrolysed proteins), or classes of additives designed to make the final product palatable or more appealing (such as flavours, flavour enhancers, colours, emulsifiers, emulsifying salts, sweeteners, thickeners, and anti-foaming, bulking, carbonating, foaming, gelling and glazing agents).
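The practical identification rule described above amounts to a membership test over a product's ingredient list. A minimal sketch, with the marker lists abbreviated to the examples named in the abstract:

```python
# Minimal sketch of the NOVA rule stated above: a product is flagged
# ultra-processed if its ingredient list contains at least one marker.
# Marker lists are abbreviated to the examples named in the abstract.
MARKERS = {
    "high-fructose corn syrup", "hydrogenated oil", "interesterified oil",
    "hydrolysed protein",                          # rarely used in kitchens
    "flavour", "flavour enhancer", "colour", "emulsifier",
    "emulsifying salt", "sweetener", "thickener",  # cosmetic additive classes
}

def is_ultra_processed(ingredients):
    """Return True if any ingredient matches a NOVA marker."""
    return any(any(m in item.lower() for m in MARKERS) for item in ingredients)

print(is_ultra_processed(["water", "sugar", "flavour enhancer (E621)"]))  # True
print(is_ultra_processed(["whole wheat flour", "water", "salt"]))         # False
```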

887 citations


Journal ArticleDOI
03 Jul 2015-Science
TL;DR: In this paper, a topological insulator is characterized by a dichotomy between the interior and the edge of a finite system: the bulk has an energy gap, and the edges sustain excitations traversing this gap.
Abstract: A topological insulator, as originally proposed for electrons governed by quantum mechanics, is characterized by a dichotomy between the interior and the edge of a finite system: The bulk has an energy gap, and the edges sustain excitations traversing this gap. However, it has remained an open question whether the same physics can be observed for systems obeying Newton’s equations of motion. We conducted experiments to characterize the collective behavior of mechanical oscillators exhibiting the phenomenology of the quantum spin Hall effect. The phononic edge modes are shown to be helical, and we demonstrate their topological protection via the stability of the edge states against imperfections. Our results may enable the design of topological acoustic metamaterials that can capitalize on the stability of the surface phonons as reliable wave guides.

887 citations


Journal ArticleDOI
TL;DR: In this article, an equiatomic CoCrFeMnNi high-entropy alloy (HEA), produced by arc melting and drop casting, was subjected to severe plastic deformation (SPD) using high-pressure torsion.

887 citations


Journal ArticleDOI
TL;DR: In this paper, an overview is given of some of the most important techniques available to tackle the dynamics of an OQS beyond the Markov approximation, which requires a large separation of system and environment time scales.
Abstract: Open quantum systems (OQSs) cannot always be described with the Markov approximation, which requires a large separation of system and environment time scales. An overview is given of some of the most important techniques available to tackle the dynamics of an OQS beyond the Markov approximation. Some of these techniques, such as master equations, Heisenberg equations, and stochastic methods, are based on solving the reduced OQS dynamics, while others, such as path integral Monte Carlo or chain mapping approaches, are based on solving the dynamics of the full system. The physical interpretation and derivation of the various approaches are emphasized, how they are connected is explored, and how different methods may be suitable for solving different problems is examined.
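For reference, the Markovian baseline that the reviewed techniques go beyond is the GKSL (Lindblad) master equation for the reduced density matrix:

```latex
% GKSL (Lindblad) master equation: the Markovian reference point that
% the reviewed non-Markovian techniques generalize. H is the system
% Hamiltonian, L_k the jump operators, \gamma_k the decay rates.
\frac{d\rho}{dt} = -\frac{i}{\hbar}[H,\rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\{ L_k^{\dagger} L_k , \rho \} \right)
```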

887 citations


Journal ArticleDOI
TL;DR: A metagenome-wide association study on stools from individuals with atherosclerotic cardiovascular disease and healthy controls is performed, identifying microbial strains and functions associated with the disease.
Abstract: The gut microbiota has been linked to cardiovascular diseases. However, the composition and functional capacity of the gut microbiome in relation to cardiovascular diseases have not been systematically examined. Here, we perform a metagenome-wide association study on stools from 218 individuals with atherosclerotic cardiovascular disease (ACVD) and 187 healthy controls. The ACVD gut microbiome deviates from the healthy status by increased abundance of Enterobacteriaceae and Streptococcus spp. and, functionally, in the potential for metabolism or transport of several molecules important for cardiovascular health. Although drug treatment represents a confounding factor, ACVD status, and not current drug use, is the major distinguishing feature in this cohort. We identify common themes by comparison with gut microbiome data associated with other cardiometabolic diseases (obesity and type 2 diabetes), with liver cirrhosis, and rheumatoid arthritis. Our data represent a comprehensive resource for further investigations on the role of the gut microbiome in promoting or preventing ACVD as well as other related diseases. The gut microbiota may play a role in cardiovascular diseases. Here, the authors perform a metagenome-wide association study on stools from individuals with atherosclerotic cardiovascular disease and healthy controls, identifying microbial strains and functions associated with the disease.

Proceedings Article
25 Jan 2015
TL;DR: A new non-parametric calibration method called Bayesian Binning into Quantiles (BBQ) is presented which addresses key limitations of existing calibration methods and can be readily combined with many existing classification algorithms.
Abstract: Learning probabilistic predictive models that are well calibrated is critical for many prediction and decision-making tasks in artificial intelligence. In this paper we present a new non-parametric calibration method called Bayesian Binning into Quantiles (BBQ) which addresses key limitations of existing calibration methods. The method post-processes the output of a binary classification algorithm; thus, it can be readily combined with many existing classification algorithms. The method is computationally tractable and empirically accurate, as evidenced by the set of experiments reported here on both real and simulated datasets.
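BBQ averages over many binning schemes in a Bayesian fashion; plain histogram binning, its simplest fixed-bin relative, already illustrates the post-processing idea. The sketch below is illustrative only, not the BBQ algorithm itself:

```python
# Histogram binning: a stripped-down, fixed-bin relative of BBQ, shown
# only to illustrate post-processing a classifier's scores. BBQ itself
# Bayesian-averages over many such binnings.
import numpy as np

def fit_bins(scores, labels, n_bins=10):
    """Learn per-bin calibrated probabilities from held-out data."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    cal = np.array([labels[idx == b].mean() if np.any(idx == b)
                    else 0.5 * (edges[b] + edges[b + 1])
                    for b in range(n_bins)])
    return edges, cal

def calibrate(scores, edges, cal):
    """Map raw scores to the empirical positive rate of their bin."""
    idx = np.clip(np.digitize(scores, edges) - 1, 0, len(cal) - 1)
    return cal[idx]

rng = np.random.default_rng(0)
raw = rng.uniform(size=1000)                         # uncalibrated scores
y = (rng.uniform(size=1000) < raw**2).astype(float)  # true P(y=1|s) = s^2
edges, cal = fit_bins(raw, y)
print(calibrate(np.array([0.2, 0.8]), edges, cal))   # calibrated estimates
```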

Journal ArticleDOI
01 Oct 2015-Gut
TL;DR: This review highlights issues to consider when implementing a CRC screening programme and gives a worldwide overview of CRC burden and the current status of screening programmes, with focus on international differences.
Abstract: Colorectal cancer (CRC) ranks third among the most commonly diagnosed cancers worldwide, with wide geographical variation in incidence and mortality across the world. Despite proof that screening can decrease CRC incidence and mortality, CRC screening is only offered to a small proportion of the target population worldwide. Throughout the world there are widespread differences in CRC screening implementation status and strategy. Differences can be attributed to geographical variation in CRC incidence, economic resources, healthcare structure and infrastructure to support screening such as the ability to identify the target population at risk and cancer registry availability. This review highlights issues to consider when implementing a CRC screening programme and gives a worldwide overview of CRC burden and the current status of screening programmes, with focus on international differences.

Journal ArticleDOI
TL;DR: In this paper, the effects of ecological memory on post-disturbance dynamics imply that contingencies (effects that cannot be predicted with certainty) of individual disturbances, interactions among disturbances, and climate variability combine to affect ecosystem resilience.
Abstract: Ecological memory is central to how ecosystems respond to disturbance and is maintained by two types of legacies – information and material. Species life-history traits represent an adaptive response to disturbance and are an information legacy; in contrast, the abiotic and biotic structures (such as seeds or nutrients) produced by single disturbance events are material legacies. Disturbance characteristics that support or maintain these legacies enhance ecological resilience and maintain a “safe operating space” for ecosystem recovery. However, legacies can be lost or diminished as disturbance regimes and environmental conditions change, generating a “resilience debt” that manifests only after the system is disturbed. Strong effects of ecological memory on post-disturbance dynamics imply that contingencies (effects that cannot be predicted with certainty) of individual disturbances, interactions among disturbances, and climate variability combine to affect ecosystem resilience. We illustrate these concepts and introduce a novel ecosystem resilience framework with examples of forest disturbances, primarily from North America. Identifying legacies that support resilience in a particular ecosystem can help scientists and resource managers anticipate when disturbances may trigger abrupt shifts in forest ecosystems, and when forests are likely to be resilient.

Journal ArticleDOI
25 Feb 2019-Nature
TL;DR: Results suggest that the origin of the observed effects is interlayer excitons trapped in a smooth moiré potential with inherited valley-contrasting physics, which presents opportunities to control two-dimensional moiré optics through variation of the twist angle.
Abstract: The formation of moire patterns in crystalline solids can be used to manipulate their electronic properties, which are fundamentally influenced by periodic potential landscapes. In two-dimensional materials, a moire pattern with a superlattice potential can be formed by vertically stacking two layered materials with a twist and/or a difference in lattice constant. This approach has led to electronic phenomena including the fractal quantum Hall effect1–3, tunable Mott insulators4,5 and unconventional superconductivity6. In addition, theory predicts that notable effects on optical excitations could result from a moire potential in two-dimensional valley semiconductors7–9, but these signatures have not been detected experimentally. Here we report experimental evidence of interlayer valley excitons trapped in a moire potential in molybdenum diselenide (MoSe2)/tungsten diselenide (WSe2) heterobilayers. At low temperatures, we observe photoluminescence close to the free interlayer exciton energy but with linewidths over one hundred times narrower (around 100 microelectronvolts). The emitter g-factors are homogeneous across the same sample and take only two values, −15.9 and 6.7, in samples with approximate twist angles of 60 degrees and 0 degrees, respectively. The g-factors match those of the free interlayer exciton, which is determined by one of two possible valley-pairing configurations. At twist angles of approximately 20 degrees the emitters become two orders of magnitude dimmer; however, they possess the same g-factor as the heterobilayer at a twist angle of approximately 60 degrees. This is consistent with the umklapp recombination of interlayer excitons near the commensurate 21.8-degree twist angle7. The emitters exhibit strong circular polarization of the same helicity for a given twist angle, which suggests that the trapping potential retains three-fold rotational symmetry. Together with a characteristic dependence on power and excitation energy, these results suggest that the origin of the observed effects is interlayer excitons trapped in a smooth moire potential with inherited valley-contrasting physics. This work presents opportunities to control two-dimensional moire optics through variation of the twist angle. The trapping of interlayer valley excitons in a moire potential formed by a molybdenum diselenide/tungsten diselenide heterobilayer with twist angle control is reported.
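For context, the moiré superlattice period underlying these observations is commonly approximated as follows (this expression is not stated in the abstract; a is the monolayer lattice constant, δ the fractional lattice mismatch, θ the twist angle in radians):

```latex
% Commonly quoted small-angle approximation for the moire superlattice
% period of a heterobilayer; valid for small \delta and \theta.
\lambda_{m} \approx \frac{a}{\sqrt{\delta^{2} + \theta^{2}}}
```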

Journal ArticleDOI
TL;DR: A decade-long multinational ‘catch reconstruction’ project covering the Exclusive Economic Zones of the world's maritime countries and the High Seas from 1950 to 2010, and accounting for all fisheries, suggests that catch actually peaked at 130 million tonnes, and has been declining much more strongly since.
Abstract: Fisheries data assembled by the Food and Agriculture Organization (FAO) suggest that global marine fisheries catches increased to 86 million tonnes in 1996, then slightly declined. Here, using a decade-long multinational ‘catch reconstruction’ project covering the Exclusive Economic Zones of the world's maritime countries and the High Seas from 1950 to 2010, and accounting for all fisheries, we identify catch trajectories differing considerably from the national data submitted to the FAO. We suggest that catch actually peaked at 130 million tonnes, and has been declining much more strongly since. This decline in reconstructed catches reflects declines in industrial catches and, to a smaller extent, declining discards, despite industrial fishing having expanded from industrialized countries to the waters of developing countries. The differing trajectories documented here suggest a need for improved monitoring of all fisheries, including often neglected small-scale fisheries, and illegal and other problematic fisheries, as well as discarded bycatch.

Journal ArticleDOI
TL;DR: In patients with CKD and type 2 diabetes, treatment with finerenone resulted in lower risks of CKD progression and cardiovascular events than placebo, and the frequency of adverse events was similar in the two groups.
Abstract: Background Finerenone, a nonsteroidal, selective mineralocorticoid receptor antagonist, reduced albuminuria in short-term trials involving patients with chronic kidney disease (CKD) and type 2 diabetes.

Journal ArticleDOI
TL;DR: Germline mutations in cancer-predisposing genes were identified in 8.5% of the children and adolescents with cancer, and family history did not predict the presence of an underlying predisposition syndrome in most patients.
Abstract: Background The prevalence and spectrum of predisposing mutations among children and adolescents with cancer are largely unknown. Knowledge of such mutations may improve the understanding of tumorigenesis, direct patient care, and enable genetic counseling of patients and families. Methods In 1120 patients younger than 20 years of age, we sequenced the whole genomes (in 595 patients), whole exomes (in 456), or both (in 69). We analyzed the DNA sequences of 565 genes, including 60 that have been associated with autosomal dominant cancer-predisposition syndromes, for the presence of germline mutations. The pathogenicity of the mutations was determined by a panel of medical experts with the use of cancer-specific and locus-specific genetic databases, the medical literature, computational predictions, and second hits identified in the tumor genome. The same approach was used to analyze data from 966 persons who did not have known cancer in the 1000 Genomes Project, and a similar approach was used to analyze data...

Proceedings ArticleDOI
04 Apr 2017
TL;DR: Kd-Net, as discussed by the authors, is designed for 3D model recognition tasks and works with unstructured point clouds; it performs multiplicative transformations and shares the parameters of these transformations according to the subdivisions of the point clouds imposed onto them by kd-trees.
Abstract: We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and shares parameters of these transformations according to the subdivisions of the point clouds imposed onto them by kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behavior. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.
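A schematic of the core computation, i.e. bottom-up feature merging along kd-tree splits with transformation parameters shared across nodes by depth and split axis (the feature sizes and sharing granularity here are simplified relative to the paper):

```python
# Schematic of a Kd-network forward pass: leaf features are merged
# bottom-up along kd-tree splits; the affine transformation at each
# internal node is shared across nodes with the same depth and split
# axis. Sizes and sharing granularity are simplified vs. the paper.
import numpy as np

rng = np.random.default_rng(0)
DIM, FEAT = 3, 16
DEPTH = 3  # kd-tree over 2**DEPTH = 8 points
# One weight matrix per (depth, split axis): maps the concatenation of
# two child features to the parent feature.
W = {(d, ax): rng.normal(scale=0.1, size=(2 * FEAT, FEAT))
     for d in range(DEPTH) for ax in range(DIM)}

def kdnet(points, depth=0):
    if len(points) == 1:                       # leaf: lift the raw point
        f = np.zeros(FEAT); f[:DIM] = points[0]
        return f
    ax = int(np.argmax(points.var(axis=0)))    # split along widest axis
    order = np.argsort(points[:, ax])
    half = len(points) // 2
    left = kdnet(points[order[:half]], depth + 1)
    right = kdnet(points[order[half:]], depth + 1)
    return np.maximum(np.concatenate([left, right]) @ W[(depth, ax)], 0)

cloud = rng.normal(size=(8, DIM))              # unstructured point cloud
print(kdnet(cloud).shape)                      # (16,) global shape descriptor
```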

Journal ArticleDOI
30 Jan 2020-Science
TL;DR: Hidden fluid mechanics (HFM), a physics-informed deep-learning framework capable of encoding the Navier-Stokes equations into the neural networks while being agnostic to the geometry or the initial and boundary conditions, is developed.
Abstract: For centuries, flow visualization has been the art of making fluid motion visible in physical and biological systems. Although such flow patterns can be, in principle, described by the Navier-Stokes equations, extracting the velocity and pressure fields directly from the images is challenging. We addressed this problem by developing hidden fluid mechanics (HFM), a physics-informed deep-learning framework capable of encoding the Navier-Stokes equations into the neural networks while being agnostic to the geometry or the initial and boundary conditions. We demonstrate HFM for several physical and biomedical problems by extracting quantitative information for which direct measurements may not be possible. HFM is robust to low resolution and substantial noise in the observation data, which is important for potential applications.
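The central mechanism is a residual loss obtained by differentiating the network with automatic differentiation. Below is a hedged sketch for the 2D incompressible Navier-Stokes equations; this is the generic physics-informed residual construction, not the authors' full HFM formulation, which additionally transports a passive scalar observed in the flow images:

```python
# Sketch of the physics-informed residual idea behind HFM, written for
# the 2D incompressible Navier-Stokes equations. Generic construction,
# not the authors' exact formulation.
import torch

net = torch.nn.Sequential(                    # (x, y, t) -> (u, v, p)
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3))

def ns_residuals(xyt, nu=0.01):
    xyt = xyt.clone().requires_grad_(True)
    u, v, p = net(xyt).unbind(dim=1)
    def grad(f):                              # df/d(x, y, t) via autodiff
        return torch.autograd.grad(f, xyt, torch.ones_like(f),
                                   create_graph=True)[0]
    du, dv, dp = grad(u), grad(v), grad(p)
    u_x, u_y, u_t = du.unbind(dim=1)
    v_x, v_y, v_t = dv.unbind(dim=1)
    p_x, p_y, _ = dp.unbind(dim=1)
    u_xx, u_yy = grad(u_x)[:, 0], grad(u_y)[:, 1]
    v_xx, v_yy = grad(v_x)[:, 0], grad(v_y)[:, 1]
    r_u = u_t + u * u_x + v * u_y + p_x - nu * (u_xx + u_yy)  # x-momentum
    r_v = v_t + u * v_x + v * v_y + p_y - nu * (v_xx + v_yy)  # y-momentum
    r_c = u_x + v_y                                           # continuity
    return (r_u**2 + r_v**2 + r_c**2).mean()  # penalize PDE violation

loss = ns_residuals(torch.rand(128, 3))       # add a data term, then backprop
print(float(loss))
```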

Posted Content
TL;DR: Wang et al. as discussed by the authors proposed a 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets.
Abstract: We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.
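A schematic of the generator side (the 200-dimensional latent and 64^3 voxel output follow the paper's description; the specific layer widths and kernel sizes below are assumptions):

```python
# Schematic 3D-GAN generator: latent vector z -> 64^3 voxel occupancy
# grid via volumetric (3D) transposed convolutions. The 200-d latent
# and 64^3 output follow the paper; widths/kernels are assumptions.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    def __init__(self, z_dim=200):
        super().__init__()
        def up(cin, cout, stride=2, pad=1):
            return nn.Sequential(
                nn.ConvTranspose3d(cin, cout, 4, stride, pad),
                nn.BatchNorm3d(cout), nn.ReLU(inplace=True))
        self.net = nn.Sequential(
            up(z_dim, 512, stride=1, pad=0),     # 1^3  -> 4^3
            up(512, 256),                        # 4^3  -> 8^3
            up(256, 128),                        # 8^3  -> 16^3
            up(128, 64),                         # 16^3 -> 32^3
            nn.ConvTranspose3d(64, 1, 4, 2, 1),  # 32^3 -> 64^3
            nn.Sigmoid())                        # per-voxel occupancy

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

g = Generator3D()
voxels = g(torch.randn(2, 200))   # sample shapes with no reference image
print(voxels.shape)               # torch.Size([2, 1, 64, 64, 64])
```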

Journal ArticleDOI
08 Apr 2016-Science
TL;DR: Research focused on elucidating the mechanisms by which the hypoxic tumor microenvironment promotes metastatic progression is reviewed to identify potential biomarkers and therapeutic targets regulated by hypoxia that could be incorporated into strategies aimed at preventing and treating metastatic disease.
Abstract: Metastatic disease is the leading cause of cancer-related deaths and involves critical interactions between tumor cells and the microenvironment. Hypoxia is a potent microenvironmental factor promoting metastatic progression. Clinically, hypoxia and the expression of the hypoxia-inducible transcription factors HIF-1 and HIF-2 are associated with increased distant metastasis and poor survival in a variety of tumor types. Moreover, HIF signaling in malignant cells influences multiple steps within the metastatic cascade. Here we review research focused on elucidating the mechanisms by which the hypoxic tumor microenvironment promotes metastatic progression. These studies have identified potential biomarkers and therapeutic targets regulated by hypoxia that could be incorporated into strategies aimed at preventing and treating metastatic disease.

Journal ArticleDOI
TL;DR: FastME as discussed by the authors is based on balanced minimum evolution, which is the very principle of Neighbor Joining (NJ). FastME improves over NJ by performing topological moves using fast, sophisticated algorithms.
Abstract: FastME provides distance algorithms to infer phylogenies. FastME is based on balanced minimum evolution, which is the very principle of Neighbor Joining (NJ). FastME improves over NJ by performing topological moves using fast, sophisticated algorithms. The first version of FastME only included Nearest Neighbor Interchange. The new 2.0 version also includes Subtree Pruning and Regrafting, while remaining as fast as NJ and providing a number of facilities: Distance estimation for DNA and proteins with various models and options, bootstrapping, and parallel computations. FastME is available using several interfaces: Command-line (to be integrated in pipelines), PHYLIP-like, and a Web server (http://www.atgc-montpellier.fr/fastme/).
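The balanced minimum evolution criterion mentioned above scores a candidate topology with Pauplin's weighted sum of pairwise distances, stated here for reference:

```latex
% Pauplin's balanced tree-length estimate underlying BME: a topology T
% is scored from the pairwise distance matrix (\delta_{ij}) alone;
% \tau_{ij} is the topological distance (edge count) between leaves
% i and j. NNI and SPR moves search for the minimizing topology.
\ell(T) = \sum_{i<j} 2^{\,1-\tau_{ij}}\, \delta_{ij},
\qquad T^{*} = \arg\min_{T} \ell(T)
```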

Posted Content
TL;DR: In this paper, the authors introduce the notion of an effective receptive field, and show that it both has a Gaussian distribution and only occupies a fraction of the full theoretical receptive field.
Abstract: We study characteristics of receptive fields of units in deep convolutional networks. The receptive field size is a crucial issue in many visual tasks, as the output must respond to large enough areas in the image to capture information about large objects. We introduce the notion of an effective receptive field, and show that it both has a Gaussian distribution and only occupies a fraction of the full theoretical receptive field. We analyze the effective receptive field in several architecture designs, and the effect of nonlinear activations, dropout, sub-sampling and skip connections on it. This leads to suggestions for ways to address its tendency to be too small.
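The effective receptive field is measured by backpropagating from a single central output unit to the input. A minimal sketch of that measurement on a random-weight network (the simplest setting the paper analyzes):

```python
# Minimal sketch of measuring an effective receptive field: backprop
# from the center output unit to the input and inspect the gradient
# magnitude. With many stacked convolutions the gradient concentrates
# in a Gaussian-like blob much smaller than the theoretical RF.
import torch
import torch.nn as nn

layers = [nn.Conv2d(1, 1, 3, padding=1, bias=False) for _ in range(10)]
net = nn.Sequential(*layers)                 # theoretical RF: 21x21

x = torch.zeros(1, 1, 64, 64, requires_grad=True)
y = net(x)
y[0, 0, 32, 32].backward()                   # seed gradient at center unit
erf = x.grad[0, 0].abs()

# Fraction of gradient mass in the central 11x11 vs. the full input:
inner = erf[32-5:32+6, 32-5:32+6].sum() / erf.sum()
print(f"gradient mass in central 11x11: {float(inner):.2f}")
```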

Journal ArticleDOI
TL;DR: Dynamic Nested Sampling, as discussed by the authors, adaptively allocates samples based on posterior structure; it has the benefits of Markov Chain Monte Carlo algorithms that focus exclusively on posterior estimation while retaining nested sampling's ability to estimate evidences and sample from complex, multi-modal distributions.
Abstract: We present dynesty, a public, open-source, Python package to estimate Bayesian posteriors and evidences (marginal likelihoods) using Dynamic Nested Sampling. By adaptively allocating samples based on posterior structure, Dynamic Nested Sampling has the benefits of Markov Chain Monte Carlo algorithms that focus exclusively on posterior estimation while retaining Nested Sampling's ability to estimate evidences and sample from complex, multi-modal distributions. We provide an overview of Nested Sampling, its extension to Dynamic Nested Sampling, the algorithmic challenges involved, and the various approaches taken to solve them. We then examine dynesty's performance on a variety of toy problems along with several astronomical applications. We find that in particular problems dynesty can provide substantial improvements in sampling efficiency compared to popular MCMC approaches in the astronomical literature. More detailed statistical results related to Nested Sampling are also included in the Appendix.
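For context, a minimal usage example on a toy Gaussian problem, based on dynesty's documented sampler interface:

```python
# Minimal dynesty usage on a toy problem: a 2-D Gaussian likelihood with
# a uniform prior on [-5, 5]^2. Priors are specified as a transform from
# the unit cube, as nested sampling requires.
import numpy as np
from dynesty import DynamicNestedSampler

def loglike(theta):
    return -0.5 * np.sum(theta**2)            # unnormalized Gaussian

def prior_transform(u):
    return 10.0 * u - 5.0                     # unit cube -> [-5, 5]^2

sampler = DynamicNestedSampler(loglike, prior_transform, ndim=2)
sampler.run_nested()                          # adaptive sample allocation
res = sampler.results
print(res.logz[-1], res.logzerr[-1])          # evidence estimate +/- error
```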

15 Jan 2015
TL;DR: Birth rates declined for women in their 20s and increased for most age groups of women aged 30 and over in 2013, and measures of unmarried childbearing were down in 2013 from 2012.
Abstract: Objectives This report presents 2013 data on US births according to a wide variety of characteristics. Data are presented for maternal age, live-birth order, race and Hispanic origin, marital status, attendant at birth, method of delivery, period of gestation, birthweight, and plurality. Birth and fertility rates are presented by age, live-birth order, race and Hispanic origin, and marital status. Selected data by mother's state of residence and birth rates by age and race of father also are shown. Trends in fertility patterns and maternal and infant characteristics are described and interpreted. Methods Descriptive tabulations of data reported on the birth certificates of the 3.93 million US births that occurred in 2013 are presented. Results A total of 3,932,181 births were registered in the United States in 2013, down less than 1% from 2012. The general fertility rate declined to 62.5 per 1,000 women aged 15-44. The teen birth rate fell 10%, to 26.5 per 1,000 women aged 15-19. Birth rates declined for women in their 20s and increased for most age groups of women aged 30 and over. The total fertility rate (estimated number of births over a woman's lifetime) declined 1% to 1,857.5 per 1,000 women. Measures of unmarried childbearing were down in 2013 from 2012. The cesarean delivery rate declined to 32.7%. The preterm birth rate declined for the seventh straight year to 11.39%, but the low birthweight rate was essentially unchanged at 8.02%. The twin birth rate rose 2% to 33.7 per 1,000 births; the triplet and higher-order multiple birth rate dropped 4% to 119.5 per 100,000 total births.

Journal ArticleDOI
05 Mar 2019-JAMA
TL;DR: This initiative will leverage critical scientific advances in HIV prevention, diagnosis, treatment, and care by coordinating the highly successful programs, resources, and infrastructure of the CDC, the National Institutes of Health, the Health Resources and Services Administration, the Substance Abuse and Mental Health Services Administration (SAMHSA), and the Indian Health Service (IHS).
Abstract: In the State of the Union Address on February 5, 2019, President Donald J. Trump announced his administration’s goal to end the HIV epidemic in the United States within 10 years. The president’s budget will ask Republicans and Democrats to make the needed commitment to support a concrete plan to achieve this goal. While landmark biomedical and scientific research advances have led to the development of many successful HIV treatment regimens, prevention strategies, and improved care for persons with HIV, the HIV pandemic remains a public health crisis in the United States and globally. In the United States, more than 700 000 people have died as a result of HIV/AIDS since the disease was first recognized in 1981, and the Centers for Disease Control and Prevention (CDC) estimates that 1.1 million people are currently living with HIV, about 15% of whom are unaware of their HIV infection.1 Approximately 23% of new infections are transmitted by individuals who are unaware of their infection and approximately 69% of new infections are transmitted by those who are diagnosed with HIV infection but who are not in care.2 In 2017, more than 38 000 people were diagnosed with HIV in the United States. The majority of these cases were among young black/African American and Hispanic/Latino men who have sex with men (MSM). In addition, there was high incidence of HIV among transgender individuals, high-risk heterosexuals, and persons who inject drugs.1 This public health issue is also connected to the broader opioid crisis: 2015 marked the first time in 2 decades that the number of HIV cases attributed to drug injection increased.3 Of particular note, more than half of the new HIV diagnoses were reported in southern states and Washington, DC. During 2016 and 2017, of the 3007 counties in the United States, half of new HIV diagnoses were concentrated in 48 “hotspot” counties, Washington, DC, and Puerto Rico.4 The US Department of Health and Human Services (HHS) has proposed a new initiative to address this ongoing public health crisis with the goals of first reducing numbers of incident infections in the United States by 75% within 5 years, and then by 90% within 10 years. This initiative will leverage critical scientific advances in HIV prevention, diagnosis, treatment, and care by coordinating the highly successful programs, resources, and infrastructure of the CDC, the National Institutes of Health (NIH), the Health Resources and Services Administration (HRSA), the Substance Abuse and Mental Health Services Administration (SAMHSA), and the Indian Health Service (IHS). The initial phase, coordinated by the HHS Office of the Assistant Secretary of Health, will focus on geographic and demographic hotspots in 19 states, Washington, DC, and Puerto Rico, where the majority of the new HIV cases are reported, as well as in 7 states with a disproportionate occurrence of HIV in rural areas (eFigure in the Supplement). The strategic initiative includes 4 pillars: 1. diagnose all individuals with HIV as early as possible after infection; 2. treat HIV infection rapidly and effectively to achieve sustained viral suppression; 3. prevent at-risk individuals from acquiring HIV infection, including the use of pre-exposure prophylaxis (PrEP); and 4. rapidly detect and respond to emerging clusters of HIV infection to further reduce new transmissions. 
A key component for the success of this initiative is active partnerships with city, county, and state public health departments, local and regional clinics and health care facilities, clinicians, providers of medication-assisted treatment for opioid use disorder, and community- and faith-based organizations. The implementation of advances in HIV research achieved over 4 decades will be essential to achieving the goals of the initiative. Clinical studies serve as the scientific basis for strategies to prevent HIV transmission/acquisition. In this regard, as reviewed in a recent Viewpoint in JAMA,5 large clinical studies have recently proven the concept of undetectable = untransmittable (U = U), which has broad public health implications for HIV prevention and treatment at both the individual and societal level. U = U means that individuals with HIV who receive antiretroviral therapy (ART) and achieve and maintain an undetectable viral load do not sexually transmit HIV to others.5 U = U will be invaluable in helping to counteract the stigma associated with HIV, and this initiative will create environments in which all people, no matter their cultural background or risk profile, feel welcome for prevention and treatment services. Results from numerous clinical trials have led to significant advances in the treatment of HIV infection, such that a person living with HIV who is properly treated and adherent with therapy can expect to achieve a nearly normal lifespan. This progress is due to antiviral drug combinations drawn from more than 30 agents approved by the US Food and Drug Administration (FDA), as well as medications for the prevention and treatment regimens of HIV-associated coinfections and comorbidities. Furthermore, PrEP with a daily regimen of 2 oral antiretroviral drugs in a single pill has proven to be highly effective in preventing HIV infection for individuals at high risk. In addition, postexposure prophylaxis provides a highly effective…

Journal ArticleDOI
TL;DR: It is indicated that the combined action of microbe-microbe and host-microbe interactions drives microbiota differentiation at the root-soil interface.

Journal ArticleDOI
TL;DR: This is the first systematic review to assess and summarize the clinical features and management of children with SARS-CoV-2 infection; bronchial thickening and ground-glass opacities were the main radiologic features, including in asymptomatic patients.
Abstract: Importance The current rapid worldwide spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection justifies the global effort to identify effective preventive strategies and optimal medical management. While data are available for adult patients with coronavirus disease 2019 (COVID-19), limited reports have analyzed pediatric patients infected with SARS-CoV-2. Objective To evaluate currently reported pediatric cases of SARS-CoV-2 infection. Evidence Review An extensive search strategy was designed to retrieve all articles published from December 1, 2019, to March 3, 2020, by combining the terms "coronavirus" and "coronavirus infection" in several electronic databases (PubMed, Cochrane Library, and CINAHL), and following the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines. Retrospective cross-sectional and case-control studies, case series and case reports, bulletins, and national reports about the pediatric SARS-CoV-2 infection were included. The risk of bias for eligible observational studies was assessed according to the Strengthening the Reporting of Observational Studies in Epidemiology reporting guideline. Findings A total of 815 articles were identified. Eighteen studies with 1065 participants (444 patients were younger than 10 years, and 553 were aged 10 to 19 years) with confirmed SARS-CoV-2 infection were included in the final analysis. All articles reflected research performed in China, except for 1 clinical case in Singapore. Children at any age were mostly reported to have mild respiratory symptoms, namely fever, dry cough, and fatigue, or were asymptomatic. Bronchial thickening and ground-glass opacities were the main radiologic features, and these findings were also reported in asymptomatic patients. Among the included articles, there was only 1 case of severe COVID-19 infection, which occurred in a 13-month-old infant. No deaths were reported in children aged 0 to 9 years. Available data about therapies were limited. Conclusions and Relevance To our knowledge, this is the first systematic review that assesses and summarizes clinical features and management of children with SARS-CoV-2 infection. The rapid spread of COVID-19 across the globe and the lack of European and US data on pediatric patients require further epidemiologic and clinical studies to identify possible preventive and therapeutic strategies.