
Journal ArticleDOI
TL;DR: A new set of parton distributions, NNPDF3.1, is presented, which updates NNPDF3.0, the first global set of PDFs determined using a methodology validated by a closure test; the impact of parametrizing charm is investigated, with evidence that the accuracy and stability of the PDFs are thereby improved.
Abstract: We present a new set of parton distributions, NNPDF3.1, which updates NNPDF3.0, the first global set of PDFs determined using a methodology validated by a closure test. The update is motivated by recent progress in methodology and available data, and involves both. On the methodological side, we now parametrize and determine the charm PDF alongside the light quarks and gluon ones, thereby increasing from seven to eight the number of independent PDFs. On the data side, we now include the D0 electron and muon W asymmetries from the final Tevatron dataset, the complete LHCb measurements of W and Z production in the forward region at 7 and 8 TeV, and new ATLAS and CMS measurements of inclusive jet and electroweak boson production. We also include for the first time top-quark pair differential distributions and the transverse momentum of the Z bosons from ATLAS and CMS. We investigate the impact of parametrizing charm and provide evidence that the accuracy and stability of the PDFs are thereby improved. We study the impact of the new data by producing a variety of determinations based on reduced datasets. We find that both improvements have a significant impact on the PDFs, with some substantial reductions in uncertainties, but with the new PDFs generally in agreement with the previous set at the one sigma level. The most significant changes are seen in the light-quark flavor separation, and in increased precision in the determination of the gluon. We explore the implications of NNPDF3.1 for LHC phenomenology at Run II, compare with recent LHC measurements at 13 TeV, provide updated predictions for Higgs production cross-sections and discuss the strangeness and charm content of the proton in light of our improved dataset and methodology. The NNPDF3.1 PDFs are delivered for the first time both as Hessian sets, and as optimized Monte Carlo sets with a compressed number of replicas.
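The abstract notes that NNPDF3.1 is delivered both as Hessian sets and as Monte Carlo replica sets. For replica sets, the standard usage is to take the mean over replicas as the central value and the standard deviation as the one-sigma PDF uncertainty. A minimal sketch with synthetic replicas (real ones would come from an LHAPDF grid; the numbers here are invented):

```python
import numpy as np

# Synthetic stand-in for 100 Monte Carlo replicas of a PDF value at one
# (x, Q) point; the location and spread are hypothetical.
rng = np.random.default_rng(0)
replicas = rng.normal(loc=0.35, scale=0.02, size=100)

central = replicas.mean()           # central value: mean over replicas
uncertainty = replicas.std(ddof=1)  # one-sigma PDF uncertainty

print(f"PDF value = {central:.3f} +/- {uncertainty:.3f}")
```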

921 citations


Journal ArticleDOI
TL;DR: It is demonstrated that high-quality, few-layer BP nanosheets, with controllable size and observable photoluminescence, can be produced in large quantities by liquid phase exfoliation under ambient conditions in solvents such as N-cyclohexyl-2-pyrrolidone (CHP).
Abstract: Few-layer black phosphorus (BP) is a new two-dimensional material which is of great interest for applications, mainly in electronics. However, its lack of environmental stability severely limits its synthesis and processing. Here we demonstrate that high-quality, few-layer BP nanosheets, with controllable size and observable photoluminescence, can be produced in large quantities by liquid phase exfoliation under ambient conditions in solvents such as N-cyclohexyl-2-pyrrolidone (CHP). Nanosheets are surprisingly stable in CHP, probably due to the solvation shell protecting the nanosheets from reacting with water or oxygen. Experiments, supported by simulations, show reactions to occur only at the nanosheet edge, with the rate and extent of the reaction dependent on the water/oxygen content. We demonstrate that liquid-exfoliated BP nanosheets are potentially useful in a range of applications from ultrafast saturable absorbers to gas sensors to fillers for composite reinforcement.

921 citations


Journal ArticleDOI
06 Mar 2015-eLife
TL;DR: Optogenetic manipulations indicate that the hypothalamus plays an integral role to instantiate emotion states, and is not simply a passive effector of upstream emotion centers.
Abstract: Defensive behaviors reflect underlying emotion states, such as fear. The hypothalamus plays a role in such behaviors, but prevailing textbook views depict it as an effector of upstream emotion centers, such as the amygdala, rather than as an emotion center itself. We used optogenetic manipulations to probe the function of a specific hypothalamic cell type that mediates innate defensive responses. These neurons are sufficient to drive multiple defensive actions, and required for defensive behaviors in diverse contexts. The behavioral consequences of activating these neurons, moreover, exhibit properties characteristic of emotion states in general, including scalability, (negative) valence, generalization and persistence. Importantly, these neurons can also condition learned defensive behavior, further refuting long-standing claims that the hypothalamus is unable to support emotional learning and therefore is not an emotion center. These data indicate that the hypothalamus plays an integral role to instantiate emotion states, and is not simply a passive effector of upstream emotion centers.

921 citations



Journal ArticleDOI
TL;DR: In this article, advances in the strategies for visible-light activation, the origin of visible-light activity, and the electronic structure of various visible-light-active TiO2 photocatalysts are discussed in detail.
Abstract: The remarkable achievement by Fujishima and Honda (1972) in photoelectrochemical water splitting led to the extensive use of TiO2 nanomaterials for environmental purification and energy storage/conversion applications. Though there are many advantages for TiO2 compared to other semiconductor photocatalysts, its band gap of 3.2 eV restricts its activity to the UV region of the electromagnetic spectrum (λ ≤ 387.5 nm). As a result, development of visible-light active titanium dioxide is one of the key challenges in the field of semiconductor photocatalysis. In this review, advances in the strategies for the visible light activation, origin of visible-light activity, and electronic structure of various visible-light active TiO2 photocatalysts are discussed in detail. It has also been shown that if appropriate models are used, the theoretical insights can successfully be employed to develop novel catalysts to enhance the photocatalytic performance in the visible region. Recent developments in theory and experiments in visible-light induced water splitting, degradation of environmental pollutants, water and air purification and antibacterial applications are also reviewed. Various strategies to identify appropriate dopants for improved visible-light absorption and electron–hole separation to enhance the photocatalytic activity are discussed in detail, and a number of recommendations are also presented.
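The 3.2 eV band gap and the 387.5 nm cutoff quoted above are related by the photon-energy formula λ = hc/E, with hc ≈ 1239.84 eV·nm. A quick arithmetic check:

```python
# Wavelength corresponding to the TiO2 band gap: lambda (nm) = h*c / E
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

band_gap_ev = 3.2
cutoff_nm = HC_EV_NM / band_gap_ev
print(f"Absorption edge: {cutoff_nm:.1f} nm")  # ~387.5 nm, i.e. UV only
```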

921 citations


Journal ArticleDOI
Željko Ivezić1, Steven M. Kahn2, J. Anthony Tyson3, Bob Abel4, +332 more (55 institutions)
TL;DR: The Large Synoptic Survey Telescope (LSST) as discussed by the authors is a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachon in northern Chile.
Abstract: We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the solar system, exploring the transient optical sky, and mapping the Milky Way. LSST will be a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachon in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg2 field of view, a 3.2-gigapixel camera, and six filters (ugrizy) covering the wavelength range 320–1050 nm. The project is in the construction phase and will begin regular survey operations by 2022. About 90% of the observing time will be devoted to a deep-wide-fast survey mode that will uniformly observe a 18,000 deg2 region about 800 times (summed over all six bands) during the anticipated 10 yr of operations and will yield a co-added map to r ~ 27.5. These data will result in databases including about 32 trillion observations of 20 billion galaxies and a similar number of stars, and they will serve the majority of the primary science programs. The remaining 10% of the observing time will be allocated to special projects such as Very Deep and Very Fast time domain surveys, whose details are currently under discussion. We illustrate how the LSST science drivers led to these choices of system parameters, and we describe the expected data products and their characteristics.
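The "32 trillion observations" figure in the abstract is consistent with its other numbers: 20 billion galaxies plus "a similar number of stars", each observed about 800 times. A back-of-envelope check:

```python
galaxies = 20e9   # "20 billion galaxies"
stars = 20e9      # "a similar number of stars"
visits = 800      # observations per object, summed over all six bands

total_obs = (galaxies + stars) * visits
print(f"{total_obs:.1e} observations")  # ~3.2e13, i.e. ~32 trillion
```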

921 citations


Posted Content
TL;DR: In this paper, the first meta-generalized gradient approximation (meta-GGA) that is fully constrained, obeying all 17 known exact constraints that a meta-GGA can, was proposed.
Abstract: The ground-state energy, electron density, and related properties of ordinary matter can be computed efficiently when the exchange-correlation energy as a functional of the density is approximated semilocally. We propose the first meta-GGA (meta-generalized gradient approximation) that is fully constrained, obeying all 17 known exact constraints that a meta-GGA can. It is also exact or nearly exact for a set of appropriate norms, including rare-gas atoms and nonbonded interactions. This SCAN (strongly constrained and appropriately normed) meta-GGA achieves remarkable accuracy for systems where the exact exchange-correlation hole is localized near its electron, and especially for lattice constants and weak interactions.
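The semilocal approximation the abstract refers to can be written out explicitly. A meta-GGA expresses the exchange-correlation energy through the density, its gradient, and the kinetic energy density (a notational sketch of the general meta-GGA form, not SCAN's specific parametrization):

```latex
E_{xc}^{\text{mGGA}}[n] = \int d^3r \, n(\mathbf{r})\,
  \varepsilon_{xc}\big(n(\mathbf{r}), \nabla n(\mathbf{r}), \tau(\mathbf{r})\big),
\qquad
\tau(\mathbf{r}) = \tfrac{1}{2} \sum_i^{\text{occ}} |\nabla \psi_i(\mathbf{r})|^2 .
```

The dependence on the orbital kinetic energy density τ, absent in an ordinary GGA, is what gives a meta-GGA enough flexibility to satisfy all 17 known exact constraints.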

920 citations


Journal ArticleDOI
TL;DR: It is shown that an all-inorganic version of the lead bromide perovskite material works equally well as the organic one, in particular generating the high open circuit voltages that are an important feature of these cells.
Abstract: Hybrid organic–inorganic lead halide perovskite photovoltaic cells have already surpassed 20% conversion efficiency in the few years that they have been seriously studied. However, many fundamental questions still remain unanswered as to why they are so good. One of these is “Is the organic cation really necessary to obtain high quality cells?” In this study, we show that an all-inorganic version of the lead bromide perovskite material works equally well as the organic one, in particular generating the high open circuit voltages that are an important feature of these cells.

920 citations


Journal ArticleDOI
TL;DR: It is suggested that widespread application of sewage sludge from municipal wastewater treatment plants (WWTPs) to farmlands is likely to represent a major input of MPs to agricultural soils, with unknown consequences for sustainability and food security.
Abstract: Due to their ubiquitous distribution and chemical composition, microplastics (MPs) are increasingly being recognized as a global concern. While it is widely acknowledged that MPs in the ocean are a serious issue with potentially negative effects on marine organisms, information about MPs in terrestrial and freshwater environments is fragmentary. Based on new MP emission estimates in industrialized countries, we suggest that widespread application of sewage sludge from municipal wastewater treatment plants (WWTPs) to farmlands is likely to represent a major input of MPs to agricultural soils, with unknown consequences for sustainability and food security. Terrestrial emissions are the dominant source of MPs, including those conveyed to receiving waters by WWTP outfalls. Recent reports based on product life cycle data offer the first quantitative insights into national MP emission inventories. MPs originate predominantly from automobile tire wear, household and laundry dust, industrial processes (e.g., blasting and deflashing of plastics), and through deterioration of surfaces made of or coated with plastic, for example, artificial turf and polymeric paint. Most of these emissions occur in urban and residential areas. In developed regions, municipal/industrial effluents and even diffuse urban runoff are eventually conveyed to WWTPs. During wastewater treatment, over 90% of MPs are retained in sewage sludge. Effectiveness of MP retention is dependent on particle density and size. MPs with a density greater than water are almost completely retained in sewage sludge during primary and secondary treatment. Tertiary filtration treatment effectively removes larger floating particles, while smaller and lighter particles, expectedly, are released with wastewater effluents. The use of sewage sludge as fertilizer for agricultural applications is often economically advantageous and is common in many developed regions.
In Europe and North America about 50% of sewage sludge is processed for agricultural use. Using national data on farm areas, population and sewage sludge fate (http://ec.europa.eu/eurostat), with estimates of MP emissions and applying broad but conservative uncertainty ranges, we estimate that between 125 and 850 tons MP/million inhabitants are added annually to European agricultural soils either through direct application of sewage sludge or as processed biosolids. This is at least equal to, and probably much higher than our estimate of 110 to 180 tons MP/million inhabitants emitted annually to surface waters based on refs 1−3. In Europe, in fact, between 1270 and 2130 tons MPs/million inhabitants are released to urban environments, annually. Conservatively assuming that 10−90% of MPs produced from road wear and debris from building coating are collected by sewers, between 360 and 1980 tons MPs are expected to reach municipal WWTPs. Here, an uncertain fraction of MPs from car tire debris (conservatively, 20−80%) and >90% of MPs from personal care products are likely to be retained in sludge, giving a total input of between 250 and 1700 tons/million inhabitants each year. Sludge application to agricultural land was calculated as the sum of direct application and application of processed biosolids, excluding the fraction of wastewater sludge incinerated, disposed in landfills or subject to other nonagricultural uses. These figures are highly conservative as sludge is only ever applied to a small percentage of agricultural land. There is a broad range of sludge application rates and intensities to European agricultural land (Figure 1). Application rates (estimated as compost plus direct application) range from 0 to 91%, with an average of 43%. This equates to average and maximum areal per-capita loadings of 0.2 and 8 mg MP/ha/yr. 
MP inputs estimated here possibly reflect the situation in other countries with similar socioeconomic conditions and/or similar use of plastics (e.g., in Asia and the Americas). A rough extrapolation from data in refs 1−3 produces a total yearly input of 63,000−430,000 and 44,000−300,000 tons MPs to European and North American farmlands, respectively. This would be an alarmingly high input: cumulatively, it exceeds the total accumulated burden of 93,000−236,000 tons MPs currently estimated to be present in surface water in the global oceans.

920 citations


Journal ArticleDOI
TL;DR: There is a need to understand the processes and role of oxidative stress (OS) in neurodegenerative diseases; this review focuses on the pivotal role played by OS in mitochondrial dysfunction.
Abstract: Oxidative stress is proposed as a regulatory element in ageing and various neurological disorders. The excess of oxidants causes a reduction of antioxidants, which in turn produce an oxidation–reduction imbalance in organisms. Paucity of the antioxidant system generates oxidative-stress, characterized by elevated levels of reactive species (oxygen, hydroxyl free radical, and so on). Mitochondria play a key role in ATP supply to cells via oxidative phosphorylation, as well as synthesis of essential biological molecules. Various redox reactions catalyzed by enzymes take place in the oxidative phosphorylation process. An inefficient oxidative phosphorylation may generate reactive oxygen species (ROS), leading to mitochondrial dysfunction. Mitochondrial redox metabolism, phospholipid metabolism, and proteolytic pathways are found to be the major and potential source of free radicals. A lower concentration of ROS is essential for normal cellular signaling, whereas the higher concentration and long-time exposure of ROS cause damage to cellular macromolecules such as DNA, lipids and proteins, ultimately resulting in necrosis and apoptotic cell death. Normal and proper functioning of the central nervous system (CNS) is entirely dependent on the chemical integrity of brain. It is well established that the brain consumes a large amount of oxygen and is highly rich in lipid content, becoming prone to oxidative stress. A high consumption of oxygen leads to excessive production of ROS. Apart from this, the neuronal membranes are found to be rich in polyunsaturated fatty acids, which are highly susceptible to ROS. Various neurodegenerative diseases such as Parkinson’s disease (PD), Alzheimer’s disease (AD), Huntington’s disease (HD), and amyotrophic lateral sclerosis (ALS), among others, can be the result of biochemical alteration (due to oxidative stress) in bimolecular components. 
There is a need to understand the processes and role of oxidative stress in neurodegenerative diseases. This review is an effort towards improving our understanding of the pivotal role played by OS in neurodegenerative disorders.

920 citations


Journal ArticleDOI
TL;DR: The HZ/su vaccine significantly reduced the risk of herpes zoster in adults who were 50 years of age or older; efficacy in those 70 years of age or older was similar to that in the other two age groups.
Abstract: We conducted a randomized, placebo-controlled, phase 3 study in 18 countries to evaluate the efficacy and safety of HZ/su in older adults (≥50 years of age), stratified according to age group (50 to 59, 60 to 69, and ≥70 years). Participants received two intramuscular doses of the vaccine or placebo 2 months apart. The primary objective was to assess the efficacy of the vaccine, as compared with placebo, in reducing the risk of herpes zoster in older adults. Results: A total of 15,411 participants who could be evaluated received either the vaccine (7698 participants) or placebo (7713 participants). During a mean follow-up of 3.2 years, herpes zoster was confirmed in 6 participants in the vaccine group and in 210 participants in the placebo group (incidence rate, 0.3 vs. 9.1 per 1000 person-years) in the modified vaccinated cohort. Overall vaccine efficacy against herpes zoster was 97.2% (95% confidence interval [CI], 93.7 to 99.0; P<0.001). Vaccine efficacy was between 96.6% and 97.9% for all age groups. Solicited reports of injection-site and systemic reactions within 7 days after vaccination were more frequent in the vaccine group. There were solicited or unsolicited reports of grade 3 symptoms in 17.0% of vaccine recipients and 3.2% of placebo recipients. The proportions of participants who had serious adverse events or potential immune-mediated diseases or who died were similar in the two groups. Conclusions: The HZ/su vaccine significantly reduced the risk of herpes zoster in adults who were 50 years of age or older. Vaccine efficacy in adults who were 70 years of age or older was similar to that in the other two age groups. (Funded by GlaxoSmithKline Biologicals; ZOE-50 ClinicalTrials.gov number, NCT01165177.)
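The headline efficacy above follows the standard vaccine-efficacy formula VE = 1 − IR_vaccine/IR_placebo. Plugging in the rounded incidence rates quoted in the abstract reproduces the reported 97.2% to within the rounding of those rates:

```python
# Vaccine efficacy from incidence rates: VE = 1 - IR_vaccine / IR_placebo
ir_vaccine = 0.3   # confirmed cases per 1000 person-years, vaccine group
ir_placebo = 9.1   # confirmed cases per 1000 person-years, placebo group

efficacy = 1 - ir_vaccine / ir_placebo
print(f"Vaccine efficacy ~ {efficacy:.1%}")  # ~96.7% from the rounded rates;
                                             # the trial's exact figure is 97.2%
```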

Journal ArticleDOI
TL;DR: While previously polarization was primarily seen only in issue-based terms, a new type of division has emerged in the mass public in recent years: Ordinary Americans increasingly dislike and distru...
Abstract: While previously polarization was primarily seen only in issue-based terms, a new type of division has emerged in the mass public in recent years: Ordinary Americans increasingly dislike and distru...

Journal ArticleDOI
TL;DR: A more general mathematical model for real-time high-capacity ride-sharing that scales to large numbers of passengers and trips and dynamically generates optimal routes with respect to online demand and vehicle locations is presented.
Abstract: Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.
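The algorithm described above starts from a greedy assignment before refining it by constrained optimization. A toy illustration of just the greedy step (positions and the detour cost are hypothetical, and this is a drastic simplification of the paper's method, which handles capacities, time windows, and route insertion):

```python
import math

# Toy greedy assignment: each ride request goes to the vehicle whose
# pickup detour is smallest; the vehicle then relocates to the pickup.
vehicles = {"taxi_A": (0.0, 0.0), "taxi_B": (5.0, 5.0)}
requests = [("r1", (1.0, 1.0)), ("r2", (4.0, 5.0)), ("r3", (0.0, 2.0))]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

assignment = {}
for rid, pickup in requests:
    best = min(vehicles, key=lambda v: dist(vehicles[v], pickup))
    assignment[rid] = best
    vehicles[best] = pickup  # vehicle moves to the pickup point

print(assignment)  # {'r1': 'taxi_A', 'r2': 'taxi_B', 'r3': 'taxi_A'}
```

The paper's contribution is precisely that such a greedy solution is then improved by a constrained optimization over feasible trip–vehicle combinations, converging toward the optimal assignment over time.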

Journal ArticleDOI
TL;DR: The throughput and resolution of the sequencing approach permitted the detection of specific structural shifts at the level of individual microbial taxa, which harbours novel potential for managing the soil environment by promoting beneficial and suppressing detrimental organisms.
Abstract: Low-input agricultural systems aim at reducing the use of synthetic fertilizers and pesticides in order to improve sustainable production and ecosystem health. Despite the integral role of the soil microbiome in agricultural production, we still have a limited understanding of the complex response of microbial diversity to organic and conventional farming. Here we report on the structural response of the soil microbiome to more than two decades of different agricultural management in a long-term field experiment using a high-throughput pyrosequencing approach of bacterial and fungal ribosomal markers. Organic farming increased richness, decreased evenness, reduced dispersion and shifted the structure of the soil microbiota when compared with conventionally managed soils under exclusively mineral fertilization. This effect was largely attributed to the use and quality of organic fertilizers, as differences became smaller when conventionally managed soils under an integrated fertilization scheme were examined. The impact of the plant protection regime, characterized by moderate and targeted application of pesticides, was of subordinate importance. Systems not receiving manure harboured a dispersed and functionally versatile community characterized by presumably oligotrophic organisms adapted to nutrient-limited environments. Systems receiving organic fertilizer were characterized by specific microbial guilds known to be involved in degradation of complex organic compounds such as manure and compost. The throughput and resolution of the sequencing approach permitted the detection of specific structural shifts at the level of individual microbial taxa, which harbours novel potential for managing the soil environment by promoting beneficial and suppressing detrimental organisms.
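The richness and evenness changes reported above are typically quantified with the Shannon diversity index and Pielou's evenness. A minimal sketch with hypothetical taxon counts (the standard formulas, not this study's specific pipeline):

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i) over observed taxa."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def pielou_evenness(counts):
    """Pielou's J = H / ln(S), with S the number of observed taxa."""
    s = sum(1 for c in counts if c > 0)
    return shannon_diversity(counts) / math.log(s)

# Hypothetical OTU counts for two soils: equal abundances are maximally
# even (J = 1); skewed abundances lower both H and J.
even_soil = [25, 25, 25, 25]
skewed_soil = [85, 5, 5, 5]
print(pielou_evenness(even_soil))    # 1.0
print(pielou_evenness(skewed_soil))  # < 1.0
```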

Journal ArticleDOI
TL;DR: Using a range of ocean temperature data including global records of daily satellite observations, daily in situ measurements and gridded monthly in situ-based data sets, this work identifies significant increases in marine heatwaves over the past century.
Abstract: Heatwaves are important climatic extremes in atmospheric and oceanic systems that can have devastating and long-term impacts on ecosystems, with subsequent socioeconomic consequences. Recent prominent marine heatwaves have attracted considerable scientific and public interest. Despite this, a comprehensive assessment of how these ocean temperature extremes have been changing globally is missing. Using a range of ocean temperature data including global records of daily satellite observations, daily in situ measurements and gridded monthly in situ-based data sets, we identify significant increases in marine heatwaves over the past century. We find that from 1925 to 2016, global average marine heatwave frequency and duration increased by 34% and 17%, respectively, resulting in a 54% increase in annual marine heatwave days globally. Importantly, these trends can largely be explained by increases in mean ocean temperatures, suggesting that we can expect further increases in marine heatwave days under continued global warming.
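Marine heatwave studies of this kind commonly follow a Hobday et al.-style definition: temperatures exceeding a climatological 90th-percentile threshold for at least five consecutive days. A minimal day-counting sketch under that assumption (synthetic temperatures, fixed rather than seasonally varying threshold):

```python
def marine_heatwave_days(sst, threshold, min_duration=5):
    """Count days belonging to runs where SST exceeds `threshold`
    for at least `min_duration` consecutive days."""
    days = 0
    run = 0
    for t in sst:
        if t > threshold:
            run += 1
        else:
            if run >= min_duration:
                days += run
            run = 0
    if run >= min_duration:  # handle a run ending at the series' end
        days += run
    return days

# Synthetic series: a 3-day spike (too short to qualify) and a 7-day event.
sst = [14] * 10 + [16] * 3 + [14] * 5 + [16] * 7 + [14] * 5
print(marine_heatwave_days(sst, threshold=15.0))  # 7
```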

Posted Content
TL;DR: This paper analyzes the convergence of Federated Averaging on non-iid data and establishes a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations.
Abstract: Federated learning enables a large amount of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations. Importantly, our bound demonstrates a trade-off between communication-efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate $\eta$ must decay, even if full-gradient is used; otherwise, the solution will be $\Omega (\eta)$ away from the optimal.
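The FedAvg loop described above (parallel local SGD, periodic averaging) can be sketched on a toy non-iid problem: each client minimizes its own quadratic with a different optimum, so the global optimum is the mean of the client optima. The learning rate decays, as the paper's necessary condition requires. All numbers are illustrative:

```python
import numpy as np

# Each client k runs E local SGD steps on f_k(w) = 0.5 * (w - c_k)^2,
# then the server averages the resulting weights (FedAvg, full participation).
client_optima = np.array([-1.0, 0.5, 2.5])  # heterogeneous (non-iid) optima
w = 10.0                                    # global model, scalar for simplicity

for rnd in range(200):                      # communication rounds
    lr = 0.5 / (1 + rnd)                    # decaying learning rate
    local_ws = []
    for c in client_optima:
        w_k = w
        for _ in range(5):                  # E = 5 local SGD steps
            grad = w_k - c                  # gradient of 0.5 * (w - c)^2
            w_k -= lr * grad
        local_ws.append(w_k)
    w = float(np.mean(local_ws))            # server-side averaging

print(round(w, 3))  # approaches mean(client_optima) ~ 0.667
```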

Journal ArticleDOI
TL;DR: An overview of machine learning from an applied perspective focuses on the relatively mature methods of support vector machines, single decision trees (DTs), Random Forests, boosted DTs, artificial neural networks, and k-nearest neighbours (k-NN).
Abstract: Machine learning offers the potential for effective and efficient classification of remotely sensed imagery. The strengths of machine learning include the capacity to handle data of high dimensionality and to map classes with very complex characteristics. Nevertheless, implementing a machine-learning classification is not straightforward, and the literature provides conflicting advice regarding many key issues. This article therefore provides an overview of machine learning from an applied perspective. We focus on the relatively mature methods of support vector machines, single decision trees (DTs), Random Forests, boosted DTs, artificial neural networks, and k-nearest neighbours (k-NN). Issues considered include the choice of algorithm, training data requirements, user-defined parameter selection and optimization, feature space impacts and reduction, and computational costs. We illustrate these issues through applying machine-learning classification to two publicly available remotely sensed dat...
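Of the methods listed, k-NN is the simplest to state concretely: a pixel's band vector is assigned the majority class of its k nearest training samples in feature space. A minimal sketch with hypothetical two-band reflectance values:

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, pixel, k=3):
    """Classify one pixel's band vector by majority vote of its k
    nearest training samples (Euclidean distance in feature space)."""
    d = np.linalg.norm(train_X - pixel, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Hypothetical 2-band training samples: "water" is dark in both bands,
# "vegetation" is bright in band 2.
train_X = np.array([[0.10, 0.05], [0.12, 0.07], [0.30, 0.60], [0.28, 0.55]])
train_y = ["water", "water", "vegetation", "vegetation"]

print(knn_classify(train_X, train_y, np.array([0.11, 0.06])))  # water
```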

Proceedings ArticleDOI
06 Mar 2018
TL;DR: This article extended the self-attention mechanism to consider representations of the relative positions, or distances, between sequence elements, and showed that combining relative and absolute position representations yields no further improvement in translation quality.
Abstract: Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graph-labeled inputs.
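The core modification is to add a learned embedding for the (clipped) relative distance j − i to the key when computing attention logits. A numpy sketch of that computation with random, untrained parameters (illustrative shapes and values, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5, 8, 2                      # sequence length, head dim, clip distance

x = rng.normal(size=(n, d))            # token representations (random, untrained)
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))
# One learned embedding per clipped relative distance in [-k, k].
rel_emb = rng.normal(size=(2 * k + 1, d))

Q, K = x @ Wq, x @ Wk
logits = np.empty((n, n))
for i in range(n):
    for j in range(n):
        clipped = max(-k, min(k, j - i))   # clip the relative distance
        a_ij = rel_emb[clipped + k]        # relative-position key embedding
        logits[i, j] = Q[i] @ (K[j] + a_ij) / np.sqrt(d)

attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
print(attn.shape)  # (5, 5); each row is a probability distribution
```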

Journal ArticleDOI
TL;DR: Worldwide cooperative analyses of brain imaging data support a profile of subcortical abnormalities in schizophrenia that is consistent with that based on traditional meta-analytic approaches, and validate that collaborative data analyses can readily be used across brain phenotypes and disorders.
Abstract: The profile of brain structural abnormalities in schizophrenia is still not fully understood, despite decades of research using brain scans. To validate a prospective meta-analysis approach to analyzing multicenter neuroimaging data, we analyzed brain MRI scans from 2028 schizophrenia patients and 2540 healthy controls, assessed with standardized methods at 15 centers worldwide. We identified subcortical brain volumes that differentiated patients from controls, and ranked them according to their effect sizes. Compared with healthy controls, patients with schizophrenia had smaller hippocampus (Cohen's d=-0.46), amygdala (d=-0.31), thalamus (d=-0.31), accumbens (d=-0.25) and intracranial volumes (d=-0.12), as well as larger pallidum (d=0.21) and lateral ventricle volumes (d=0.37). Putamen and pallidum volume augmentations were positively associated with duration of illness and hippocampal deficits scaled with the proportion of unmedicated patients. Worldwide cooperative analyses of brain imaging data support a profile of subcortical abnormalities in schizophrenia, which is consistent with that based on traditional meta-analytic approaches. This first ENIGMA Schizophrenia Working Group study validates that collaborative data analyses can readily be used across brain phenotypes and disorders and encourages analysis and data sharing efforts to further our understanding of severe mental illness.
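The effect sizes above are Cohen's d values: the difference in group means divided by the pooled standard deviation, so negative values indicate smaller volumes in patients. A sketch of the standard formula with hypothetical volumes (the group sizes match the abstract; the means and SDs are invented for illustration):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference with pooled SD."""
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

# Hypothetical hippocampal volumes (ml): patients vs healthy controls.
d = cohens_d(mean1=3.6, sd1=0.40, n1=2028, mean2=3.8, sd2=0.45, n2=2540)
print(round(d, 2))  # ~ -0.47: patients' mean is about half an SD smaller
```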

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, an approximate gradient for rasterization is proposed to enable the integration of rendering into neural networks, which enables single-image 3D mesh reconstruction with silhouette image supervision.
Abstract: For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.
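The obstacle the abstract describes, a discrete forward operation with zero gradient almost everywhere, and the fix, substituting an approximate gradient in the backward pass, can be illustrated in one dimension. This toy is an analogy for the surrogate-gradient idea only, not the paper's actual rasterizer:

```python
# Forward: a hard, non-differentiable "rasterization" (pixel on/off).
def hard_step(z):
    return 1.0 if z > 0 else 0.0

# Backward: pretend the step was a linear ramp of the given width,
# so gradients can flow through the discrete operation.
def surrogate_grad(z, width=1.0):
    return 1.0 / width if abs(z) < width / 2 else 0.0

z = -0.2          # parameter controlling pixel coverage
target = 1.0      # supervision: we want the pixel "on"
for _ in range(10):
    pred = hard_step(z)
    loss_grad = 2 * (pred - target)           # d/dpred of (pred - target)^2
    z -= 0.1 * loss_grad * surrogate_grad(z)  # surrogate chain rule

print(z > 0)  # True: the surrogate gradient pushed z across the threshold
```

With a true gradient the update would always be zero and z would never move; the surrogate slope is what makes silhouette-supervised optimization possible.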

Proceedings Article
12 Feb 2016
TL;DR: A novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information directly, and which outperforms other state-of-the-art models in such tasks.
Abstract: In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by Perozzi et al. (2014). The advantages of our approach will be illustrated from both theoretical and empirical perspectives. We also give a new perspective for the matrix factorization method proposed by Levy and Goldberg (2014), in which the pointwise mutual information (PMI) matrix is considered as an analytical solution to the objective function of the skip-gram model with negative sampling proposed by Mikolov et al. (2013). Unlike their approach, which uses the SVD to find the low-dimensional projections from the PMI matrix, our model introduces a stacked denoising autoencoder to extract complex features and model non-linearities. To demonstrate the effectiveness of our model, we conduct experiments on clustering and visualization tasks, employing the learned vertex representations as features. Empirical results on datasets of varying sizes show that our model outperforms other state-of-the-art models in such tasks.
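The pipeline the abstract outlines — random surfing to build a probabilistic co-occurrence matrix, followed by a PMI transformation before feature extraction — can be sketched as follows. This is a minimal illustration with hypothetical parameter names; the paper then feeds the resulting matrix to a stacked denoising autoencoder, which is omitted here:

```python
import numpy as np

def random_surf(A, steps=4, alpha=0.98):
    """Random-surfing co-occurrence: accumulate k-step transition
    probabilities with restart probability (1 - alpha)."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    P0 = np.eye(n)
    Pk, M = P0.copy(), np.zeros((n, n))
    for _ in range(steps):
        Pk = alpha * (Pk @ P) + (1 - alpha) * P0
        M += Pk
    return M

def ppmi(M):
    """Positive pointwise mutual information of a co-occurrence matrix."""
    total = M.sum()
    row = M.sum(axis=1, keepdims=True)
    col = M.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore"):
        pmi = np.log(M * total / (row * col))
    return np.maximum(pmi, 0.0)            # clip negatives (and -inf at zeros)
```

The PPMI step mirrors the Levy-and-Goldberg view referenced in the abstract: the PMI matrix is what skip-gram with negative sampling implicitly factorizes.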

Posted Content
Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, Alan L. Yuille
TL;DR: An attention mechanism that learns to softly weight the multi-scale features at each pixel location is proposed, which not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales.
Abstract: Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.
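The per-pixel soft weighting described above amounts to a softmax over scales at every spatial location. A minimal sketch, with hypothetical shapes and names — in the paper the logits come from a jointly trained attention branch, whereas here they are simply inputs:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def merge_scales(features, attn_logits):
    """features: (S, C, H, W) maps from S scales, already resized to a
    common H x W; attn_logits: (S, H, W) per-pixel, per-scale scores.
    Returns the soft-weighted merge that replaces average/max pooling."""
    w = softmax(attn_logits, axis=0)                   # sums to 1 over scales
    return (w[:, None, :, :] * features).sum(axis=0)   # (C, H, W)
```

With equal logits everywhere this reduces exactly to average pooling, which is why the learned weighting can only match or improve on that baseline.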

Journal Article
TL;DR: In this paper, a method to train quantized neural networks (QNNs) with extremely low precision (e.g., 1-bit) weights and activations at run-time is introduced.
Abstract: We introduce a method to train Quantized Neural Networks (QNNs) -- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
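Two ingredients mentioned in the abstract can be made concrete: sign binarization with a straight-through estimator for the backward pass, and the XNOR-popcount identity that turns a {-1, +1} dot product into bit-wise operations. The sketch below uses hypothetical helper names and is not the released QNN code:

```python
import numpy as np

def binarize(w):
    """Deterministic binarization to {-1, +1} (zero mapped to +1)."""
    return np.where(w >= 0, 1.0, -1.0)

def ste_grad(w, upstream, clip=1.0):
    """Straight-through estimator: pass gradients through sign(),
    zeroed where |w| exceeds the clip range."""
    return upstream * (np.abs(w) <= clip)

def pack(v):
    """Pack a {-1, +1} vector into an int: bit i set when v[i] == +1."""
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, b_bits, n):
    """Dot product of two packed {-1, +1} vectors using only bit-wise ops:
    dot = n - 2 * popcount(a XOR b)."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")
```

The last identity is what lets a binary matrix multiplication kernel replace multiply-accumulate with XOR and population count, which is the source of the reported speed-up.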

Journal ArticleDOI
TL;DR: In this paper, the authors offer practical suggestions for students new to qualitative research, both for writing interview protocols that elicit useful data and for conducting the interview, which can be used by professors teaching qualitative research in conjunction with academic readings about qualitative interviewing.
Abstract: Students new to doing qualitative research in the ethnographic and oral traditions often have difficulty creating successful interview protocols. This article offers practical suggestions for students new to qualitative research, both for writing interview protocols that elicit useful data and for conducting the interview. This piece was originally developed as a classroom tool and can be used by professors teaching qualitative research in conjunction with academic readings about qualitative interviewing.

Journal ArticleDOI
TL;DR: Current and future increases in food production must go along with production of food of better quality and with fewer toxic contaminants, which requires more cautious use of agrochemicals through prior testing, careful risk assessment, and licensing.
Abstract: Agrochemicals enabled food production to more than double during the last century, and the current need to increase food production to feed a rapidly growing human population maintains pressure on the intensive use of pesticides and fertilizers. However, worldwide surveys have documented the contamination and impact of agrochemical residues in soils, and in terrestrial and aquatic ecosystems including coastal marine systems, and their toxic effects on humans and nonhuman biota. Although persistent organic chemicals have been phased out and replaced by more biodegradable chemicals, contamination by legacy and recent residues still impacts the quality of human food, water, and the environment. Current and future increases in food production must go along with production of food of better quality and with fewer toxic contaminants. Alternative paths to the intensive use of crop protection chemicals are open, such as genetically engineered organisms, organic farming, changes in dietary habits, and development of food technologies. Agro-industries need to further develop advanced practices to protect public health, which requires more cautious use of agrochemicals through prior testing, careful risk assessment, and licensing, but also through education of farmers and users in general, measures for better protection of ecosystems, and good practices for sustainable development of agriculture, fisheries, and aquaculture. Enhanced scientific research for new developments in food production and food safety, as well as for environmental protection, is a necessary part of this endeavor. Furthermore, worldwide agreement on good agricultural practices, including development of genetically modified organisms (GMOs) and their release for international agriculture, may be urgent to ensure the success of safe food production.

Journal ArticleDOI
25 Jun 2020
TL;DR: Examination of the comorbid conditions, the progression of the disease, and mortality rates in patients of all ages infected with the ongoing COVID-19 disease found that patients with comorbidities should take all necessary precautions to avoid getting infected with SARS-CoV-2, as they usually have the worst prognosis.
Abstract: A novel human coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was identified in Wuhan, China, in December 2019. Since then, the virus has made its way across the globe to affect over 180 countries. SARS-CoV-2 has infected humans in all age groups, of all ethnicities, both males and females, while spreading through communities at an alarming rate. Given the nature of this virus, there is much still to be learned; however, we know that the clinical manifestations range from a common cold to more severe diseases such as bronchitis, pneumonia, acute respiratory distress syndrome (ARDS), multi-organ failure, and even death. It is believed that COVID-19, in those with underlying health conditions or comorbidities, has an increasingly rapid and severe progression, often leading to death. This paper examined the comorbid conditions, the progression of the disease, and mortality rates in patients of all ages infected with the ongoing COVID-19 disease. An electronic literature review search was performed, and applicable data were then collected from peer-reviewed articles published from January to April 20, 2020. From what is known at the moment, patients with COVID-19 disease who have comorbidities, such as hypertension or diabetes mellitus, are more likely to develop a more severe course and progression of the disease. Furthermore, older patients, especially those 65 years old and above who have comorbidities and are infected, have an increased admission rate into the intensive care unit (ICU) and mortality from the COVID-19 disease. Patients with comorbidities should take all necessary precautions to avoid getting infected with SARS-CoV-2, as they usually have the worst prognosis.

Journal ArticleDOI
20 Jan 2017-Science
TL;DR: The relationships between SCNA levels, tumor mutations, and cancer hallmarks are examined to find that two hallmarks of cancer, cell proliferation and immune evasion, are predicted by distinct types of aneuploidy that likely act through distinct mechanisms.
Abstract: INTRODUCTION Aneuploidy, also known as somatic copy number alterations (SCNAs), is widespread in human cancers and has been proposed to drive tumorigenesis. The relationship between SCNAs and the characteristic functional features or “hallmarks” of cancer is not well understood. Among these cancer hallmarks is immune evasion, which is accomplished by neoantigen editing, defects in antigen presentation and inhibition of tumor infiltration, and/or cytotoxic activities of immune cells. Whether and how tumor SCNA levels influence immune evasion is of particular interest as this information could potentially be used to improve the efficacy of immune checkpoint blockade, a therapy that has produced durable responses in a subset of cancer patients. RATIONALE Understanding how SCNAs and mutation load affect tumor evolution, and through what mechanisms, is a key objective in cancer research. To explore the relationships between SCNA levels, tumor mutations, and cancer hallmarks, we examined data from 5255 tumor/normal samples representing 12 cancer types from The Cancer Genome Atlas project. We assigned each tumor an SCNA score and looked for correlations with the number and types of tumor mutations. We also compared the gene expression profiles of tumors with high versus low SCNA levels to identify differences in cellular signaling pathways. RESULTS First, we found that, for most tumors, there was a positive correlation between SCNA levels and the total number of mutations. Second, tumors harboring activating oncogenic mutations in the receptor tyrosine kinase–RAS–phosphatidylinositol 3-kinase pathway showed fewer SCNAs, a finding at odds with the hypothesis of oncogene-driven genomic instability. Third, we found that tumors with high levels of SCNAs showed elevated expression of cell cycle and cell proliferation markers (cell cycle signature) and reduced expression of markers for cytotoxic immune cell infiltrates (immune signature). 
The increased expression level of the cell cycle signature was primarily predicted by focal SCNAs, with a lesser contribution of arm and whole-chromosome SCNAs. In contrast, the lower expression level of the immune signature was primarily predicted by high levels of arm and whole-chromosome SCNAs. SCNA levels were a stronger predictor of markers of cytotoxic immune cell infiltration than tumor mutational load. Finally, through analysis of data from two published clinical trials of immunotherapy in melanoma patients, we found that high SCNA levels in tumors correlated with poorer survival of patients. The combination of the tumor SCNA score and the tumor mutational load was a better predictor of survival after immunotherapy than either biomarker alone. CONCLUSION We found that two hallmarks of cancer, cell proliferation and immune evasion, are predicted by distinct types of aneuploidy that likely act through distinct mechanisms. Proliferation markers mainly correlated with focal SCNAs, implying a mechanism related to the action of specific genes targeted by these SCNAs. Immune evasion markers mainly correlated with arm- and chromosome-level SCNAs, consistent with a mechanism related to general gene dosage imbalance rather than the action of specific genes. A retrospective analysis of melanoma patients treated with immune checkpoint blockade anti–CTLA-4 (cytotoxic T lymphocyte–associated protein 4) therapy revealed that high SCNA levels were associated with a poorer response, suggesting that tumor aneuploidy might be a useful biomarker for predicting which patients are most likely to benefit from this therapy.

Journal ArticleDOI
TL;DR: Ferroptosis, a programmed iron-dependent cell death, is discovered and demonstrated as a mechanism in murine models of doxorubicin (DOX)- and ischemia/reperfusion (I/R)-induced cardiomyopathy, and the mitochondria-targeted antioxidant MitoTEMPO significantly rescued DOX cardiomyopathy, supporting oxidative damage of mitochondria as a major mechanism in ferroptosis-induced heart damage.
Abstract: Heart disease is the leading cause of death worldwide. A key pathogenic factor in the development of lethal heart failure is loss of terminally differentiated cardiomyocytes. However, mechanisms of cardiomyocyte death remain unclear. Here, we discovered and demonstrated ferroptosis, a programmed iron-dependent cell death, as a mechanism in murine models of doxorubicin (DOX)- and ischemia/reperfusion (I/R)-induced cardiomyopathy. In canonical apoptosis and/or necroptosis-defective Ripk3−/−, Mlkl−/−, or Fadd−/−Mlkl−/− mice, DOX-treated cardiomyocytes showed features of typical ferroptotic cell death. Consistently, compared with dexrazoxane, the only FDA-approved drug for treating DOX-induced cardiotoxicity, inhibition of ferroptosis by ferrostatin-1 significantly reduced DOX cardiomyopathy. RNA-sequencing results revealed that heme oxygenase-1 (Hmox1) was significantly up-regulated in DOX-treated murine hearts. Administering DOX to mice induced cardiomyopathy with a rapid, systemic accumulation of nonheme iron via heme degradation by Nrf2-mediated up-regulation of Hmox1, an effect that was abolished in Nrf2-deficient mice. Conversely, zinc protoporphyrin IX, an Hmox1 antagonist, protected the DOX-treated mice, suggesting that free iron released on heme degradation is necessary and sufficient to induce cardiac injury. Given that ferroptosis is driven by damage to lipid membranes, we further investigated and found that excess free iron accumulated in mitochondria and caused lipid peroxidation on its membrane. The mitochondria-targeted antioxidant MitoTEMPO significantly rescued DOX cardiomyopathy, supporting oxidative damage of mitochondria as a major mechanism in ferroptosis-induced heart damage. Importantly, ferrostatin-1 and iron chelation also ameliorated heart failure induced by both acute and chronic I/R in mice. These findings highlight that targeting ferroptosis serves as a cardioprotective strategy for cardiomyopathy prevention.

Journal ArticleDOI
TL;DR: Results suggest eye care services contributed to the observed reduction of age-standardised rates of avoidable blindness but not of MSVI, and that the target in an ageing global population was not reached.

Journal ArticleDOI
Bin Zhou, Rodrigo M. Carrillo-Larco, Goodarz Danaei, Leanne M. Riley, +1141 more
TL;DR: In this article, a Bayesian hierarchical model was used to estimate the prevalence of hypertension and the proportion of people with hypertension who had a previous diagnosis (detection), who were taking medication for hypertension (treatment), and whose hypertension was controlled to below 140/90 mm Hg (control).