Journal ArticleDOI
18 Mar 2016-Science
TL;DR: By transiently colonizing pregnant female mice, it is shown that the maternal microbiota shapes the immune system of the offspring, and pups born to mothers transiently colonized in pregnancy are better able to avoid inflammatory responses to microbial molecules and penetration of intestinal microbes.
Abstract: Postnatal colonization of the body with microbes is assumed to be the main stimulus to postnatal immune development. By transiently colonizing pregnant female mice, we show that the maternal microbiota shapes the immune system of the offspring. Gestational colonization increases intestinal group 3 innate lymphoid cells and F4/80(+)CD11c(+) mononuclear cells in the pups. Maternal colonization reprograms intestinal transcriptional profiles of the offspring, including increased expression of genes encoding epithelial antibacterial peptides and metabolism of microbial molecules. Some of these effects are dependent on maternal antibodies that potentially retain microbial molecules and transmit them to the offspring during pregnancy and in milk. Pups born to mothers transiently colonized in pregnancy are better able to avoid inflammatory responses to microbial molecules and penetration of intestinal microbes.

832 citations


Journal ArticleDOI
TL;DR: Funding acknowledgements: International Max Planck Research School for Astronomy and Astrophysics at the Universities of Bonn and Cologne (IMPRS Bonn/Cologne); Estonian Research Council [IUT26-2]; European Regional Development Fund [TK133]; Australian Research Council Future Fellowship [FT150100024]; NSF CAREER grant [AST-1149491]
Abstract: Deutsche Forschungsgemeinschaft (DFG) [KA1265/5-1, KA1265/5-2, KE757/71, KE757/7-2, KE757/7-3, KE757/11-1.]; International Max Planck Research School for Astronomy and Astrophysics at the Universities of Bonn and Cologne (IMPRS Bonn/Cologne); Estonian Research Council [IUT26-2]; European Regional Development Fund [TK133]; Australian Research Council Future Fellowship [FT150100024]; NSF CAREER grant [AST-1149491]

832 citations


Posted Content
TL;DR: This work proves why stochastic gradient descent can find global minima on the training objective of DNNs in $\textit{polynomial time}$ and implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting.
Abstract: Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, networks used in practice are going wider and deeper. On the theoretical side, a long line of works has been focusing on training neural networks with one hidden layer. The theory of multi-layer networks remains largely unsettled. In this work, we prove why stochastic gradient descent (SGD) can find $\textit{global minima}$ on the training objective of DNNs in $\textit{polynomial time}$. We only make two assumptions: the inputs are non-degenerate and the network is over-parameterized. The latter means the network width is sufficiently large: $\textit{polynomial}$ in $L$, the number of layers and in $n$, the number of samples. Our key technique is to derive that, in a sufficiently large neighborhood of the random initialization, the optimization landscape is almost-convex and semi-smooth even with ReLU activations. This implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting. As concrete examples, starting from randomly initialized weights, we prove that SGD can attain 100% training accuracy in classification tasks, or minimize regression loss in linear convergence speed, with running time polynomial in $n,L$. Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet).
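The over-parameterization claim is easy to probe empirically. Below is a minimal NumPy sketch, not the paper's construction: it trains a wide one-hidden-layer ReLU network (the paper analyzes deep networks) with plain SGD on a small random dataset, using an NTK-style 1/sqrt(m) output scaling with fixed sign output weights. The width, learning rate, and data sizes are illustrative choices rather than values from the paper; the point is only that with width far exceeding the sample count, the training error drops toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20, 10, 2000            # samples, input dim, hidden width (m >> n)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # non-degenerate, unit-norm inputs
y = rng.normal(size=n)                           # arbitrary (even random) labels

W = rng.normal(size=(m, d))                      # trained hidden-layer weights
a = rng.choice([-1.0, 1.0], size=m)              # fixed +/-1 output weights

def forward(X):
    Z = X @ W.T                                  # (n, m) pre-activations
    H = np.maximum(Z, 0.0)                       # ReLU
    return Z, (H @ a) / np.sqrt(m)               # NTK-style 1/sqrt(m) scaling

lr = 0.5
for step in range(2000):
    i = rng.integers(n)                          # stochastic: one sample per step
    Z, out = forward(X)
    err = out[i] - y[i]
    # d f_i / d W = outer(a * 1[z_i > 0], x_i) / sqrt(m), chained with 0.5*err^2
    W -= lr * err * np.outer(a * (Z[i] > 0), X[i]) / np.sqrt(m)
    if step % 500 == 0:
        print(f"step {step:4d}  train MSE {0.5 * np.mean((out - y) ** 2):.6f}")

print("final train MSE:", 0.5 * np.mean((forward(X)[1] - y) ** 2))
```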

832 citations


Posted Content
TL;DR: An efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics, including a ProbSparse self-attention mechanism, which achieves $O(L \log L)$ in time complexity and memory usage and has comparable performance on sequences' dependency alignment.
Abstract: Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism, which achieves $O(L \log L)$ in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.
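As a rough illustration of the sparsification idea, the sketch below scores each query by how far its maximum attention logit sits above its mean logit, gives full attention only to the top-u queries, and lets the remaining "lazy" queries fall back to the mean of the values. This is not the paper's implementation: Informer estimates the sparsity measure from a random sample of keys to reach $O(L \log L)$, whereas this toy computes all $L^2$ logits for clarity, and the constant c and array sizes are illustrative.

```python
import numpy as np

def sparse_attention(Q, K, V, c=5.0):
    """Toy top-u query attention in the spirit of Informer's ProbSparse idea.

    Q, K, V: arrays of shape (L, d). Only u = c * ln(L) "active" queries attend
    over all keys; the rest receive the mean of V as their output."""
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                      # (L, L) attention logits
    # Sparsity measure: max logit minus mean logit per query (computed on all
    # keys here; the paper estimates it from a random sample of keys).
    sparsity = scores.max(axis=1) - scores.mean(axis=1)
    u = max(1, int(c * np.log(L)))
    active = np.argsort(sparsity)[-u:]                 # top-u "active" queries

    out = np.repeat(V.mean(axis=0, keepdims=True), L, axis=0)  # lazy fallback
    s = scores[active]
    w = np.exp(s - s.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                  # softmax over keys
    out[active] = w @ V
    return out

L, d = 512, 64
rng = np.random.default_rng(1)
y = sparse_attention(rng.normal(size=(L, d)), rng.normal(size=(L, d)),
                     rng.normal(size=(L, d)))
print(y.shape)   # (512, 64)
```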

832 citations


Journal ArticleDOI
TL;DR: Although the unexplained higher incidence of hospitalization for dengue in year 3 among children younger than 9 years of age needs to be carefully monitored during long-term follow-up, the risk among children 2 to 16 years of age was lower in the vaccine group than in the control group.
Abstract: BACKGROUND A candidate tetravalent dengue vaccine is being assessed in three clinical trials involving more than 35,000 children between the ages of 2 and 16 years in Asian–Pacific and Latin American countries. We report the results of long-term follow-up interim analyses and integrated efficacy analyses. METHODS We are assessing the incidence of hospitalization for virologically confirmed dengue as a surrogate safety end point during follow-up in years 3 to 6 of two phase 3 trials, CYD14 and CYD15, and a phase 2b trial, CYD23/57. We estimated vaccine efficacy using pooled data from the first 25 months of CYD14 and CYD15. RESULTS Follow-up data were available for 10,165 of 10,275 participants (99%) in CYD14 and 19,898 of 20,869 participants (95%) in CYD15. Data were available for 3203 of the 4002 participants (80%) in the CYD23 trial included in CYD57. During year 3 in the CYD14, CYD15, and CYD57 trials combined, hospitalization for virologically confirmed dengue occurred in 65 of 22,177 participants in the vaccine group and 39 of 11,089 participants in the control group. Pooled relative risks of hospitalization for dengue were 0.84 (95% confidence interval [CI], 0.56 to 1.24) among all participants, 1.58 (95% CI, 0.83 to 3.02) among those under the age of 9 years, and 0.50 (95% CI, 0.29 to 0.86) among those 9 years of age or older. During year 3, hospitalization for severe dengue, as defined by the independent data monitoring committee criteria, occurred in 18 of 22,177 participants in the vaccine group and 6 of 11,089 participants in the control group. Pooled rates of efficacy for symptomatic dengue during the first 25 months were 60.3% (95% CI, 55.7 to 64.5) for all participants, 65.6% (95% CI, 60.7 to 69.9) for those 9 years of age or older, and 44.6% (95% CI, 31.6 to 55.0) for those younger than 9 years of age. CONCLUSIONS Although the unexplained higher incidence of hospitalization for dengue in year 3 among children younger than 9 years of age needs to be carefully monitored during long-term follow-up, the risk among children 2 to 16 years of age was lower in the vaccine group than in the control group. (Funded by Sanofi Pasteur; ClinicalTrials.gov numbers, NCT00842530, NCT01983553, NCT01373281, and NCT01374516.)
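As a crude sanity check (not the trial's pooled estimator, which weights the individual trials), the overall year-3 relative risk of hospitalization can be recomputed directly from the reported counts:

```latex
\mathrm{RR}_{\text{crude}} \approx \frac{65/22\,177}{39/11\,089} = \frac{0.00293}{0.00352} \approx 0.83,
```

which is consistent with the reported pooled estimate of 0.84 (95% CI, 0.56 to 1.24).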

831 citations


Posted Content
TL;DR: In this paper, the authors consider identification, estimation, and inference procedures for treatment effect parameters using Difference-in-Differences (DiD) with multiple time periods, variation in treatment timing, and when the "parallel trends assumption" holds potentially only after conditioning on observed covariates.
Abstract: In this article, we consider identification, estimation, and inference procedures for treatment effect parameters using Difference-in-Differences (DiD) with (i) multiple time periods, (ii) variation in treatment timing, and (iii) when the "parallel trends assumption" holds potentially only after conditioning on observed covariates. We show that a family of causal effect parameters are identified in staggered DiD setups, even if differences in observed characteristics create non-parallel outcome dynamics between groups. Our identification results allow one to use outcome regression, inverse probability weighting, or doubly-robust estimands. We also propose different aggregation schemes that can be used to highlight treatment effect heterogeneity across different dimensions as well as to summarize the overall effect of participating in the treatment. We establish the asymptotic properties of the proposed estimators and prove the validity of a computationally convenient bootstrap procedure to conduct asymptotically valid simultaneous (instead of pointwise) inference. Finally, we illustrate the relevance of our proposed tools by analyzing the effect of the minimum wage on teen employment from 2001--2007. Open-source software is available for implementing the proposed methods.
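For orientation, the central building block in this framework is the group-time average treatment effect ATT(g, t): the average effect at time t for the group first treated in period g. In its simplest form (unconditional parallel trends, never-treated units C as the comparison group; the notation here is a sketch, not quoted from the paper), it is identified by a 2x2 difference-in-differences:

```latex
\mathrm{ATT}(g,t)
  = \mathbb{E}\left[Y_{t} - Y_{g-1} \mid G_{g} = 1\right]
  - \mathbb{E}\left[Y_{t} - Y_{g-1} \mid C = 1\right],
\qquad t \ge g,
```

where $Y_{g-1}$ is the outcome in the last period before group g is treated. The covariate-conditional, doubly robust estimands and the aggregation schemes in the paper generalize and average these ATT(g, t) parameters.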

831 citations


Posted Content
TL;DR: A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today, and promising applications and successes of meta-learning, such as few-shot learning, reinforcement learning and architecture search, are surveyed.
Abstract: The field of meta-learning, or learning-to-learn, has seen a dramatic rise in interest in recent years. Contrary to conventional approaches to AI where tasks are solved from scratch using a fixed learning algorithm, meta-learning aims to improve the learning algorithm itself, given the experience of multiple learning episodes. This paradigm provides an opportunity to tackle many conventional challenges of deep learning, including data and computation bottlenecks, as well as generalization. This survey describes the contemporary meta-learning landscape. We first discuss definitions of meta-learning and position it with respect to related fields, such as transfer learning and hyperparameter optimization. We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods today. We survey promising applications and successes of meta-learning such as few-shot learning and reinforcement learning. Finally, we discuss outstanding challenges and promising areas for future research.
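A compact way to state the "learning-to-learn" framing the survey formalizes is as a bilevel optimization over a meta-parameter ω (an initialization, a learned optimizer, an architecture, and so on); the notation below is a generic sketch rather than a quotation from the survey:

```latex
\omega^{\ast} = \arg\min_{\omega} \sum_{i=1}^{M}
  \mathcal{L}^{\mathrm{meta}}\bigl(\theta^{\ast}_{i}(\omega),\, \mathcal{D}^{\mathrm{val}}_{i}\bigr),
\qquad
\theta^{\ast}_{i}(\omega) = \arg\min_{\theta}
  \mathcal{L}^{\mathrm{task}}\bigl(\theta,\, \omega,\, \mathcal{D}^{\mathrm{train}}_{i}\bigr),
```

where the outer objective is evaluated across M learning episodes (tasks) and the inner objective is each task's own training problem.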

831 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a photodetector based on the graphene/MoS2 heterostructure is able to provide a high photogain greater than 10^8 and graphene is transferable onto MoS2.
Abstract: Due to its high carrier mobility, broadband absorption, and fast response time, the semi-metallic graphene is attractive for optoelectronics. Another two-dimensional semiconducting material, molybdenum disulfide (MoS2), is also known as light-sensitive. Here we show that a large-area and continuous MoS2 monolayer is achievable using a CVD method and graphene is transferable onto MoS2. We demonstrate that a photodetector based on the graphene/MoS2 heterostructure is able to provide a high photogain greater than 10^8. Our experiments show that the electron-hole pairs are produced in the MoS2 layer after light absorption and subsequently separated across the layers. Contradictory to the expectation based on the conventional built-in electric field model for metal-semiconductor contacts, photoelectrons are injected into the graphene layer rather than trapped in MoS2 due to the presence of a perpendicular effective electric field caused by the combination of the built-in electric field, the applied electrostatic field, and charged impurities or adsorbates, resulting in a tuneable photoresponsivity.

831 citations


Journal ArticleDOI
TL;DR: Access to this growing chemical toolbox of new molecular probes for H2S and related RSS sets the stage for applying these developing technologies to probe reactive sulfur biology in living systems.
Abstract: Hydrogen sulfide (H2S), a gaseous species produced by both bacteria and higher eukaryotic organisms, including mammalian vertebrates, has attracted attention in recent years for its contributions to human health and disease. H2S has been proposed as a cytoprotectant and gasotransmitter in many tissue types, including mediating vascular tone in blood vessels as well as neuromodulation in the brain. The molecular mechanisms dictating how H2S affects cellular signaling and other physiological events remain insufficiently understood. Furthermore, the involvement of H2S in metal-binding interactions and formation of related RSS such as sulfane sulfur may contribute to other distinct signaling pathways. Owing to its widespread biological roles and unique chemical properties, H2S is an appealing target for chemical biology approaches to elucidate its production, trafficking, and downstream function. In this context, reaction-based fluorescent probes offer a versatile set of screening tools to visualize H2S pools in living systems. Three main strategies used in molecular probe development for H2S detection include azide and nitro group reduction, nucleophilic attack, and CuS precipitation. Each of these approaches exploits the strong nucleophilicity and reducing potency of H2S to achieve selectivity over other biothiols. In addition, a variety of methods have been developed for the detection of other reactive sulfur species (RSS), including sulfite and bisulfite, as well as sulfane sulfur species and related modifications such as S-nitrosothiols. Access to this growing chemical toolbox of new molecular probes for H2S and related RSS sets the stage for applying these developing technologies to probe reactive sulfur biology in living systems.

831 citations


Book ChapterDOI
01 Jan 2016

831 citations


Journal ArticleDOI
15 Jan 2015-Cell
TL;DR: By extending guide RNAs to include effector protein recruitment sites, this work constructs modular scaffold RNAs that encode both target locus and regulatory action and applies this approach to flexibly redirect flux through a complex branched metabolic pathway in yeast.

Posted Content
TL;DR: This work proposes Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries, and releases WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets.
Abstract: A significant amount of the world's knowledge is stored in relational databases. However, the ability for users to retrieve facts from a database is limited due to a lack of understanding of query languages such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model leverages the structure of SQL queries to significantly reduce the output space of generated queries. Moreover, we use rewards from in-the-loop query execution over the database to learn a policy to generate unordered parts of the query, which we show are less suitable for optimization via cross entropy loss. In addition, we will publish WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia. This dataset is required to train our model and is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, our model Seq2SQL outperforms attentional sequence to sequence models, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.
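The in-the-loop execution reward is easy to mock up with SQLite: run the generated query and the reference query against the table and reward agreement. The reward values, table schema, and helper name below are illustrative stand-ins, not the exact shaping or data format used in the paper.

```python
import sqlite3

def execution_reward(generated_sql: str, gold_sql: str, rows) -> float:
    """Toy execution-based reward: +1 if the generated query returns the same
    result as the gold query, -2 if it is not executable, -1 otherwise
    (values are illustrative, not the paper's exact shaping)."""
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (player TEXT, team TEXT, goals INTEGER)")
    cur.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)
    try:
        generated = cur.execute(generated_sql).fetchall()
    except sqlite3.Error:
        return -2.0                          # query did not even execute
    gold = cur.execute(gold_sql).fetchall()
    return 1.0 if generated == gold else -1.0

rows = [("Ann", "Red", 7), ("Bo", "Blue", 3), ("Cy", "Red", 5)]
print(execution_reward("SELECT player FROM t WHERE goals > 4",
                       "SELECT player FROM t WHERE team = 'Red'", rows))  # 1.0
```

In the paper this scalar feeds a policy gradient that trains the parts of the query (notably the unordered WHERE conditions) for which token-level cross entropy is a poor fit.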


Posted Content
TL;DR: In this paper, an unsupervised framework was proposed to learn a deep CNN for single view depth prediction without requiring a pre-training stage or annotated ground truth depths, by training the network in a manner analogous to an autoencoder.
Abstract: A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state of art supervised methods for single view depth estimation.
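The core training signal can be sketched in a few lines: predict a depth map for the source image, convert it to a horizontal disparity from the known stereo geometry, inverse-warp the target image with that disparity, and penalize the photometric difference from the source. The nearest-neighbour warp and L1 error below are simplifications of the differentiable warp used inside the paper's CNN training loop, and the focal length, baseline, and sign convention are illustrative assumptions.

```python
import numpy as np

def photometric_loss(source, target, depth, focal=720.0, baseline=0.5):
    """L1 error between the source image and the target image inverse-warped
    by the disparity implied by the predicted depth.
    source, target: (H, W) grayscale images; depth: (H, W) positive depths."""
    H, W = source.shape
    disparity = focal * baseline / depth                 # stereo: d = f * B / Z
    cols = np.arange(W)[None, :].repeat(H, axis=0)
    rows = np.arange(H)[:, None].repeat(W, axis=1)
    # Sample the target at columns shifted by the disparity (the sign depends
    # on which image of the pair is left/right; nearest-neighbour for brevity).
    src_cols = np.clip(np.round(cols - disparity).astype(int), 0, W - 1)
    reconstruction = target[rows, src_cols]
    return float(np.abs(reconstruction - source).mean())

rng = np.random.default_rng(0)
depth = rng.uniform(5.0, 50.0, size=(64, 128))           # stand-in prediction
image = rng.uniform(size=(64, 128))
print(photometric_loss(image, image, depth))             # reconstruction error
```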

Journal ArticleDOI
07 Feb 2018-Nature
TL;DR: A simple and effective strategy to transform bulk natural wood directly into a high-performance structural material with a more than tenfold increase in strength, toughness and ballistic resistance and with greater dimensional stability is reported.
Abstract: Synthetic structural materials with exceptional mechanical performance suffer from either large weight and adverse environmental impact (for example, steels and alloys) or complex manufacturing processes and thus high cost (for example, polymer-based and biomimetic composites). Natural wood is a low-cost and abundant material and has been used for millennia as a structural material for building and furniture construction. However, the mechanical performance of natural wood (its strength and toughness) is unsatisfactory for many advanced engineering structures and applications. Pre-treatment with steam, heat, ammonia or cold rolling followed by densification has led to the enhanced mechanical performance of natural wood. However, the existing methods result in incomplete densification and lack dimensional stability, particularly in response to humid environments, and wood treated in these ways can expand and weaken. Here we report a simple and effective strategy to transform bulk natural wood directly into a high-performance structural material with a more than tenfold increase in strength, toughness and ballistic resistance and with greater dimensional stability. Our two-step process involves the partial removal of lignin and hemicellulose from the natural wood via a boiling process in an aqueous mixture of NaOH and Na2SO3 followed by hot-pressing, leading to the total collapse of cell walls and the complete densification of the natural wood with highly aligned cellulose nanofibres. This strategy is shown to be universally effective for various species of wood. Our processed wood has a specific strength higher than that of most structural metals and alloys, making it a low-cost, high-performance, lightweight alternative.

Book
22 Nov 2016
TL;DR: This special feature about ‘eco-evolutionary dynamics’ brings together biologists from empirical and theoretical backgrounds to bridge the gap between ecology and evolution and provide a series of contributions aimed at quantifying the interactions between these fundamental processes.
Abstract: Evolutionary ecologists and population biologists have recently considered that ecological and evolutionary changes are intimately linked and can occur on the same time-scale. Recent theoretical developments have shown how the feedback between ecological and evolutionary dynamics can be linked, and there are now empirical demonstrations showing that ecological change can lead to rapid evolutionary change. We also have evidence that microevolutionary change can leave an ecological signature. We are at a stage where the integration of ecology and evolution is a necessary step towards major advances in our understanding of the processes that shape and maintain biodiversity. This special feature about ‘eco-evolutionary dynamics’ brings together biologists from empirical and theoretical backgrounds to bridge the gap between ecology and evolution and provide a series of contributions aimed at quantifying the interactions between these fundamental processes.

Posted Content
TL;DR: Several novel deep learning methods for brain abnormality detection, recognition, and segmentation in medical images are explored.
Abstract: This report describes my research activities in the Hasso Plattner Institute and summarizes my Ph.D. plan and several novel, end-to-end trainable approaches for analyzing medical images using deep learning algorithms. In this report, as an example, we explore different novel methods based on deep learning for brain abnormality detection, recognition, and segmentation. This report was prepared for the doctoral consortium at the AIME-2017 conference.

Journal ArticleDOI
TL;DR: Among participants with pulmonary arterial hypertension who had not received previous treatment, initial combination therapy with ambrisentan and tadalafil resulted in a significantly lower risk of clinical-failure events than the risk with ambrisentan or tadalafil monotherapy.
Abstract: The primary analysis included 500 participants; 253 were assigned to the combination-therapy group, 126 to the ambrisentan-monotherapy group, and 121 to the tadalafil-monotherapy group. A primary end-point event occurred in 18%, 34%, and 28% of the participants in these groups, respectively, and in 31% of the pooled-monotherapy group (the two monotherapy groups combined). The hazard ratio for the primary end point in the combination-therapy group versus the pooled-monotherapy group was 0.50 (95% confidence interval [CI], 0.35 to 0.72; P<0.001). At week 24, the combination-therapy group had greater reductions from baseline in N-terminal pro–brain natriuretic peptide levels than did the pooled-monotherapy group (mean change, −67.2% vs. −50.4%; P<0.001), as well as a higher percentage of patients with a satisfactory clinical response (39% vs. 29%; odds ratio, 1.56 [95% CI, 1.05 to 2.32]; P = 0.03) and a greater improvement in the 6-minute walk distance (median change from baseline, 48.98 m vs. 23.80 m; P<0.001). The adverse events that occurred more frequently in the combination-therapy group than in either monotherapy group included peripheral edema, headache, nasal congestion, and anemia. CONCLUSIONS Among participants with pulmonary arterial hypertension who had not received previous treatment, initial combination therapy with ambrisentan and tadalafil resulted in a significantly lower risk of clinical-failure events than the risk with ambrisentan or tadalafil monotherapy. (Funded by Gilead Sciences and GlaxoSmithKline; AMBITION ClinicalTrials.gov number, NCT01178073.)
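As a quick check, the reported odds ratio for a satisfactory clinical response follows directly from the two response percentages (the confidence interval, of course, needs the underlying counts):

```latex
\mathrm{OR} = \frac{0.39 / (1 - 0.39)}{0.29 / (1 - 0.29)} = \frac{0.639}{0.408} \approx 1.56.
```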

Journal ArticleDOI
TL;DR: An alternative theoretical model for explaining the acceptance and use of information system (IS) and information technology (IT) innovations was formalized and then empirically examined using a combination of meta-analysis and structural equation modelling techniques.
Abstract: Based on a critical review of the Unified Theory of Acceptance and Use of Technology (UTAUT), this study first formalized an alternative theoretical model for explaining the acceptance and use of information system (IS) and information technology (IT) innovations. The revised theoretical model was then empirically examined using a combination of meta-analysis and structural equation modelling (MASEM) techniques. The meta-analysis was based on 1600 observations on 21 relationships coded from 162 prior studies on IS/IT acceptance and use. The SEM analysis showed that attitude: was central to behavioural intentions and usage behaviours, partially mediated the effects of exogenous constructs on behavioural intentions, and had a direct influence on usage behaviours. A number of implications for theory and practice are derived based on the findings.

Journal ArticleDOI
09 Oct 2015-Science
TL;DR: It is demonstrated that infrared spectroscopy can be a fast and convenient characterization method with which to directly distinguish and quantify Pt single atoms from nanoparticles, and directly observe that only Pt nanoparticles show activity for carbon monoxide (CO) oxidation and water-gas shift at low temperatures, whereas Pt single atoms behave as spectators.
Abstract: Identification and characterization of catalytic active sites are the prerequisites for an atomic-level understanding of the catalytic mechanism and rational design of high-performance heterogeneous catalysts. Indirect evidence in recent reports suggests that platinum (Pt) single atoms are exceptionally active catalytic sites. We demonstrate that infrared spectroscopy can be a fast and convenient characterization method with which to directly distinguish and quantify Pt single atoms from nanoparticles. In addition, we directly observe that only Pt nanoparticles show activity for carbon monoxide (CO) oxidation and water-gas shift at low temperatures, whereas Pt single atoms behave as spectators. The lack of catalytic activity of Pt single atoms can be partly attributed to the strong binding of CO molecules.

Journal ArticleDOI
TL;DR: Although the gut–lung axis is only beginning to be understood, emerging evidence indicates that there is potential for manipulation of the gut microbiota in the treatment of lung diseases.
Abstract: The microbiota is vital for the development of the immune system and homeostasis. Changes in microbial composition and function, termed dysbiosis, in the respiratory tract and the gut have recently been linked to alterations in immune responses and to disease development in the lungs. In this Opinion article, we review the microbial species that are usually found in healthy gastrointestinal and respiratory tracts, their dysbiosis in disease and interactions with the gut-lung axis. Although the gut-lung axis is only beginning to be understood, emerging evidence indicates that there is potential for manipulation of the gut microbiota in the treatment of lung diseases.

Journal ArticleDOI
14 Oct 2016-Science
TL;DR: If the Integrated Assessment Models informing policy-makers assume the large-scale use of negative-emission technologies and they are not deployed or are unsuccessful at removing CO2 from the atmosphere at the levels assumed, society will be locked into a high-temperature pathway.
Abstract: In December 2015, member states of the United Nations Framework Convention on Climate Change (UNFCCC) adopted the Paris Agreement, which aims to hold the increase in the global average temperature to below 2°C and to pursue efforts to limit the temperature increase to 1.5°C. The Paris Agreement requires that anthropogenic greenhouse gas emission sources and sinks are balanced by the second half of this century. Because some nonzero sources are unavoidable, this leads to the abstract concept of “negative emissions,” the removal of carbon dioxide (CO2) from the atmosphere through technical means. The Integrated Assessment Models (IAMs) informing policy-makers assume the large-scale use of negative-emission technologies. If we rely on these and they are not deployed or are unsuccessful at removing CO2 from the atmosphere at the levels assumed, society will be locked into a high-temperature pathway.

Journal ArticleDOI
Joan B. Soriano, Parkes J. Kendrick, Katherine R. Paulson, Vinay Gupta, +311 more (178 institutions)
TL;DR: It is shown that chronic respiratory diseases remain a leading cause of death and disability worldwide, with growth in absolute numbers but sharp declines in several age-standardised estimators since 1990.

Posted Content
TL;DR: In the absence of externalities, increasing returns to scale, and uncertainty, a perfectly competitive market system yields a Pareto optimal allocation of resources; this underlies the view that individual self-interest is compatible with society's interest.
Abstract: ECONOMICS, we all recite, deals with allocation of limited resources towards satisfaction of unlimited wants. Resources are typically identified as land, labor, and capital plus a technology that determines their transformation into consumer goods. Disparity between the available goods and services and the desired gives rise to scarcity and the question of what, how, and for whom to produce. The focus then shifts to description and evaluation of alternative resource allocation mechanisms for making the choices. The Pareto criterion, by which an allocation of resources is deemed efficient if any reallocation improving the position of some individual worsens the position of others, is a commonly employed gauge of a mechanism's performance. In the absence of externalities, increasing returns to scale, and uncertainty, a perfectly competitive market system yields a Pareto optimal allocation of resources; this underlies the view that individual self-interest is compatible with society's interest. The further conclusion that Pareto optimality may not be achieved via the market system in the presence of monopoly elements provides an economic rationale for antitrust laws. The objective of a resource allocation mechanism appears to be, according to the analysis described above, to make the best of available resources. The alternative objective of relaxing constraints through expanding the resource base or developing new technology seems to be beyond its scope. Thus, until rather recently, technical advance had been regarded, in the mainstream of economic theory, as unmotivated by the quest for profits and substantially unaffected by resource allocation. Instead, as J. Schmookler observed, technology had been viewed as a parameter like the weather, affecting the outcome of resource allocations but itself unaffected by them [84, 1965]. Evidence that technological progress has significantly contributed to growth in productivity, together with a substantial increase in research and development activity, largely financed by government and carried out by industry (see F. Machlup [51, 1962]), may have spurred reconsideration of this view. Once technical advance is regarded as an economic variable, it is natural to in-
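The Pareto criterion invoked here has a standard formal statement; in the notation below (not the article's own), a feasible allocation x is Pareto optimal if no feasible reallocation makes some individual better off without making another worse off:

```latex
\nexists\, x' \in F:\quad
u_i(x') \ge u_i(x)\ \ \forall i
\quad\text{and}\quad
u_j(x') > u_j(x)\ \ \text{for some } j,
```

where F is the set of feasible allocations and $u_i$ is individual i's utility.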

Journal ArticleDOI
TL;DR: Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits by resampling randomized circuits according to a quasiprobability distribution.
Abstract: Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
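The first scheme (extrapolation to the zero-noise limit) is straightforward to mock up classically: evaluate the noisy expectation value at several deliberately amplified noise strengths and extrapolate the fitted polynomial back to zero noise. The noise model, coefficients, and shot count below are synthetic stand-ins for a real device, and the quasiprobability resampling scheme is not shown.

```python
import numpy as np

def noisy_expectation(scale, rng, e0=0.73, a=-0.15, b=0.02, shots=20000):
    """Synthetic stand-in for a hardware measurement at noise amplification
    `scale`: the true value e0 plus a polynomial noise bias and shot noise."""
    mean = e0 + a * scale + b * scale ** 2
    return mean + rng.normal(scale=1.0 / np.sqrt(shots))

rng = np.random.default_rng(7)
scales = np.array([1.0, 2.0, 3.0])          # noise amplification factors
values = np.array([noisy_expectation(s, rng) for s in scales])

# Richardson-style extrapolation: fit a low-order polynomial in the noise
# scale and read off its value at zero noise.
coeffs = np.polyfit(scales, values, deg=2)
mitigated = np.polyval(coeffs, 0.0)

print("raw estimate (scale = 1):", values[0])
print("zero-noise extrapolated:", mitigated)   # should land near 0.73
```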

Journal ArticleDOI
TL;DR: The CRISPR–Cas toolkit has been expanding to include single-base editing enzymes, targeting RNA and fusing inactive Cas proteins to effectors that regulate various nuclear processes, and the new advances are considerably improving the authors' understanding of biological processes and are propelling CRISpr–Cas-based tools towards clinical use in gene and cell therapies.
Abstract: The prokaryote-derived CRISPR-Cas genome editing systems have transformed our ability to manipulate, detect, image and annotate specific DNA and RNA sequences in living cells of diverse species. The ease of use and robustness of this technology have revolutionized genome editing for research ranging from fundamental science to translational medicine. Initial successes have inspired efforts to discover new systems for targeting and manipulating nucleic acids, including those from Cas9, Cas12, Cascade and Cas13 orthologues. Genome editing by CRISPR-Cas can utilize non-homologous end joining and homology-directed repair for DNA repair, as well as single-base editing enzymes. In addition to targeting DNA, CRISPR-Cas-based RNA-targeting tools are being developed for research, medicine and diagnostics. Nuclease-inactive and RNA-targeting Cas proteins have been fused to a plethora of effector proteins to regulate gene expression, epigenetic modifications and chromatin interactions. Collectively, the new advances are considerably improving our understanding of biological processes and are propelling CRISPR-Cas-based tools towards clinical use in gene and cell therapies.

Journal ArticleDOI
TL;DR: The principles behind the interface to continuous domain spatial models in the R-INLA software package for R are described and the integrated nested Laplace approximation approach proposed by Rue, Martino, and Chopin (2009) is a computationally effective alternative to MCMC for Bayesian inference.
Abstract: The principles behind the interface to continuous domain spatial models in the R-INLA software package for R are described. The integrated nested Laplace approximation (INLA) approach proposed by Rue, Martino, and Chopin (2009) is a computationally effective alternative to MCMC for Bayesian inference. INLA is designed for latent Gaussian models, a very wide and flexible class of models ranging from (generalized) linear mixed to spatial and spatio-temporal models. Combined with the stochastic partial differential equation approach (SPDE, Lindgren, Rue, and Lindstrom 2011), one can accommodate all kinds of geographically referenced data, including areal and geostatistical ones, as well as spatial point process data. The implementation interface covers stationary spatial models, non-stationary spatial models, and also spatio-temporal models, and is applicable in epidemiology, ecology, environmental risk assessment, as well as general geostatistics.
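The "latent Gaussian model" class that INLA targets can be written as a three-stage hierarchy (generic notation, not quoted from the article): observations that depend on a latent Gaussian field only through a linear predictor, a Gaussian Markov random field prior on that field, and a small set of hyperparameters:

```latex
y_i \mid \boldsymbol{x}, \boldsymbol{\theta} \sim \pi\bigl(y_i \mid \eta_i, \boldsymbol{\theta}\bigr),
\qquad
\boldsymbol{\eta} = \boldsymbol{A}\boldsymbol{x},
\qquad
\boldsymbol{x} \mid \boldsymbol{\theta} \sim \mathcal{N}\bigl(\boldsymbol{0}, \boldsymbol{Q}(\boldsymbol{\theta})^{-1}\bigr),
\qquad
\boldsymbol{\theta} \sim \pi(\boldsymbol{\theta}),
```

where Q(θ) is a sparse precision matrix. INLA approximates the posterior marginals of x and θ without MCMC, and the SPDE approach supplies Q(θ) for continuous-domain spatial fields.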

Book
11 Apr 2016
TL;DR: Milanovic presents a bold new account of the dynamics that drive inequality on a global scale, drawing on vast data sets and cutting-edge research, and explains the benign and malign forces that make inequality rise and fall within and among nations.
Abstract: One of the world's leading economists of inequality, Branko Milanovic presents a bold new account of the dynamics that drive inequality on a global scale. Drawing on vast data sets and cutting-edge research, he explains the benign and malign forces that make inequality rise and fall within and among nations. He also reveals who has been helped the most by globalization, who has been held back, and what policies might tilt the balance toward economic justice. "Global Inequality" takes us back hundreds of years, and as far around the world as data allow, to show that inequality moves in cycles, fueled by war and disease, technological disruption, access to education, and redistribution. The recent surge of inequality in the West has been driven by the revolution in technology, just as the Industrial Revolution drove inequality 150 years ago. But even as inequality has soared "within" nations, it has fallen dramatically "among" nations, as middle-class incomes in China and India have drawn closer to the stagnating incomes of the middle classes in the developed world. A more open migration policy would reduce global inequality even further. Both American and Chinese inequality seems well entrenched and self-reproducing, though it is difficult to predict if current trends will be derailed by emerging plutocracy, populism, or war. For those who want to understand how we got where we are, where we may be heading, and what policies might help reverse that course, Milanovic's compelling explanation is the ideal place to start.

Proceedings ArticleDOI
02 Feb 2017
TL;DR: A deep model to learn item properties and user behaviors jointly from review text, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers.
Abstract: A large amount of information exists in reviews written by users. This source of information has been ignored by most of the current recommender systems while it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model to learn item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers. One of the networks focuses on learning user behaviors exploiting reviews written by the user, and the other one learns item properties from the reviews written for the item. A shared layer is introduced on the top to couple these two networks together. The shared layer enables latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets.
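A minimal PyTorch sketch of the two-tower idea (two parallel text CNNs whose outputs meet in a shared layer) is shown below. For brevity the shared layer is a plain dot product plus a bias rather than the factorization-machine coupling described in the paper, and all sizes, names, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextTower(nn.Module):
    """Embeds a concatenated review document and summarizes it with a 1-D CNN."""
    def __init__(self, vocab_size, emb_dim=64, n_filters=50, latent_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.proj = nn.Linear(n_filters, latent_dim)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)         # (batch, emb_dim, seq_len)
        x = F.relu(self.conv(x)).max(dim=2).values   # max-pool over time
        return self.proj(x)                          # (batch, latent_dim)

class TwoTowerReviewModel(nn.Module):
    """User tower reads the user's reviews, item tower reads the item's reviews;
    a simple shared layer (dot product plus bias) couples the two latent vectors.
    The original model uses a factorization machine at this point instead."""
    def __init__(self, vocab_size):
        super().__init__()
        self.user_tower = TextTower(vocab_size)
        self.item_tower = TextTower(vocab_size)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, user_reviews, item_reviews):
        u = self.user_tower(user_reviews)
        v = self.item_tower(item_reviews)
        return (u * v).sum(dim=1) + self.bias        # predicted rating

model = TwoTowerReviewModel(vocab_size=10000)
user_docs = torch.randint(0, 10000, (8, 200))        # 8 users, 200 tokens each
item_docs = torch.randint(0, 10000, (8, 200))
print(model(user_docs, item_docs).shape)             # torch.Size([8])
```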

Journal ArticleDOI
TL;DR: The mechanisms and important role of pro-inflammatory cytokines in the pathogenesis of AD, and ongoing drugs targeting pro-inflammatory cytokines for therapeutic modulation, are discussed.
Abstract: Alzheimer’s disease (AD) is a progressive neurodegenerative disorder of the brain, which is characterized by the formation of extracellular amyloid plaques (or senile plaques) and intracellular neurofibrillary tangles. However, increasing evidence demonstrates that neuroinflammatory changes, including chronic microgliosis, are key pathological components of AD. Microglia, the resident immune cells of the brain, constantly survey the microenvironment under physiological conditions. In AD, deposition of β-amyloid (Aβ) peptide initiates a spectrum of cerebral neuroinflammation mediated by activated microglia. Activated microglia may play a potentially detrimental role by eliciting the expression of pro-inflammatory cytokines such as interleukin (IL)-1β, IL-6, and tumor necrosis factor-α (TNF-α) that influence the surrounding brain tissue. Emerging studies have demonstrated that up-regulation of pro-inflammatory cytokines plays multiple roles in both neurodegeneration and neuroprotection. Understanding the pro-inflammatory cytokine signaling pathways involved in the regulation of AD is crucial to the development of strategies for therapy. This review will discuss the mechanisms and important role of pro-inflammatory cytokines in the pathogenesis of AD, and ongoing drugs targeting pro-inflammatory cytokines for therapeutic modulation.