Journal ArticleDOI
TL;DR: A significant association between respiratory infections, especially influenza, and acute myocardial infarction is found.
Abstract: Background Acute myocardial infarction can be triggered by acute respiratory infections. Previous studies have suggested an association between influenza and acute myocardial infarction, but those studies used nonspecific measures of influenza infection or study designs that were susceptible to bias. We evaluated the association between laboratory-confirmed influenza infection and acute myocardial infarction. Methods We used the self-controlled case-series design to evaluate the association between laboratory-confirmed influenza infection and hospitalization for acute myocardial infarction. We used various high-specificity laboratory methods to confirm influenza infection in respiratory specimens, and we ascertained hospitalization for acute myocardial infarction from administrative data. We defined the “risk interval” as the first 7 days after respiratory specimen collection and the “control interval” as 1 year before and 1 year after the risk interval. Results We identified 364 hospitalizations...
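As a rough illustration of the self-controlled case-series logic described above (a sketch with hypothetical counts, not the study's data or code), the relative incidence compares the event rate per person-day in the 7-day risk interval with the rate per person-day in the control interval:

# Illustrative sketch with hypothetical counts (not the study's data): the
# self-controlled case-series relative incidence is the event rate per day in
# the risk interval divided by the event rate per day in the control interval.
def relative_incidence(risk_events, risk_days, control_events, control_days):
    return (risk_events / risk_days) / (control_events / control_days)

# e.g. 300 cases, each contributing a 7-day risk interval and ~730 control days
print(relative_incidence(20, 7 * 300, 344, 730 * 300))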

793 citations


Journal ArticleDOI
TL;DR: The physicochemical mechanisms underlying protein–ligand binding, including the binding kinetics, thermodynamic concepts and relationships, and binding driving forces, are introduced and rationalized.
Abstract: Molecular recognition, which is the process of biological macromolecules interacting with each other or various small molecules with a high specificity and affinity to form a specific complex, constitutes the basis of all processes in living organisms. Proteins, an important class of biological macromolecules, realize their functions through binding to themselves or other molecules. A detailed understanding of the protein–ligand interactions is therefore central to understanding biology at the molecular level. Moreover, knowledge of the mechanisms responsible for the protein-ligand recognition and binding will also facilitate the discovery, design, and development of drugs. In the present review, first, the physicochemical mechanisms underlying protein–ligand binding, including the binding kinetics, thermodynamic concepts and relationships, and binding driving forces, are introduced and rationalized. Next, three currently existing protein-ligand binding models—the “lock-and-key”, “induced fit”, and “conformational selection”—are described and their underlying thermodynamic mechanisms are discussed. Finally, the methods available for investigating protein–ligand binding affinity, including experimental and theoretical/computational approaches, are introduced, and their advantages, disadvantages, and challenges are discussed.
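As a small illustration of the thermodynamic relationships mentioned above (a hedged sketch, not taken from the review), the standard binding free energy follows from a measured dissociation constant via ΔG° = RT ln(Kd/c°):

# Minimal sketch: standard binding free energy from a dissociation constant,
# Delta G = R*T*ln(Kd / c0), with c0 the 1 M standard-state concentration.
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)
T = 298.15           # temperature, K
C0 = 1.0             # standard-state concentration, mol/L

def binding_free_energy(kd_molar):
    """Standard binding free energy in kJ/mol for a given Kd (mol/L)."""
    return R * T * math.log(kd_molar / C0)

print(round(binding_free_energy(1e-9), 1))   # about -51.4 kJ/mol for a 1 nM ligand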

793 citations


Journal ArticleDOI
TL;DR: The observation of GW170817 and its electromagnetic counterpart implies that gravitational waves travel at the speed of light, with deviations smaller than a few × 10^{-15}, and it is shown that the deduced relations among operators do not introduce further tuning of the models, since they are stable under quantum corrections.
Abstract: The observation of GW170817 and its electromagnetic counterpart implies that gravitational waves travel at the speed of light, with deviations smaller than a few×10^{-15}. We discuss the consequences of this experimental result for models of dark energy and modified gravity characterized by a single scalar degree of freedom. To avoid tuning, the speed of gravitational waves must be unaffected not only for our particular cosmological solution but also for nearby solutions obtained by slightly changing the matter abundance. For this to happen, the coefficients of various operators must satisfy precise relations that we discuss both in the language of the effective field theory of dark energy and in the covariant one, for Horndeski, beyond Horndeski, and degenerate higher-order theories. The simplification is dramatic: of the three functions describing quartic and quintic beyond Horndeski theories, only one remains and reduces to a standard conformal coupling to the Ricci scalar for Horndeski theories. We show that the deduced relations among operators do not introduce further tuning of the models, since they are stable under quantum corrections.
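A back-of-the-envelope check of the quoted bound (an order-of-magnitude sketch, not the collaborations' analysis): with an electromagnetic delay of Δt ≈ 1.7 s over a distance d ≈ 40 Mpc and assuming near-simultaneous emission,

\frac{|c_{\rm gw} - c|}{c} \sim \frac{c\,\Delta t}{d} \approx \frac{(3\times 10^{8}\,\mathrm{m\,s^{-1}})(1.7\,\mathrm{s})}{1.2\times 10^{24}\,\mathrm{m}} \approx 4\times 10^{-16},

and allowing for a possible intrinsic emission delay of up to ~10 s relaxes this to the few × 10^{-15} level quoted above.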

793 citations


Journal ArticleDOI
TL;DR: In this paper, a review of the state of the art in solid lithium and sodium ion conductors, with an emphasis on inorganic materials, is presented, where correlations between the composition, structure and conductivity of these solid electrolytes are illustrated and strategies to boost ion conductivity are proposed.
Abstract: Among the contenders in the new generation energy storage arena, all-solid-state batteries (ASSBs) have emerged as particularly promising, owing to their potential to exhibit high safety, high energy density and long cycle life. The relatively low conductivity of most solid electrolytes and the often sluggish charge transfer kinetics at the interface between solid electrolyte and electrode layers are considered to be amongst the major challenges facing ASSBs. This review presents an overview of the state of the art in solid lithium and sodium ion conductors, with an emphasis on inorganic materials. The correlations between the composition, structure and conductivity of these solid electrolytes are illustrated and strategies to boost ion conductivity are proposed. In particular, the high grain boundary resistance of solid oxide electrolytes is identified as a challenge. Critical issues of solid electrolytes beyond ion conductivity are also discussed with respect to their potential problems for practical applications. The chemical and electrochemical stabilities of solid electrolytes are discussed, as are chemo-mechanical effects which have been overlooked to some extent. Furthermore, strategies to improve the practical performance of ASSBs, including optimizing the interface between solid electrolytes and electrode materials to improve stability and lower charge transfer resistance are also suggested.
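To make the composition-structure-conductivity discussion concrete (an illustrative sketch with hypothetical parameters, not values from the review), solid-electrolyte ionic conductivity is commonly described by an Arrhenius-type law, sigma(T) = (A/T) * exp(-Ea / (kB*T)):

# Illustrative sketch: Arrhenius-type temperature dependence of ionic
# conductivity, sigma(T) = (A / T) * exp(-Ea / (kB * T)).
import math

KB = 8.617333262e-5  # Boltzmann constant, eV/K

def ionic_conductivity(T, prefactor_A, activation_energy_eV):
    """Ionic conductivity (S/cm) for a hypothetical electrolyte."""
    return (prefactor_A / T) * math.exp(-activation_energy_eV / (KB * T))

# hypothetical parameters, chosen only for illustration
print(ionic_conductivity(300.0, 1.0e5, 0.30))   # ~3e-3 S/cm at room temperature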

793 citations


Journal ArticleDOI
15 Jul 2019
TL;DR: This paper will provide an overview of applications where deep learning is used at the network edge, discuss various approaches for quickly executing deep learning inference across a combination of end devices, edge servers, and the cloud, and describe the methods for training deep learning models across multiple edge devices.
Abstract: Deep learning is currently widely used in a variety of applications, including computer vision and natural language processing. End devices, such as smartphones and Internet-of-Things sensors, are generating data that need to be analyzed in real time using deep learning or used to train deep learning models. However, deep learning inference and training require substantial computation resources to run quickly. Edge computing, where a fine mesh of compute nodes are placed close to end devices, is a viable way to meet the high computation and low-latency requirements of deep learning on edge devices and also provides additional benefits in terms of privacy, bandwidth efficiency, and scalability. This paper aims to provide a comprehensive review of the current state of the art at the intersection of deep learning and edge computing. Specifically, it will provide an overview of applications where deep learning is used at the network edge, discuss various approaches for quickly executing deep learning inference across a combination of end devices, edge servers, and the cloud, and describe the methods for training deep learning models across multiple edge devices. It will also discuss open challenges in terms of systems performance, network technologies and management, benchmarks, and privacy. The reader will take away the following concepts from this paper: understanding scenarios where deep learning at the network edge can be useful, understanding common techniques for speeding up deep learning inference and performing distributed training on edge devices, and understanding recent trends and opportunities.
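A toy sketch of the inference-placement trade-off discussed above (all function names and numbers are hypothetical, not from the paper): compare rough latency estimates for running a model on the end device, an edge server, or the cloud, where the remote options add the time needed to upload the input.

# Minimal sketch: pick where to run inference by comparing rough latency
# estimates; remote placements pay an upload cost for the input payload.
def estimated_latency_ms(compute_ms, payload_kb, uplink_kbps):
    # on-platform compute time plus time to transmit the input over the uplink
    return compute_ms + (payload_kb * 8.0) / uplink_kbps * 1000.0

options = {
    "device": estimated_latency_ms(compute_ms=180.0, payload_kb=0.0, uplink_kbps=1.0),
    "edge": estimated_latency_ms(compute_ms=25.0, payload_kb=150.0, uplink_kbps=20000.0),
    "cloud": estimated_latency_ms(compute_ms=10.0, payload_kb=150.0, uplink_kbps=5000.0),
}
print(min(options, key=options.get), options)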

793 citations


Journal ArticleDOI
TL;DR: In the 1980s and 1990s, cancer drug resistance was a popular subdiscipline that permeated many therapeutic areas of oncology research, and numerous meetings and symposia were held.
Abstract: The importance of thiol-mediated detoxification of anticancer drugs that produce toxic electrophiles has been of considerable interest to many investigators. Glutathione and glutathione S-transferases (GST) are the focus of much attention in characterizing drug resistant cells. However, ambiguous and sometimes conflicting data have complicated the field. This article attempts to clarify some of the confusion. The following observations are well established: (a) tumors express high levels of GST, especially GST psi, although the isozyme components vary quite markedly between tissues and the isozymes are inducible; (b) nitrogen mustards are good substrates for the GST alpha family of isozymes which are frequently overexpressed in cells with acquired resistance to these drugs; (c) most drugs of the multidrug-resistant phenotype have not been shown to be GST substrates and although GST psi is frequently overexpressed in multidrug-resistant cells, most indications are that this is an accompaniment to, rather than a cause of, the resistant phenotype; (d) transfection of GST complementary DNAs has produced some lines with increased resistance to alkylating agents. Most studies of the relationships between GST and resistance have overlooked the potential importance of other enzymes involved in the maintenance of cellular glutathione homeostasis, and this has complicated data interpretation. Translational research aimed at applying our knowledge of glutathione pathways has produced preclinical and clinical testing of some glutathione and GST inhibitors, with some encouraging preliminary results. In brief, GSTs are important determinants of drug response for some, not all, anticancer drugs. Caution should be encouraged in assessing cause/effect relationships between GST overexpression and resistance mechanisms.

793 citations


Proceedings ArticleDOI
08 Aug 2019
TL;DR: The Large Vocabulary Instance Segmentation (LVIS) dataset is a new large-scale dataset for instance segmentation, planned to contain 2.2 million high-quality segmentation masks for over 1000 entry-level object categories in 164k images.
Abstract: Progress on object detection is enabled by datasets that focus the research community’s attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced ‘el-vis’): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect 2.2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. LVIS is available at http://www.lvisdataset.org.

793 citations


Journal ArticleDOI
TL;DR: In this paper, a model of thriving through relationships is presented to provide a theoretical foundation for identifying the specific interpersonal processes that underlie the effects of close relationships on thriving, highlighting two life contexts through which people may potentially thrive (coping successfully with life's adversities and actively pursuing life opportunities for growth and development).
Abstract: Close and caring relationships are undeniably linked to health and well-being at all stages in the life span. Yet the specific pathways through which close relationships promote optimal well-being are not well understood. In this article, we present a model of thriving through relationships to provide a theoretical foundation for identifying the specific interpersonal processes that underlie the effects of close relationships on thriving. This model highlights two life contexts through which people may potentially thrive (coping successfully with life's adversities and actively pursuing life opportunities for growth and development), it proposes two relational support functions that are fundamental to the experience of thriving in each life context, and it identifies mediators through which relational support is likely to have long-term effects on thriving. This perspective highlights the need for researchers to take a new look at social support by conceptualizing it as an interpersonal process with a focus on thriving.

792 citations


Journal ArticleDOI
TL;DR: This study aimed to investigate the incidence and mortality of breast cancer in the world using age-specific incidence and mortality rates for the year 2012 acquired from the global cancer project (GLOBOCAN 2012) as well as data about incidence and mortality of the cancer based on national reports.
Abstract: Breast cancer is the most common malignancy in women around the world. Information on the incidence and mortality of breast cancer is essential for planning health measures. This study aimed to investigate the incidence and mortality of breast cancer in the world using age-specific incidence and mortality rates for the year 2012 acquired from the global cancer project (GLOBOCAN 2012) as well as data about incidence and mortality of the cancer based on national reports. It was estimated that 1,671,149 new cases of breast cancer were identified and 521,907 cases of deaths due to breast cancer occurred in the world in 2012. According to GLOBOCAN, it is the most common cancer in women, accounting for 25.1% of all cancers. Breast cancer incidence in developed countries is higher, while relative mortality is greatest in less developed countries. Education of women is suggested in all countries for early detection and treatment. Plans for the control and prevention of this cancer must be a high priority for health policy makers; also, it is necessary to increase awareness of risk factors and early detection in less developed countries.

792 citations


Journal ArticleDOI
TL;DR: This time-frame documents an early phylogenetic proliferation that led to the establishment of major angiosperm lineages, and the origin of over half of extant families, in the Cretaceous.
Abstract: The establishment of modern terrestrial life is indissociable from angiosperm evolution. While available molecular clock estimates of angiosperm age range from the Paleozoic to the Late Cretaceous, the fossil record is consistent with angiosperm diversification in the Early Cretaceous. The time-frame of angiosperm evolution is here estimated using a sample representing 87% of families and sequences of five plastid and nuclear markers, implementing penalized likelihood and Bayesian relaxed clocks. A literature-based review of the palaeontological record yielded calibrations for 137 phylogenetic nodes. The angiosperm crown age was bound within a confidence interval calculated with a method that considers the fossil record of the group. An Early Cretaceous crown angiosperm age was estimated with high confidence. Magnoliidae, Monocotyledoneae and Eudicotyledoneae diversified synchronously 135-130 million yr ago (Ma); Pentapetalae is 126-121 Ma; and Rosidae (123-115 Ma) preceded Asteridae (119-110 Ma). Family stem ages are continuously distributed between c. 140 and 20 Ma. This time-frame documents an early phylogenetic proliferation that led to the establishment of major angiosperm lineages, and the origin of over half of extant families, in the Cretaceous. While substantial amounts of angiosperm morphological and functional diversity have deep evolutionary roots, extant species richness was probably acquired later.

792 citations


Journal ArticleDOI
25 Jan 2016-PLOS ONE
TL;DR: The higher BP in SSA is maintained over decades, suggesting limited efficacy of prevention strategies in this group in Europe, and the lower BP in Muslim populations suggests that as yet untapped lifestyle and behavioral habits may reveal advantages towards the development of hypertension.
Abstract: Background: People of Sub-Saharan African (SSA) and South Asian (SA) ethnic minorities living in Europe have a higher risk of stroke than native Europeans (EU). The study objective is to provide an assessment of gender-specific absolute differences in office systolic (SBP) and diastolic (DBP) blood pressure (BP) levels between SSA, SA, and EU. Methods and Findings: We performed a systematic review and meta-analysis of observational studies conducted in Europe that examined BP in non-selected adult SSA, SA and EU subjects. Medline, PubMed, Embase, Web of Science, and Scopus were searched from their inception through January 31st 2015, for relevant articles. Outcome measures were mean SBP and DBP differences between minorities and EU, using a random effects model and tested for heterogeneity. Twenty-one studies involving 9,070 SSA, 18,421 SA, and 130,380 EU were included. Compared with EU, SSA had higher values of both SBP (3.38 mmHg, 95% CI 1.28 to 5.48 mmHg; and 6.00 mmHg, 95% CI 2.22 to 9.78 in men and women respectively) and DBP (3.29 mmHg, 95% CI 1.80 to 4.78; 5.35 mmHg, 95% CI 3.04 to 7.66). SA had lower SBP than EU (-4.57 mmHg, 95% CI -6.20 to -2.93; -2.97 mmHg, 95% CI -5.45 to -0.49) but similar DBP values. Meta-analysis by subgroup showed that SA originating from countries where Islam is the main religion had lower SBP and DBP values than EU. In multivariate meta-regression analyses, the SBP difference between minorities and EU populations was influenced by panethnicity and diabetes prevalence. Conclusions: 1) The higher BP in SSA is maintained over decades, suggesting limited efficacy of prevention strategies in this group in Europe; 2) The lower BP in Muslim populations suggests that as yet untapped lifestyle and behavioral habits may reveal advantages towards the development of hypertension; 3) The additive effect of diabetes emphasizes the need for new strategies for the control of hypertension in groups with a high prevalence of diabetes.
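A minimal sketch of the random-effects pooling used for such mean blood-pressure differences (DerSimonian-Laird method, with hypothetical study inputs rather than the review's data):

# Illustrative sketch: DerSimonian-Laird random-effects pooling of mean
# differences (e.g. SBP, in mmHg) from a handful of hypothetical studies.
import math

def random_effects_pool(effects, variances):
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical SBP differences (mmHg) and their within-study variances
print(random_effects_pool([2.5, 4.1, 3.0], [1.2, 0.8, 2.0]))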

Journal ArticleDOI
TL;DR: The authors aim to bring parsimony to a field that includes interventions with different names but common features, thus improving understanding and choice-making among families, service providers and referring agencies.
Abstract: Earlier autism diagnosis, the importance of early intervention, and development of specific interventions for young children have contributed to the emergence of similar, empirically supported, autism interventions that represent the merging of applied behavioral and developmental sciences. “Naturalistic Developmental Behavioral Interventions (NDBI)” are implemented in natural settings, involve shared control between child and therapist, utilize natural contingencies, and use a variety of behavioral strategies to teach developmentally appropriate and prerequisite skills. We describe the development of NDBIs, their theoretical bases, empirical support, requisite characteristics, common features, and suggest future research needs. We wish to bring parsimony to a field that includes interventions with different names but common features thus improving understanding and choice-making among families, service providers and referring agencies.

Journal ArticleDOI
TL;DR: In this article, a review of the current status of technology deployment and recommendations for future remediation research is presented. The authors also elucidate and compare the technologies currently being applied for remediation of heavy metal(loid)-contaminated soils, as well as the economic aspects of soil remediation for the different techniques.

Journal ArticleDOI
TL;DR: Social disconnectedness predicted higher subsequent perceived isolation, which in turn predicted higher depression symptoms and anxiety symptoms, and the reverse pathways were statistically supported as well, suggesting bi-directional influences.
Abstract: Background: Research indicates that social isolation and loneliness increase the risk of mental disorders, but less is known about the distinct contributions of different aspects of isolation. We aimed to distinguish the pathways through which social disconnectedness (eg, small social network, infrequent social interaction) and perceptions of social isolation (eg, loneliness, perceived lack of support) contribute to anxiety and depression symptom severity in community-residing older adults aged 57–85 years at baseline. Methods: We did a longitudinal mediation analysis with data from the National Social Life, Health, and Aging Project (NSHAP). The study included individuals from the USA born between 1920 and 1947. Validated measures on social disconnectedness, perceived isolation, and depression and anxiety symptoms were used. Structural equation modelling was used to construct complete longitudinal path models. Findings: Using data from 3005 adults aged 57–85 years, we identified two significant longitudinal mediation patterns with symptoms of depression, and two with anxiety symptoms. Overall, social disconnectedness predicted higher subsequent perceived isolation (β=0·09; p ...). Interpretation: Social network structure and function are strongly intertwined with anxiety and depression symptoms in the general population of older adults. Public health initiatives could reduce perceived isolation by facilitating social network integration and participation in community activities, thereby protecting against the development of affective disorders. Funding: Nordea-fonden.

Journal ArticleDOI
TL;DR: This article reports a systematic review of peer-reviewed and grey literature on how monkeypox epidemiology has evolved, with particular emphasis on the number of confirmed, probable, and/or possible cases, age at presentation, mortality, and geographical spread.
Abstract: Monkeypox, a zoonotic disease caused by an orthopoxvirus, results in a smallpox-like disease in humans. Since monkeypox in humans was initially diagnosed in 1970 in the Democratic Republic of the Congo (DRC), it has spread to other regions of Africa (primarily West and Central), and cases outside Africa have emerged in recent years. We conducted a systematic review of peer-reviewed and grey literature on how monkeypox epidemiology has evolved, with particular emphasis on the number of confirmed, probable, and/or possible cases, age at presentation, mortality, and geographical spread. The review is registered with PROSPERO (CRD42020208269). We identified 48 peer-reviewed articles and 18 grey literature sources for data extraction. The number of human monkeypox cases has been on the rise since the 1970s, with the most dramatic increases occurring in the DRC. The median age at presentation has increased from 4 (1970s) to 21 years (2010–2019). There was an overall case fatality rate of 8.7%, with a significant difference between clades—Central African 10.6% (95% CI: 8.4%– 13.3%) vs. West African 3.6% (95% CI: 1.7%– 6.8%). Since 2003, import- and travel-related spread outside of Africa has occasionally resulted in outbreaks. Interactions/activities with infected animals or individuals are risk behaviors associated with acquiring monkeypox. Our review shows an escalation of monkeypox cases, especially in the highly endemic DRC, a spread to other countries, and a growing median age from young children to young adults. These findings may be related to the cessation of smallpox vaccination, which provided some cross-protection against monkeypox, leading to increased human-to-human transmission. The appearance of outbreaks beyond Africa highlights the global relevance of the disease. Increased surveillance and detection of monkeypox cases are essential tools for understanding the continuously changing epidemiology of this resurging disease.
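For readers unfamiliar with the case-fatality figures quoted above, a small sketch (hypothetical counts and a normal-approximation interval, not the authors' exact method) of computing a case-fatality rate with a 95% confidence interval:

# Illustrative sketch: case-fatality rate with a normal-approximation 95% CI.
import math

def cfr_with_ci(deaths, cases):
    p = deaths / cases
    se = math.sqrt(p * (1.0 - p) / cases)
    return p, (p - 1.96 * se, p + 1.96 * se)

# hypothetical counts, for illustration only
print(cfr_with_ci(53, 500))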

Proceedings Article
09 Jul 2016
TL;DR: A novel method based on recurrent neural networks learns continuous representations of microblog events for identifying rumors, and detects rumors more quickly and accurately than existing techniques, including the leading online rumor-debunking services.
Abstract: Microblogging platforms are an ideal place for spreading rumors and automatically debunking rumors is a crucial problem. To detect rumors, existing approaches have relied on hand-crafted features for employing machine learning algorithms that require daunting manual effort. Upon facing a dubious claim, people dispute its truthfulness by posting various cues over time, which generates long-distance dependencies of evidence. This paper presents a novel method that learns continuous representations of microblog events for identifying rumors. The proposed model is based on recurrent neural networks (RNN) for learning the hidden representations that capture the variation of contextual information of relevant posts over time. Experimental results on datasets from two real-world microblog platforms demonstrate that (1) the RNN method outperforms state-of-the-art rumor detection models that use hand-crafted features; (2) performance of the RNN-based algorithm is further improved via sophisticated recurrent units and extra hidden layers; (3) RNN-based method detects rumors more quickly and accurately than existing techniques, including the leading online rumor debunking services.
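A minimal sketch of the kind of recurrent model described above (assumptions throughout: a GRU in PyTorch over per-time-interval post features; the class and parameter names are hypothetical and this is not the published implementation):

# Minimal sketch: a GRU over a sequence of per-time-interval post
# representations for an event, ending in a rumor/non-rumor score.
import torch
import torch.nn as nn

class RumorRNN(nn.Module):
    def __init__(self, feature_dim=5000, hidden_dim=100):
        super().__init__()
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)   # rumor vs. non-rumor

    def forward(self, event_sequence):
        # event_sequence: (batch, time_intervals, feature_dim), e.g. tf-idf of
        # the posts about each event grouped into consecutive time intervals
        _, h_last = self.rnn(event_sequence)
        return self.classifier(h_last.squeeze(0))

model = RumorRNN()
scores = model(torch.randn(4, 20, 5000))   # 4 events, 20 intervals each
print(scores.shape)                        # torch.Size([4, 2])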

Posted Content
TL;DR: It is shown that a surprisingly simple model, and associated design choices, lead to superior predictions, and together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods.
Abstract: Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.
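A rough sketch of the minimum reprojection idea in (i) above (an assumption-laden illustration, not the authors' code; the paper's full loss also combines SSIM with L1 and uses multi-scale sampling): take the per-pixel minimum of the photometric error over the warped source frames instead of averaging them.

# Minimal sketch: per-pixel minimum reprojection loss over warped source frames.
import torch

def photometric_error(pred, target):
    # simple per-pixel L1 error (the full method also uses SSIM)
    return (pred - target).abs().mean(dim=1, keepdim=True)

def min_reprojection_loss(target, warped_sources):
    # warped_sources: list of source images warped into the target view
    errors = torch.cat([photometric_error(w, target) for w in warped_sources], dim=1)
    per_pixel_min, _ = errors.min(dim=1)    # minimum over source frames
    return per_pixel_min.mean()

target = torch.rand(2, 3, 64, 64)
warps = [torch.rand(2, 3, 64, 64) for _ in range(2)]
print(min_reprojection_loss(target, warps))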

Journal ArticleDOI
TL;DR: The use of balanced crystalloids for intravenous fluid administration resulted in a lower rate of the composite outcome of death from any cause, new renal‐replacement therapy, or persistent renal dysfunction than the use of saline among critically ill adults.
Abstract: Background Both balanced crystalloids and saline are used for intravenous fluid administration in critically ill adults, but it is not known which results in better clinical outcomes. Methods In a pragmatic, cluster-randomized, multiple-crossover trial conducted in five intensive care units at an academic center, we assigned 15,802 adults to receive saline (0.9% sodium chloride) or balanced crystalloids (lactated Ringer’s solution or Plasma-Lyte A) according to the randomization of the unit to which they were admitted. The primary outcome was a major adverse kidney event within 30 days — a composite of death from any cause, new renal-replacement therapy, or persistent renal dysfunction (defined as an elevation of the creatinine level to ≥200% of baseline) — all censored at hospital discharge or 30 days, whichever occurred first. Results Among the 7942 patients in the balanced-crystalloids group, 1139 (14.3%) had a major adverse kidney event, as compared with 1211 of 7860 patients (15.4%) in the s...

Journal ArticleDOI
14 Dec 2018-Science
TL;DR: This work integrated genotypes and RNA sequencing in brain samples from 1695 individuals with autism spectrum disorder, schizophrenia, and bipolar disorder, as well as controls to identify causal drivers and define a mechanistic basis for the composite activity of genetic risk variants.
Abstract: Most genetic risk for psychiatric disease lies in regulatory regions, implicating pathogenic dysregulation of gene expression and splicing. However, comprehensive assessments of transcriptomic organization in diseased brains are limited. In this work, we integrated genotypes and RNA sequencing in brain samples from 1695 individuals with autism spectrum disorder (ASD), schizophrenia, and bipolar disorder, as well as controls. More than 25% of the transcriptome exhibits differential splicing or expression, with isoform-level changes capturing the largest disease effects and genetic enrichments. Coexpression networks isolate disease-specific neuronal alterations, as well as microglial, astrocyte, and interferon-response modules defining previously unidentified neural-immune mechanisms. We integrated genetic and genomic data to perform a transcriptome-wide association study, prioritizing disease loci likely mediated by cis effects on brain expression. This transcriptome-wide characterization of the molecular pathology across three major psychiatric disorders provides a comprehensive resource for mechanistic insight and therapeutic development.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this article, a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training is proposed.
Abstract: Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.
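A compact sketch of a Gram-matrix texture loss of the kind used in automated texture synthesis (an illustration under assumptions, not the paper's exact formulation; in practice the feature maps would come from a pretrained network such as VGG):

# Minimal sketch: Gram-matrix texture loss between generated and target features.
import torch

def gram_matrix(features):
    # features: (batch, channels, height, width)
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def texture_loss(generated_features, target_features):
    return (gram_matrix(generated_features) - gram_matrix(target_features)).pow(2).mean()

gen = torch.rand(1, 64, 32, 32)
ref = torch.rand(1, 64, 32, 32)
print(texture_loss(gen, ref))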

Journal ArticleDOI
TL;DR: The Planck 2015 likelihoods describe the 2-point correlations of CMB data, using the hybrid approach employed previously: pixel-based at low multipoles ($\ell<30$) and a Gaussian approximation to the distribution of spectra at higher $\ell$.
Abstract: This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlations of CMB data, using the hybrid approach employed previously: pixel-based at $\ell<30$ and a Gaussian approximation to the distribution of spectra at higher $\ell$. The main improvements are the use of more and better processed data and of Planck polarization data, and more detailed foreground and instrumental models, allowing further checks and enhanced immunity to systematics. Progress in foreground modelling enables a larger sky fraction. Improvements in processing and instrumental models further reduce uncertainties. For temperature, we perform an analysis of end-to-end instrumental simulations fed into the data processing pipeline; this does not reveal biases from residual instrumental systematics. The $\Lambda$CDM cosmological model continues to offer a very good fit to Planck data. The slope of primordial scalar fluctuations, $n_s$, is confirmed smaller than unity at more than 5{\sigma} from Planck alone. We further validate robustness against specific extensions to the baseline cosmology. E.g., the effective number of neutrino species remains compatible with the canonical value of 3.046. This first detailed analysis of Planck polarization concentrates on E modes. At low $\ell$ we use temperature at all frequencies and a subset of polarization. The frequency range improves CMB-foreground separation. Within the baseline model this requires a reionization optical depth $\tau=0.078\pm0.019$, significantly lower than without high-frequency data for explicit dust monitoring. At high $\ell$ we detect residual errors in E, typically O($\mu$K$^2$); we recommend temperature alone as the high-$\ell$ baseline. Nevertheless, Planck high-$\ell$ polarization allows a separate determination of $\Lambda$CDM parameters consistent with those from temperature alone.

PatentDOI
TL;DR: In this article, a pneumatically powered, fully untethered mobile soft robot is described, and composites consisting of silicone elastomer, polyaramid fabric, and hollow glass microspheres are used to fabricate a sufficiently large soft robot to carry the miniature air compressors, battery, valves and controller needed for autonomous operation.
Abstract: A pneumatically powered, fully untethered mobile soft robot is described. Composites consisting of silicone elastomer, polyaramid fabric, and hollow glass microspheres were used to fabricate a sufficiently large soft robot to carry the miniature air compressors, battery, valves, and controller needed for autonomous operation. Fabrication techniques were developed to mold a 0.65 meter long soft body with modified Pneumatic network actuators capable of operating at the elevated pressures (up to 138 kPa) required to actuate the legs of the robot and hold payloads of up to 8 kg. The soft robot is safe to handle, and its silicone body is innately resilient to a variety of adverse environmental conditions including snow, puddles of water, direct (albeit limited) exposure to flames, and the crushing force of being run over by an automobile.

Journal ArticleDOI
TL;DR: Evidence-based practical recommendations are provided for rational quantification of rate of force development in both laboratory and clinical settings and various methodological considerations inherent to its evaluation are discussed.
Abstract: The evaluation of rate of force development during rapid contractions has recently become quite popular for characterising explosive strength of athletes, elderly individuals and patients. The main aims of this narrative review are to describe the neuromuscular determinants of rate of force development and to discuss various methodological considerations inherent to its evaluation for research and clinical purposes. Rate of force development (1) seems to be mainly determined by the capacity to produce maximal voluntary activation in the early phase of an explosive contraction (first 50–75 ms), particularly as a result of increased motor unit discharge rate; (2) can be improved by both explosive-type and heavy-resistance strength training in different subject populations, mainly through an improvement in rapid muscle activation; (3) is quite difficult to evaluate in a valid and reliable way. Therefore, we provide evidence-based practical recommendations for rational quantification of rate of force development in both laboratory and clinical settings.
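A small sketch of the basic quantity being discussed (illustrative only, not the review's recommended protocol): the average rate of force development over an early time window of a sampled force-time curve.

# Illustrative sketch: average rate of force development (N/s) over an early
# window, e.g. the first 50 ms of an explosive contraction.
import bisect

def rfd(force_newton, time_s, window=(0.0, 0.05)):
    i0 = bisect.bisect_left(time_s, window[0])
    i1 = bisect.bisect_left(time_s, window[1])
    return (force_newton[i1] - force_newton[i0]) / (time_s[i1] - time_s[i0])

# hypothetical force-time curve sampled at 1 kHz, rising 4 N per millisecond
t = [k / 1000.0 for k in range(101)]
f = [4.0 * k for k in range(101)]
print(rfd(f, t))   # 4000.0 N/s over the 0-50 ms window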

Journal ArticleDOI
TL;DR: Optimal searches in systematic reviews should search at least Embase, MEDLINE, Web of Science, and Google Scholar as a minimum requirement to guarantee adequate and efficient coverage.
Abstract: Within systematic reviews, when searching for relevant references, it is advisable to use multiple databases. However, searching databases is laborious and time-consuming, as the syntax of search strategies is database-specific. We aimed to determine the optimal combination of databases needed to conduct efficient searches in systematic reviews and whether the current practice in published reviews is appropriate. While previous studies determined the coverage of databases, we analyzed the actual retrieval from the original searches for systematic reviews. Since May 2013, the first author prospectively recorded results from systematic review searches that he performed at his institution. PubMed was used to identify systematic reviews published using our search strategy results. For each published systematic review, we extracted the references of the included studies. Using the prospectively recorded results and the studies included in the publications, we calculated recall, precision, and number needed to read for single databases and databases in combination. We assessed the frequency at which databases and combinations would achieve varying levels of recall (i.e., 95%). For a sample of 200 recently published systematic reviews, we calculated how many had used enough databases to ensure 95% recall. A total of 58 published systematic reviews were included, totaling 1746 relevant references identified by our database searches, while 84 included references had been retrieved by other search methods. Sixteen percent of the included references (291 articles) were only found in a single database; Embase produced the most unique references (n = 132). The combination of Embase, MEDLINE, Web of Science Core Collection, and Google Scholar performed best, achieving an overall recall of 98.3% and 100% recall in 72% of systematic reviews. We estimate that 60% of published systematic reviews do not retrieve 95% of all available relevant references as many fail to search important databases. Other specialized databases, such as CINAHL or PsycINFO, add unique references to some reviews where the topic of the review is related to the focus of the database. Optimal searches in systematic reviews should search at least Embase, MEDLINE, Web of Science, and Google Scholar as a minimum requirement to guarantee adequate and efficient coverage.
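A minimal sketch of the retrieval metrics used above (hypothetical record sets, not the study's data): recall, precision, and number needed to read for a given database combination.

# Illustrative sketch: recall, precision and number-needed-to-read given the
# set of records a database combination retrieved and the set of relevant
# references included in the review.
def search_performance(retrieved, relevant):
    found = retrieved & relevant
    recall = len(found) / len(relevant)
    precision = len(found) / len(retrieved)
    number_needed_to_read = 1.0 / precision if precision else float("inf")
    return recall, precision, number_needed_to_read

retrieved = set(range(1, 1201))          # hypothetical records returned
relevant = set(range(1150, 1180))        # hypothetical included references
print(search_performance(retrieved, relevant))   # (1.0, 0.025, 40.0)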

Journal ArticleDOI
TL;DR: Convalescent plasma may reduce mortality and appears safe, and should be studied within the context of a well-designed clinical trial or other formal evaluation, including for treatment of Middle East respiratory syndrome coronavirus CoV infection.
Abstract: Background: Administration of convalescent plasma, serum, or hyperimmune immunoglobulin may be of clinical benefit for treatment of severe acute respiratory infections (SARIs) of viral etiology. We conducted a systematic review and exploratory meta-analysis to assess the overall evidence. Methods: Healthcare databases and sources of grey literature were searched in July 2013. All records were screened against the protocol eligibility criteria, using a 3-stage process. Data extraction and risk of bias assessments were undertaken. Results: We identified 32 studies of SARS coronavirus infection and severe influenza. Narrative analyses revealed consistent evidence for a reduction in mortality, especially when convalescent plasma is administered early after symptom onset. Exploratory post hoc meta-analysis showed a statistically significant reduction in the pooled odds of mortality following treatment, compared with placebo or no therapy (odds ratio, 0.25; 95% confidence interval, .14–.45; I(2) = 0%). Studies were commonly of low or very low quality, lacked control groups, and at moderate or high risk of bias. Sources of clinical and methodological heterogeneity were identified. Conclusions: Convalescent plasma may reduce mortality and appears safe. This therapy should be studied within the context of a well-designed clinical trial or other formal evaluation, including for treatment of Middle East respiratory syndrome coronavirus CoV infection.

Proceedings Article
26 Sep 2016
TL;DR: The pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank while using far fewer parameters than a standard softmax LSTM and the freely available WikiText corpus is introduced.
Abstract: Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. Our pointer sentinel-LSTM model achieves state of the art language modeling performance on the Penn Treebank (70.9 perplexity) while using far fewer parameters than a standard softmax LSTM. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and larger corpora we also introduce the freely available WikiText corpus.
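A toy sketch of the pointer sentinel mixture described above (assumptions throughout: tensor shapes and names are hypothetical, and this simplified PyTorch illustration is not the published model): the sentinel score is normalized together with the pointer attention scores, and the resulting gate decides how much probability mass goes to the standard vocabulary softmax versus to words copied from the recent context.

# Minimal sketch: mix a vocabulary softmax with a pointer distribution over
# context words, gated by a sentinel that shares the pointer normalization.
import torch
import torch.nn.functional as F

def pointer_sentinel_mixture(vocab_logits, pointer_scores, sentinel_score, context_ids):
    joint = F.softmax(torch.cat([pointer_scores, sentinel_score], dim=1), dim=1)
    ptr_probs, gate = joint[:, :-1], joint[:, -1:]
    # gate is the mass given to the standard softmax over the vocabulary
    p = gate * F.softmax(vocab_logits, dim=1)
    # remaining mass is added onto the vocabulary ids of the context words
    return p.scatter_add(1, context_ids, ptr_probs)

vocab_logits = torch.randn(1, 10)               # toy vocabulary of 10 words
pointer_scores = torch.randn(1, 4)              # attention over 4 context words
sentinel_score = torch.randn(1, 1)
context_ids = torch.tensor([[3, 7, 7, 2]])
probs = pointer_sentinel_mixture(vocab_logits, pointer_scores, sentinel_score, context_ids)
print(probs.sum())                              # ~1.0: a valid distribution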

Journal ArticleDOI
TL;DR: In this article, the authors used a shift-invert exact diagonalization approach to identify the many-body localization edge in a random-field Heisenberg chain.
Abstract: The authors study the phenomenon of many-body localization in a random-field Heisenberg chain. In this paper the authors use a shift-invert exact diagonalization approach that allows them to study the mid-spectrum spectral properties of the model for system sizes of up to N=22. This has allowed the authors to identify the many-body localization edge.
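A small sketch of shift-invert diagonalization in the sense used above (illustrative SciPy usage on a toy sparse matrix, not the authors' code or Hamiltonian): in shift-invert mode, the eigenpairs closest to a chosen target value sigma in the middle of the spectrum converge fastest.

# Minimal sketch: a few eigenpairs nearest a target energy sigma, obtained in
# shift-invert mode on a toy symmetric sparse matrix.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

A = sparse_random(200, 200, density=0.05, random_state=0)
H = (A + A.T) * 0.5                       # toy symmetric sparse "Hamiltonian"

# with sigma set, eigsh targets eigenvalues closest to sigma (mid-spectrum)
vals, vecs = eigsh(H.tocsc(), k=5, sigma=0.0, which="LM")
print(np.sort(vals))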

Journal ArticleDOI
13 May 2016-Science
TL;DR: It is reported that indium tin oxide can acquire an ultrafast and large intensity-dependent refractive index in the region of the spectrum where the real part of its permittivity vanishes, and offers the possibility of designing material structures with large ultrafast nonlinearity for applications in nanophotonics.
Abstract: Nonlinear optical phenomena are crucial for a broad range of applications, such as microscopy, all-optical data processing, and quantum information. However, materials usually exhibit a weak optical nonlinearity even under intense coherent illumination. We report that indium tin oxide can acquire an ultrafast and large intensity-dependent refractive index in the region of the spectrum where the real part of its permittivity vanishes. We observe a change in the real part of the refractive index of 0.72 ± 0.025, corresponding to 170% of the linear refractive index. This change in refractive index is reversible with a recovery time of about 360 femtoseconds. Our results offer the possibility of designing material structures with large ultrafast nonlinearity for applications in nanophotonics.
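A quick consistency check on the figures quoted above (simple arithmetic implied by the abstract, not an additional result): if the index change of 0.72 corresponds to 170% of the linear refractive index, the linear index near the zero-permittivity wavelength is roughly

n_0 \approx \frac{\Delta n}{1.70} = \frac{0.72}{1.70} \approx 0.42 .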

Journal ArticleDOI
TL;DR: In this paper, a gated multilayer black phosphorus photodetector integrated on a silicon photonic waveguide operating in the telecom band is demonstrated with intrinsic responsivity up to 135 mA W−1 and 657 mA W−1 in 11.5-nm- and 100-nm-thick devices, respectively.
Abstract: A gated multilayer black phosphorus photodetector integrated on a silicon photonic waveguide operating in the telecom band is demonstrated with intrinsic responsivity up to 135 mA W−1 and 657 mA W−1 in 11.5-nm- and 100-nm-thick devices, respectively.

Journal ArticleDOI
TL;DR: The use of CTA in addition to standard care in patients with stable chest pain resulted in a significantly lower rate of death from coronary heart disease or nonfatal myocardial infarction at 5 years than standard care alone, without resulting in a significantly higher rate of coronary angiography or coronary revascularization.
Abstract: Background: Although coronary computed tomographic angiography (CTA) improves diagnostic certainty in the assessment of patients with stable chest pain, its effect on 5-year clinical outcomes is unknown. Methods: In an open-label, multicenter, parallel-group trial, we randomly assigned 4146 patients with stable chest pain who had been referred to a cardiology clinic for evaluation to standard care plus CTA (2073 patients) or to standard care alone (2073 patients). Investigations, treatments, and clinical outcomes were assessed over 3 to 7 years of follow-up. The primary end point was death from coronary heart disease or nonfatal myocardial infarction at 5 years. Results: The median duration of follow-up was 4.8 years, which yielded 20,254 patient-years of follow-up. The 5-year rate of the primary end point was lower in the CTA group than in the standard-care group (2.3% [48 patients] vs. 3.9% [81 patients]; hazard ratio, 0.59; 95% confidence interval [CI], 0.41 to 0.84; P=0.004). Although the rates of invasive coronary angiography and coronary revascularization were higher in the CTA group than in the standard-care group in the first few months of follow-up, overall rates were similar at 5 years: invasive coronary angiography was performed in 491 patients in the CTA group and in 502 patients in the standard-care group (hazard ratio, 1.00; 95% CI, 0.88 to 1.13), and coronary revascularization was performed in 279 patients in the CTA group and in 267 in the standard-care group (hazard ratio, 1.07; 95% CI, 0.91 to 1.27). However, more preventive therapies were initiated in patients in the CTA group (odds ratio, 1.40; 95% CI, 1.19 to 1.65), as were more antianginal therapies (odds ratio, 1.27; 95% CI, 1.05 to 1.54). There were no significant between-group differences in the rates of cardiovascular or noncardiovascular deaths or deaths from any cause. Conclusions: In this trial, the use of CTA in addition to standard care in patients with stable chest pain resulted in a significantly lower rate of death from coronary heart disease or nonfatal myocardial infarction at 5 years than standard care alone, without resulting in a significantly higher rate of coronary angiography or coronary revascularization. (Funded by the Scottish Government Chief Scientist Office and others; SCOT-HEART ClinicalTrials.gov number, NCT01149590).