
Journal ArticleDOI
TL;DR: In this article, a single photon with near-unity indistinguishability is generated from quantum dots in electrically controlled cavity structures; the cavity allows for efficient photon collection while application of an electrical bias cancels charge noise effects.
Abstract: A single photon with near-unity indistinguishability is generated from quantum dots in electrically controlled cavity structures. The cavity allows for efficient photon collection while application of an electrical bias cancels charge noise effects.

1,049 citations



Journal ArticleDOI
TL;DR: In this article, the authors show that planar perovskite solar cells using TiO2 are inherently limited due to conduction band misalignment and demonstrate, with a variety of characterization techniques, for the first time that SnO2 achieves a barrier-free energetic configuration, obtaining almost hysteresis-free power conversion efficiencies (PCEs).
Abstract: The simplification of perovskite solar cells (PSCs), by replacing the mesoporous electron selective layer (ESL) with a planar one, is advantageous for large-scale manufacturing. PSCs with a planar TiO2 ESL have been demonstrated, but these exhibit unstabilized power conversion efficiencies (PCEs). Herein we show that planar PSCs using TiO2 are inherently limited due to conduction band misalignment and demonstrate, with a variety of characterization techniques, for the first time that SnO2 achieves a barrier-free energetic configuration, obtaining almost hysteresis-free PCEs of over 18% with record high voltages of up to 1.19 V.

1,049 citations


Journal ArticleDOI
20 Jun 2017-Trials
TL;DR: A four-step process is recommended for developing a core outcome set: an agreed standardised collection of outcomes which should be measured and reported, as a minimum, in all trials for a specific clinical area.
Abstract: The selection of appropriate outcomes is crucial when designing clinical trials in order to compare the effects of different interventions directly. For the findings to influence policy and practice, the outcomes need to be relevant and important to key stakeholders including patients and the public, health care professionals and others making decisions about health care. It is now widely acknowledged that insufficient attention has been paid to the choice of outcomes measured in clinical trials. Researchers are increasingly addressing this issue through the development and use of a core outcome set, an agreed standardised collection of outcomes which should be measured and reported, as a minimum, in all trials for a specific clinical area. Accumulating work in this area has identified the need for guidance on the development, implementation, evaluation and updating of core outcome sets. This Handbook, developed by the COMET Initiative, brings together current thinking and methodological research regarding those issues. We recommend a four-step process to develop a core outcome set. The aim is to update the contents of the Handbook as further research is identified.

1,048 citations


Posted Content
TL;DR: This paper proposes an Efficient Channel Attention (ECA) module, which only involves a handful of parameters while bringing clear performance gain, and develops a method to adaptively select the kernel size of a 1D convolution, determining the coverage of local cross-channel interaction.
Abstract: Recently, the channel attention mechanism has been demonstrated to offer great potential in improving the performance of deep convolutional neural networks (CNNs). However, most existing methods are dedicated to developing more sophisticated attention modules for achieving better performance, which inevitably increases model complexity. To overcome the paradox of the performance and complexity trade-off, this paper proposes an Efficient Channel Attention (ECA) module, which only involves a handful of parameters while bringing clear performance gain. By dissecting the channel attention module in SENet, we empirically show that avoiding dimensionality reduction is important for learning channel attention, and appropriate cross-channel interaction can preserve performance while significantly decreasing model complexity. Therefore, we propose a local cross-channel interaction strategy without dimensionality reduction, which can be efficiently implemented via 1D convolution. Furthermore, we develop a method to adaptively select the kernel size of the 1D convolution, determining the coverage of local cross-channel interaction. The proposed ECA module is efficient yet effective; e.g., against the ResNet50 backbone, the parameters and computations of our module are 80 vs. 24.37M and 4.7e-4 GFLOPs vs. 3.86 GFLOPs, respectively, and the performance boost is more than 2% in terms of Top-1 accuracy. We extensively evaluate our ECA module on image classification, object detection and instance segmentation with backbones of ResNets and MobileNetV2. The experimental results show our module is more efficient while performing favorably against its counterparts.
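The module itself is small enough to sketch. Below is a minimal PyTorch rendering consistent with the abstract, using the paper's kernel-size heuristic with γ=2 and b=1; treat it as a sketch rather than the reference implementation.

```python
import math

import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient Channel Attention sketch: channel descriptors from global
    average pooling are mixed by one 1D convolution across the channel axis
    (no dimensionality reduction); the kernel size k is adapted to C."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # adaptive kernel size: k grows roughly with log2(channels)
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1  # force odd so padding preserves length
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                        # (N, C, 1, 1)
        y = y.squeeze(-1).transpose(-1, -2)     # (N, 1, C)
        y = self.conv(y)                        # local cross-channel interaction
        y = y.transpose(-1, -2).unsqueeze(-1)   # (N, C, 1, 1)
        return x * torch.sigmoid(y)             # rescale channels
```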

1,048 citations


Posted Content
TL;DR: In this paper, a new neural network module called EdgeConv is proposed for CNN-based high-level tasks on point clouds including classification and segmentation, which is differentiable and can be plugged into existing architectures.
Abstract: Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insights from CNNs to the point cloud world. Point clouds inherently lack topological information, so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks including ModelNet40, ShapeNetPart, and S3DIS.
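As a concrete illustration of the module described above, here is a minimal PyTorch sketch of one EdgeConv layer (our simplification, not the authors' released code): a k-NN graph is recomputed from the current features in each layer, edge features [x_i, x_j − x_i] pass through a shared MLP, and a max over neighbours aggregates them.

```python
import torch
import torch.nn as nn


def knn_indices(x: torch.Tensor, k: int) -> torch.Tensor:
    """x: (B, N, C) point features -> (B, N, k) nearest-neighbour indices."""
    dist = torch.cdist(x, x)                                  # (B, N, N)
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop self


class EdgeConv(nn.Module):
    """One EdgeConv layer: shared MLP over edge features, max over neighbours."""

    def __init__(self, c_in: int, c_out: int, k: int = 20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * c_in, c_out), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, C)
        B, N, C = x.shape
        idx = knn_indices(x, self.k)          # graph rebuilt from current features
        nbrs = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))        # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([center, nbrs - center], dim=-1)     # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values               # (B, N, c_out)
```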

1,048 citations


Journal ArticleDOI
TL;DR: Zika virus (ZIKV) was detected by RT-PCR in sera from eight patients and confirmed by DNA sequencing; phylogenetic analysis suggests that the ZIKV identified belongs to the Asian clade, and this is the first report of ZIKV infection in Brazil.
Abstract: In early 2015, several cases of patients presenting symptoms of mild fever, rash, conjunctivitis and arthralgia were reported in northeastern Brazil. Although all patients lived in a dengue-endemic area, molecular and serological diagnosis for dengue was negative. Chikungunya virus infection was also ruled out. Subsequently, Zika virus (ZIKV) was detected by reverse transcription-polymerase chain reaction in the sera of eight patients, and the result was confirmed by DNA sequencing. Phylogenetic analysis suggests that the ZIKV identified belongs to the Asian clade. This is the first report of ZIKV infection in Brazil.

1,047 citations


Journal ArticleDOI
Marielle Saunois1, Ann R. Stavert2, Ben Poulter3, Philippe Bousquet1, Josep G. Canadell2, Robert B. Jackson4, Peter A. Raymond5, Edward J. Dlugokencky6, Sander Houweling7, Sander Houweling8, Prabir K. Patra9, Prabir K. Patra10, Philippe Ciais1, Vivek K. Arora, David Bastviken11, Peter Bergamaschi, Donald R. Blake12, Gordon Brailsford13, Lori Bruhwiler6, Kimberly M. Carlson14, Mark Carrol3, Simona Castaldi15, Naveen Chandra10, Cyril Crevoisier16, Patrick M. Crill17, Kristofer R. Covey18, Charles L. Curry19, Giuseppe Etiope20, Giuseppe Etiope21, Christian Frankenberg22, Nicola Gedney23, Michaela I. Hegglin24, Lena Höglund-Isaksson25, Gustaf Hugelius17, Misa Ishizawa26, Akihiko Ito26, Greet Janssens-Maenhout, Katherine M. Jensen27, Fortunat Joos28, Thomas Kleinen29, Paul B. Krummel2, Ray L. Langenfelds2, Goulven Gildas Laruelle, Licheng Liu30, Toshinobu Machida26, Shamil Maksyutov26, Kyle C. McDonald27, Joe McNorton31, Paul A. Miller32, Joe R. Melton, Isamu Morino26, Jurek Müller28, Fabiola Murguia-Flores33, Vaishali Naik34, Yosuke Niwa26, Sergio Noce, Simon O'Doherty33, Robert J. Parker35, Changhui Peng36, Shushi Peng37, Glen P. Peters, Catherine Prigent, Ronald G. Prinn38, Michel Ramonet1, Pierre Regnier, William J. Riley39, Judith A. Rosentreter40, Arjo Segers, Isobel J. Simpson12, Hao Shi41, Steven J. Smith42, L. Paul Steele2, Brett F. Thornton17, Hanqin Tian41, Yasunori Tohjima26, Francesco N. Tubiello43, Aki Tsuruta44, Nicolas Viovy1, Apostolos Voulgarakis45, Apostolos Voulgarakis46, Thomas Weber47, Michiel van Weele48, Guido R. van der Werf7, Ray F. Weiss49, Doug Worthy, Debra Wunch50, Yi Yin22, Yi Yin1, Yukio Yoshida26, Weiya Zhang32, Zhen Zhang51, Yuanhong Zhao1, Bo Zheng1, Qing Zhu39, Qiuan Zhu52, Qianlai Zhuang30 
Université Paris-Saclay1, Commonwealth Scientific and Industrial Research Organisation2, Goddard Space Flight Center3, Stanford University4, Yale University5, National Oceanic and Atmospheric Administration6, VU University Amsterdam7, Netherlands Institute for Space Research8, Chiba University9, Japan Agency for Marine-Earth Science and Technology10, Linköping University11, University of California, Irvine12, National Institute of Water and Atmospheric Research13, New York University14, Seconda Università degli Studi di Napoli15, École Polytechnique16, Stockholm University17, Skidmore College18, University of Victoria19, National Institute of Geophysics and Volcanology20, Babeș-Bolyai University21, California Institute of Technology22, Met Office23, University of Reading24, International Institute for Applied Systems Analysis25, National Institute for Environmental Studies26, City University of New York27, University of Bern28, Max Planck Society29, Purdue University30, European Centre for Medium-Range Weather Forecasts31, Lund University32, University of Bristol33, Geophysical Fluid Dynamics Laboratory34, University of Leicester35, Université du Québec à Montréal36, Peking University37, Massachusetts Institute of Technology38, Lawrence Berkeley National Laboratory39, Southern Cross University40, Auburn University41, Joint Global Change Research Institute42, Food and Agriculture Organization43, Finnish Meteorological Institute44, Technical University of Crete45, Imperial College London46, University of Rochester47, Royal Netherlands Meteorological Institute48, Scripps Institution of Oceanography49, University of Toronto50, University of Maryland, College Park51, Hohai University52
TL;DR: The second version of the living review paper dedicated to the decadal methane budget, integrating results of top-down studies (atmospheric observations within an atmospheric inverse-modeling framework) and bottom-up estimates (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations) as discussed by the authors.
Abstract: Understanding and quantifying the global methane (CH4) budget is important for assessing realistic pathways to mitigate climate change. Atmospheric emissions and concentrations of CH4 continue to increase, making CH4 the second most important human-influenced greenhouse gas in terms of climate forcing, after carbon dioxide (CO2). The relative importance of CH4 compared to CO2 depends on its shorter atmospheric lifetime, stronger warming potential, and variations in atmospheric growth rate over the past decade, the causes of which are still debated. Two major challenges in reducing uncertainties in the atmospheric growth rate arise from the variety of geographically overlapping CH4 sources and from the destruction of CH4 by short-lived hydroxyl radicals (OH). To address these challenges, we have established a consortium of multidisciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate new research aimed at improving and regularly updating the global methane budget. Following Saunois et al. (2016), we present here the second version of the living review paper dedicated to the decadal methane budget, integrating results of top-down studies (atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up estimates (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations). For the 2008–2017 decade, global methane emissions are estimated by atmospheric inversions (a top-down approach) to be 576 Tg CH4 yr−1 (range 550–594, corresponding to the minimum and maximum estimates of the model ensemble). Of this total, 359 Tg CH4 yr−1 or ∼ 60 % is attributed to anthropogenic sources, that is, emissions caused by direct human activity (i.e. anthropogenic emissions; range 336–376 Tg CH4 yr−1 or 50 %–65 %). The mean annual total emission for the new decade (2008–2017) is 29 Tg CH4 yr−1 larger than our estimate for the previous decade (2000–2009), and 24 Tg CH4 yr−1 larger than the one reported in the previous budget for 2003–2012 (Saunois et al., 2016). Since 2012, global CH4 emissions have been tracking the warmest scenarios assessed by the Intergovernmental Panel on Climate Change. Bottom-up methods suggest almost 30 % larger global emissions (737 Tg CH4 yr−1, range 594–881) than top-down inversion methods. Indeed, bottom-up estimates for natural sources such as natural wetlands, other inland water systems, and geological sources are higher than top-down estimates. The atmospheric constraints on the top-down budget suggest that at least some of these bottom-up emissions are overestimated. The latitudinal distribution of atmospheric observation-based emissions indicates a predominance of tropical emissions (∼ 65 % of the global budget, < 30° N) compared to mid-latitudes (∼ 30 %, 30–60° N) and high northern latitudes (∼ 4 %, 60–90° N). The most important source of uncertainty in the methane budget is attributable to natural emissions, especially those from wetlands and other inland waters. Some of our global source estimates are smaller than those in previously published budgets (Saunois et al., 2016; Kirschke et al., 2013). In particular, wetland emissions are about 35 Tg CH4 yr−1 lower due to an improved partitioning between wetlands and other inland waters. Emissions from geological sources and wild animals are also found to be smaller by 7 Tg CH4 yr−1 and 8 Tg CH4 yr−1, respectively.
However, the overall discrepancy between bottom-up and top-down estimates has been reduced by only 5 % compared to Saunois et al. (2016), due to a higher estimate of emissions from inland waters, highlighting the need for more detailed research on emissions factors. Priorities for improving the methane budget include (i) a global, high-resolution map of water-saturated soils and inundated areas emitting methane based on a robust classification of different types of emitting habitats; (ii) further development of process-based models for inland-water emissions; (iii) intensification of methane observations at local scales (e.g., FLUXNET-CH4 measurements) and urban-scale monitoring to constrain bottom-up land surface models, and at regional scales (surface networks and satellites) to constrain atmospheric inversions; (iv) improvements of transport models and the representation of photochemical sinks in top-down inversions; and (v) development of a 3D variational inversion system using isotopic and/or co-emitted species such as ethane to improve source partitioning.

1,047 citations


Proceedings ArticleDOI
Chao Peng1, Xiangyu Zhang, Gang Yu, Guiming Luo1, Jian Sun 
21 Jul 2017
TL;DR: This work proposes a Global Convolutional Network to address both the classification and localization issues for semantic segmentation and suggests a residual-based boundary refinement to further refine the object boundaries.
Abstract: One recent trend [31, 32, 14] in network architecture design is stacking small filters (e.g., 1x1 or 3x3) throughout the entire network, because stacked small filters are more efficient than a large kernel, given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and effective receptive field) plays an important role when we have to perform the classification and localization tasks simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for semantic segmentation. We also suggest a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-the-art performance on two public benchmarks and significantly outperforms previous results, 82.2% (vs 80.2%) on the PASCAL VOC 2012 dataset and 76.9% (vs 71.8%) on the Cityscapes dataset.
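The core design idea, a large effective kernel at modest cost, is realized in the paper by symmetric pairs of 1×k and k×1 convolutions. The block below is our minimal PyTorch rendering of that decomposition; names and defaults are our own.

```python
import torch
import torch.nn as nn


class GlobalConvBlock(nn.Module):
    """Sketch of the large-kernel idea: a k x k receptive field built from two
    separable branches (k x 1 then 1 x k, and the reverse), far cheaper than a
    dense k x k convolution at the same receptive field."""

    def __init__(self, c_in: int, c_out: int, k: int = 15):
        super().__init__()
        p = k // 2
        self.branch_a = nn.Sequential(
            nn.Conv2d(c_in, c_out, (k, 1), padding=(p, 0)),
            nn.Conv2d(c_out, c_out, (1, k), padding=(0, p)))
        self.branch_b = nn.Sequential(
            nn.Conv2d(c_in, c_out, (1, k), padding=(0, p)),
            nn.Conv2d(c_out, c_out, (k, 1), padding=(p, 0)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # summing the two branches approximates one dense k x k convolution
        return self.branch_a(x) + self.branch_b(x)
```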

1,047 citations


Journal ArticleDOI
25 Mar 2016-Science
TL;DR: This work set out to define a minimal cellular genome experimentally by designing and building one, then testing it for viability, and applied whole-genome design and synthesis to the problem of minimizing a cellular genome.
Abstract: We used whole-genome design and complete chemical synthesis to minimize the 1079-kilobase pair synthetic genome of Mycoplasma mycoides JCVI-syn1.0. An initial design, based on collective knowledge of molecular biology combined with limited transposon mutagenesis data, failed to produce a viable cell. Improved transposon mutagenesis methods revealed a class of quasi-essential genes that are needed for robust growth, explaining the failure of our initial design. Three cycles of design, synthesis, and testing, with retention of quasi-essential genes, produced JCVI-syn3.0 (531 kilobase pairs, 473 genes), which has a genome smaller than that of any autonomously replicating cell found in nature. JCVI-syn3.0 retains almost all genes involved in the synthesis and processing of macromolecules. Unexpectedly, it also contains 149 genes with unknown biological functions. JCVI-syn3.0 is a versatile platform for investigating the core functions of life and for exploring whole-genome design.

1,047 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work develops Pretext-Invariant Representation Learning (PIRL), which learns invariant representations based on pretext tasks, substantially improves the semantic quality of the learned image representations, and sets a new state of the art in self-supervised learning from images.
Abstract: The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations. Many pretext tasks lead to representations that are covariant with image transformations. We argue that, instead, semantic representations ought to be invariant under such transformations. Specifically, we develop Pretext-Invariant Representation Learning (PIRL, pronounced as 'pearl') that learns invariant representations based on pretext tasks. We use PIRL with a commonly used pretext task that involves solving jigsaw puzzles. We find that PIRL substantially improves the semantic quality of the learned image representations. Our approach sets a new state-of-the-art in self-supervised learning from images on several popular benchmarks for self-supervised learning. Despite being unsupervised, PIRL outperforms supervised pre-training in learning image representations for object detection. Altogether, our results demonstrate the potential of self-supervised representations with good invariance properties.
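The invariance objective lends itself to a compact illustration. Below is a hedged sketch (ours, not the authors' released code) of a noise-contrastive loss that treats an image and its jigsaw-transformed view as a positive pair against memory-bank negatives; PIRL's memory-bank updates and separate projection heads are omitted.

```python
import torch
import torch.nn.functional as F


def invariance_nce_loss(f_img, f_view, negatives, tau: float = 0.07):
    """f_img, f_view: (B, D) features of an image and its transformed view;
    negatives: (K, D) cached features of other images (memory-bank stand-in).
    The positive logit is the image/view similarity, so the learned features
    are pushed to be invariant to the pretext transformation."""
    f_img = F.normalize(f_img, dim=1)
    f_view = F.normalize(f_view, dim=1)
    negatives = F.normalize(negatives, dim=1)
    pos = (f_img * f_view).sum(dim=1, keepdim=True) / tau   # (B, 1)
    neg = f_view @ negatives.t() / tau                      # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    target = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)                  # positive is class 0
```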

Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper proposes a simple convolutional net architecture that can be used even when the amount of learning data is limited, and shows that by learning representations through the use of deep convolutional neural networks, a significant increase in performance can be obtained on age and gender classification tasks.
Abstract: Automatic age and gender classification has become relevant to an increasing amount of applications, particularly since the rise of social platforms and social media. Nevertheless, performance of existing methods on real-world images is still significantly lacking, especially when compared to the tremendous leaps in performance recently reported for the related task of face recognition. In this paper we show that by learning representations through the use of deep-convolutional neural networks (CNN), a significant increase in performance can be obtained on these tasks. To this end, we propose a simple convolutional net architecture that can be used even when the amount of learning data is limited. We evaluate our method on the recent Adience benchmark for age and gender estimation and show it to dramatically outperform current state-of-the-art methods.

Journal ArticleDOI
01 Mar 2015-Gut
TL;DR: These first global estimates of oesophageal cancer incidence by histology suggested a high concentration of AC in high-income countries with men being at much greater risk.
Abstract: Objective: The two major histological types of oesophageal cancer—adenocarcinoma (AC) and squamous cell carcinoma (SCC)—are known to differ greatly in terms of risk factors and epidemiology. To date, global incidence estimates for individual subtypes are still lacking. This study for the first time quantified the global burden of oesophageal cancer by histological subtype. Design: Where available, data from Cancer Incidence in Five Continents Vol. X (CI5X) were used to compute age-specific, sex-specific and country-specific proportions of AC and SCC. Nine regional averages were computed for countries without CI5X data. The proportions were then applied to all oesophageal cancer cases from GLOBOCAN 2012 and age-standardised incidence rates calculated for both histological types. Results: Worldwide, an estimated 398 000 SCCs and 52 000 ACs of the oesophagus occurred in 2012, translating to incidence rates of 5.2 and 0.7 per 100 000, respectively. Although SCCs were most common in South-Eastern and Central Asia (79% of the total global SCC cases), the highest burden of AC was found in Northern and Western Europe, Northern America and Oceania (46% of the total global AC cases). Men had substantially higher incidence than women, especially in the case of AC (male to female ratio AC: 4.4; SCC: 2.7). Conclusions: These first global estimates of oesophageal cancer incidence by histology suggested a high concentration of AC in high-income countries with men being at much greater risk. This quantification of incidence will aid health policy makers to plan appropriate cancer control measures in the future.

Journal ArticleDOI
Ron Adner1
TL;DR: In the past 20 years, the term "ecosystem" has become pervasive in discussions of strategy, both scholarly and applied as mentioned in this paper, and its rise has mirrored an increasing interest and concern among both researc...

Journal ArticleDOI
TL;DR: Five patients who had Guillain–Barré syndrome 5 to 10 days after the onset of Covid-19 are described; three had severe weakness and an axonal pattern on electr...
Abstract: Guillain–Barré Syndrome with Covid-19: Five patients who had Guillain–Barré syndrome 5 to 10 days after the onset of Covid-19 are described. Three had severe weakness and an axonal pattern on electr...

Journal ArticleDOI
TL;DR: The AJCC's 8th edition of the Staging Manual, Head and Neck Section, introduced significant modifications from the prior 7th edition, as discussed by the authors, including the reorganization of skin cancer (other than melanoma and Merkel cell carcinoma) from a general chapter for the entire body to a head and neck-specific cutaneous malignancies chapter; division of cancer of the pharynx into 3 separate chapters; changes to the tumor (T) categories for oral cavity, skin, and nasopharynx; and the addition of extranodal cancer extension to the lymph node category (N).
Abstract: The recently released eighth edition of the American Joint Committee on Cancer (AJCC) Staging Manual, Head and Neck Section, introduces significant modifications from the prior seventh edition. This article details several of the most significant modifications, and the rationale for the revisions, to alert the reader to evolution of the field. The most significant update creates a separate staging algorithm for high-risk human papillomavirus-associated cancer of the oropharynx, distinguishing it from oropharyngeal cancer with other causes. Other modifications include: the reorganizing of skin cancer (other than melanoma and Merkel cell carcinoma) from a general chapter for the entire body to a head and neck-specific cutaneous malignancies chapter; division of cancer of the pharynx into 3 separate chapters; changes to the tumor (T) categories for oral cavity, skin, and nasopharynx; and the addition of extranodal cancer extension to lymph node category (N) in all but the viral-related cancers and mucosal melanoma. The Head and Neck Task Force worked with colleagues around the world to derive a staging system that reflects ongoing changes in head and neck oncology; it remains user friendly and consistent with the traditional tumor, lymph node, metastasis (TNM) staging paradigm.

Journal ArticleDOI
TL;DR: Results of simulations show that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes.
Abstract: Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
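To make the likelihood-ratio route concrete, here is a hedged sketch in Python, with statsmodels standing in for R's lme4 on simulated data; note that statsmodels does not provide the Kenward-Roger or Satterthwaite corrections the paper recommends, so only the LRT is shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# simulate a random-intercept design: 20 subjects, 10 observations each
rng = np.random.default_rng(1)
n_subj, n_obs = 20, 10
subject = np.repeat(np.arange(n_subj), n_obs)
x = rng.normal(size=n_subj * n_obs)
y = 0.4 * x + rng.normal(size=n_subj)[subject] + rng.normal(size=n_subj * n_obs)
df = pd.DataFrame({"y": y, "x": x, "subject": subject})

# models must be fitted with ML (reml=False) for a likelihood-ratio test
full = smf.mixedlm("y ~ x", df, groups=df["subject"]).fit(reml=False)
null = smf.mixedlm("y ~ 1", df, groups=df["subject"]).fit(reml=False)
lr = 2 * (full.llf - null.llf)
print("p =", chi2.sf(lr, df=1))  # the abstract warns this can be anti-conservative
```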

Journal ArticleDOI
TL;DR: Some recent additions to Clustal Omega are described, and some alternative ways of making alignments are benchmarked; the benchmarks are based on protein structure comparisons or predictions and include a recently described method based on secondary structure prediction.
Abstract: Clustal Omega is a widely used package for carrying out multiple sequence alignment. Here, we describe some recent additions to the package and benchmark some alternative ways of making alignments. These benchmarks are based on protein structure comparisons or predictions and include a recently described method based on secondary structure prediction. In general, Clustal Omega is fast enough to make very large alignments and the accuracy of protein alignments is high when compared to alternative packages. The package is freely available as executables or source code from www.clustal.org or can be run on-line from a variety of sites, especially the EBI www.ebi.ac.uk.
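For readers who want to try the package, a minimal invocation is sketched below, assuming the clustalo executable is installed and on PATH; the file names are placeholders.

```python
import subprocess

# Align the sequences in seqs.fasta and write the alignment to aln.fasta;
# --force overwrites an existing output file.
subprocess.run(
    ["clustalo", "-i", "seqs.fasta", "-o", "aln.fasta", "--force"],
    check=True)
```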

Journal ArticleDOI
TL;DR: It is the current feeling of the authors that, in view of the widely diverse beneficial functions that have been reported for melatonin, these may be merely epiphenomena of the more fundamental, yet‐to‐be identified basic action(s) of this ancient molecule.
Abstract: Melatonin is uncommonly effective in reducing oxidative stress under a remarkably large number of circumstances. It achieves this action via a variety of means: direct detoxification of reactive oxygen and reactive nitrogen species and indirectly by stimulating antioxidant enzymes while suppressing the activity of pro-oxidant enzymes. In addition to these well-described actions, melatonin also reportedly chelates transition metals, which are involved in the Fenton/Haber-Weiss reactions; in doing so, melatonin reduces the formation of the devastatingly toxic hydroxyl radical resulting in the reduction of oxidative stress. Melatonin's ubiquitous but unequal intracellular distribution, including its high concentrations in mitochondria, likely aid in its capacity to resist oxidative stress and cellular apoptosis. There is credible evidence to suggest that melatonin should be classified as a mitochondria-targeted antioxidant. Melatonin's capacity to prevent oxidative damage and the associated physiological debilitation is well documented in numerous experimental ischemia/reperfusion (hypoxia/reoxygenation) studies especially in the brain (stroke) and in the heart (heart attack). Melatonin, via its antiradical mechanisms, also reduces the toxicity of noxious prescription drugs and of methamphetamine, a drug of abuse. Experimental findings also indicate that melatonin renders treatment-resistant cancers sensitive to various therapeutic agents and may be useful, due to its multiple antioxidant actions, in especially delaying and perhaps treating a variety of age-related diseases and dehumanizing conditions. Melatonin has been effectively used to combat oxidative stress, inflammation and cellular apoptosis and to restore tissue function in a number of human trials; its efficacy supports its more extensive use in a wider variety of human studies. The uncommonly high-safety profile of melatonin also bolsters this conclusion. It is the current feeling of the authors that, in view of the widely diverse beneficial functions that have been reported for melatonin, these may be merely epiphenomena of the more fundamental, yet-to-be identified basic action(s) of this ancient molecule.

Journal ArticleDOI
TL;DR: The main goal of this study is to holistically analyze the security threats, challenges, and mechanisms inherent in all edge paradigms, while highlighting potential synergies and venues of collaboration.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the theory of dipole moments in crystalline insulators to higher multipole moments, and describe the topological invariants that protect these moments.
Abstract: We extend the theory of dipole moments in crystalline insulators to higher multipole moments. In this paper, we expand in great detail the theory presented in Ref. 1, and extend it to cover associated topological pumping phenomena, and a novel class of 3D insulator with chiral hinge states. In quantum-mechanical crystalline insulators, higher multipole bulk moments manifest themselves by the presence of boundary-localized moments of lower dimension, in exact correspondence with the electromagnetic theory of classical continuous dielectrics. In the presence of certain symmetries, these moments are quantized, and their boundary signatures are fractionalized. These multipole moments then correspond to new symmetry-protected topological (SPT) phases. The topological structure of these phases is described by "nested" Wilson loops, which reflect the bulk-boundary correspondence in a way that makes evident a hierarchical classification of the multipole moments. Just as a varying dipole generates charge pumping, a varying quadrupole generates dipole pumping, and a varying octupole generates quadrupole pumping. For non-trivial adiabatic cycles, the transport of these moments is quantized. An analysis of these interconnected phenomena leads to the conclusion that a new kind of Chern-type insulator exists, which has chiral, hinge-localized modes in 3D. We provide the minimal models for the quantized multipole moments, the non-trivial pumping processes and the hinge Chern insulator, and describe the topological invariants that protect them.
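The bulk-boundary hierarchy the abstract describes can be stated compactly; the following is a notational sketch in our own symbols, not the paper's equations.

```latex
% Sketch of the hierarchy for a 2D quadrupole insulator: the bulk quadrupole
% moment q_{xy} fixes edge polarizations one dimension down, which in turn fix
% corner charges; with the protecting symmetries all are quantized.
\begin{align*}
  |p_x^{\text{edge}}| = |p_y^{\text{edge}}| = |Q^{\text{corner}}| = |q_{xy}|,
  \qquad q_{xy} \in \{0,\ e/2\} \ \text{(with quantizing symmetries)},
\end{align*}
% and adiabatic variation of q_{xy} pumps dipole moment, just as a varying
% dipole pumps charge.
```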

Journal ArticleDOI
11 Aug 2020-BMJ
TL;DR: This article addresses the patient who has a delayed recovery from an episode of covid-19 that was managed in the community or in a standard hospital ward; such patients can be divided into those who may have serious sequelae and those with a non-specific clinical picture, often dominated by fatigue and breathlessness.
Abstract: What you need to know: Post-acute covid-19 (“long covid”) seems to be a multisystem disease, sometimes occurring after a relatively mild acute illness.1 Clinical management requires a whole-patient perspective.2 This article, intended for primary care clinicians, relates to the patient who has a delayed recovery from an episode of covid-19 that was managed in the community or in a standard hospital ward. Broadly, such patients can be divided into those who may have serious sequelae (such as thromboembolic complications) and those with a non-specific clinical picture, often dominated by fatigue and breathlessness. The specialist rehabilitation needs of a third group, covid-19 patients whose acute illness required intensive care, have been covered elsewhere.3 In the absence of agreed definitions, for the purposes of this article we define post-acute covid-19 as extending beyond three weeks from the onset of first symptoms and chronic covid-19 as extending beyond 12 weeks. Since many people were not tested, and false negative tests are common,4 we suggest that a positive test for covid-19 is not a prerequisite for diagnosis. How common is it? Around 10% of patients who have tested positive for SARS-CoV-2 virus remain unwell beyond three weeks, and a smaller proportion for months (see box 1).7 This is based on the UK COVID Symptom Study, in which people enter their ongoing symptoms on a smartphone app. This percentage is lower than that cited in many published observational …

Journal ArticleDOI
TL;DR: The authors introduce the R package rptR for the estimation of ICC and R for Gaussian, binomial and Poisson-distributed data; the package also allows the quantification of coefficients of determination R2 as well as of raw variance components.
Abstract: Intra-class correlations (ICC) and repeatabilities (R) are fundamental statistics for quantifying the reproducibility of measurements and for understanding the structure of biological variation. Linear mixed effects models offer a versatile framework for estimating ICC and R. However, while point estimation and significance testing by likelihood ratio tests is straightforward, the quantification of uncertainty is not as easily achieved. A further complication arises when the analysis is conducted on data with non-Gaussian distributions because the separation of the mean and the variance is less clear-cut for non-Gaussian than for Gaussian models. Nonetheless, there are solutions to approximate repeatability for the most widely used families of generalized linear mixed models (GLMMs). Here, we introduce the R package rptR for the estimation of ICC and R for Gaussian, binomial and Poisson-distributed data. Uncertainty in estimators is quantified by parametric bootstrapping and significance testing is implemented by likelihood ratio tests and through permutation of residuals. The package allows control for fixed effects and thus the estimation of adjusted repeatabilities (that remove fixed effect variance from the estimate) and enhanced agreement repeatabilities (that add fixed effect variance to the denominator). Furthermore, repeatability can be estimated from random-slope models. The package features convenient summary and plotting functions. Besides repeatabilities, the package also allows the quantification of coefficients of determination R2 as well as of raw variance components. We present an example analysis to demonstrate the core features and discuss some of the limitations of rptR.
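The central quantity is easy to state. As an illustration only (Python with statsmodels standing in for the R package, a Gaussian random-intercept model, simulated data), repeatability is the between-group variance as a share of the total:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulate repeated measurements: 30 groups, 5 replicates each
rng = np.random.default_rng(2)
n_grp, n_rep = 30, 5
group = np.repeat(np.arange(n_grp), n_rep)
y = rng.normal(size=n_grp)[group] + rng.normal(size=n_grp * n_rep)
df = pd.DataFrame({"y": y, "group": group})

m = smf.mixedlm("y ~ 1", df, groups=df["group"]).fit(reml=True)
var_group = float(m.cov_re.iloc[0, 0])   # between-group variance
var_resid = float(m.scale)               # residual variance
icc = var_group / (var_group + var_resid)
print(f"repeatability R = {icc:.3f}")    # rptR adds bootstrap CIs and tests
```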

Journal ArticleDOI
TL;DR: In this paper, the authors provide evidence on the transmission of monetary policy shocks in a setting with both economic and financial variables, and show that shocks identified using high frequency surprises around policy announcements as external instruments produce responses in output and inflation that are typical in monetary VAR analysis.
Abstract: We provide evidence on the transmission of monetary policy shocks in a setting with both economic and financial variables. We first show that shocks identified using high frequency surprises around policy announcements as external instruments produce responses in output and inflation that are typical in monetary VAR analysis. We also find, however, that the resulting "modest" movements in short rates lead to "large" movements in credit costs, which are due mainly to the reaction of both term premia and credit spreads. Finally, we show that forward guidance is important to the overall strength of policy transmission. (JEL E31, E32, E43, E44, E52, G01)

Journal ArticleDOI
TL;DR: The most common non-communicable diseases, including ischaemic heart disease, stroke, chronic obstructive pulmonary disease, and cancers (liver, stomach, and lung), contributed much more to YLLs in 2013 compared with 1990, and road injuries have become a top ten cause of death in all provinces in mainland China.

Proceedings ArticleDOI
20 May 2019
TL;DR: In this paper, a federated learning (FL) protocol for heterogeneous clients in a mobile edge computing (MEC) network is proposed; the authors address the inefficiency that arises when some clients have limited resources and propose a new protocol, FedCS, that solves a client selection problem to mitigate it.
Abstract: We envision a mobile edge computing (MEC) framework for machine learning (ML) technologies, which leverages distributed client data and computation resources for training high-performance ML models while preserving client privacy. Toward this future goal, this work aims to extend Federated Learning (FL), a decentralized learning framework that enables privacy-preserving training of models, to work with heterogeneous clients in a practical cellular network. The FL protocol iteratively asks random clients to download a trainable model from a server, update it with their own data, and upload the updated model to the server, while asking the server to aggregate multiple client updates to further improve the model. While clients in this protocol are free from disclosing their own private data, the overall training process can become inefficient when some clients have limited computational resources (i.e., requiring longer update time) or are under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions. Specifically, FedCS solves a client selection problem with resource constraints, which allows the server to aggregate as many client updates as possible and to accelerate performance improvement in ML models. We conducted an experimental evaluation using publicly-available large-scale image datasets to train deep neural networks in MEC environment simulations. The experimental results show that FedCS is able to complete its training process in a significantly shorter time compared to the original FL protocol.
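The selection step can be caricatured in a few lines. Below is a hedged sketch of packing clients into a round deadline; it is our own greedy simplification, whereas the paper formulates a constrained maximization in which model distribution, update, and upload phases overlap.

```python
from typing import Dict, List


def select_clients(est_time: Dict[str, float], deadline: float) -> List[str]:
    """Greedy stand-in for FedCS's selection step: admit as many clients as
    possible into one round, given each client's estimated update+upload time
    and a round deadline. (FedCS also models overlap between phases; this
    sketch ignores that.)"""
    chosen, elapsed = [], 0.0
    for cid, t in sorted(est_time.items(), key=lambda kv: kv[1]):
        if elapsed + t <= deadline:   # fastest clients first
            chosen.append(cid)
            elapsed += t
    return chosen


# e.g. clients a, d, and b fit within a 10-unit deadline; c is deferred
print(select_clients({"a": 3.0, "b": 5.0, "c": 9.0, "d": 2.0}, deadline=10.0))
```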

Journal ArticleDOI
TL;DR: Entrepreneurial ecosystems have emerged as a popular concept to explain the persistence of high-growth entrepreneurship within regions, but as a theoretical concept ecosystems remain underdeveloped.
Abstract: Entrepreneurial ecosystems have emerged as a popular concept to explain the persistence of high–growth entrepreneurship within regions. However, as a theoretical concept ecosystems remain underdeve...

Journal ArticleDOI
TL;DR: An overview of female breast cancer statistics in the United States is provided, including data on incidence, mortality, survival, and screening; racial disparities in mortality are likely to continue to widen in view of the increasing trends in breast cancer incidence rates in black women.
Abstract: In this article, the American Cancer Society provides an overview of female breast cancer statistics in the United States, including data on incidence, mortality, survival, and screening. Approximately 231,840 new cases of invasive breast cancer and 40,290 breast cancer deaths are expected to occur among US women in 2015. Breast cancer incidence rates increased among non-Hispanic black (black) and Asian/Pacific Islander women and were stable among non-Hispanic white (white), Hispanic, and American Indian/Alaska Native women from 2008 to 2012. Although white women have historically had higher incidence rates than black women, in 2012, the rates converged. Notably, during 2008 through 2012, incidence rates were significantly higher in black women compared with white women in 7 states, primarily located in the South. From 1989 to 2012, breast cancer death rates decreased by 36%, which translates to 249,000 breast cancer deaths averted in the United States over this period. This decrease in death rates was evident in all racial/ethnic groups except American Indians/Alaska Natives. However, the mortality disparity between black and white women nationwide has continued to widen; and, by 2012, death rates were 42% higher in black women than in white women. During 2003 through 2012, breast cancer death rates declined for white women in all 50 states; but, for black women, declines occurred in 27 of 30 states that had sufficient data to analyze trends. In 3 states (Mississippi, Oklahoma, and Wisconsin), breast cancer death rates in black women were stable during 2003 through 2012. Widening racial disparities in breast cancer mortality are likely to continue, at least in the short term, in view of the increasing trends in breast cancer incidence rates in black women.

Journal ArticleDOI
TL;DR: The thoroughly updated antiSMASH version 4 is presented, which adds several novel features, including prediction of gene cluster boundaries using the ClusterFinder method or the newly integrated CASSIS algorithm and improved substrate specificity prediction for non-ribosomal peptide synthetase adenylation domains based on the new SANDPUMA algorithm; several usability features have also been updated and improved.
Abstract: Many antibiotics, chemotherapeutics, crop protection agents and food preservatives originate from molecules produced by bacteria, fungi or plants. In recent years, genome mining methodologies have been widely adopted to identify and characterize the biosynthetic gene clusters encoding the production of such compounds. Since 2011, the 'antibiotics and secondary metabolite analysis shell', antiSMASH, has assisted researchers in efficiently performing this task, both as a web server and a standalone tool. Here, we present the thoroughly updated antiSMASH version 4, which adds several novel features, including prediction of gene cluster boundaries using the ClusterFinder method or the newly integrated CASSIS algorithm, improved substrate specificity prediction for non-ribosomal peptide synthetase adenylation domains based on the new SANDPUMA algorithm, improved predictions for terpene and ribosomally synthesized and post-translationally modified peptide cluster products, reporting of sequence similarity to proteins encoded in experimentally characterized gene clusters on a per-protein basis, and a domain-level alignment tool for comparative analysis of trans-AT polyketide synthase assembly line architectures. Additionally, several usability features have been updated and improved. Together, these improvements make antiSMASH up-to-date with the latest developments in natural product research and will further facilitate computational genome mining for the discovery of novel bioactive molecules.

Journal ArticleDOI
28 Apr 2017-Science
TL;DR: This work reports the design and demonstration of a device based on a porous metal-organic framework that captures water from the atmosphere at ambient conditions by using low-grade heat from natural sunlight at a flux of less than 1 sun (1 kilowatt per square meter).
Abstract: Atmospheric water is a resource equivalent to ~10% of all fresh water in lakes on Earth. However, an efficient process for capturing and delivering water from air, especially at low humidity levels (down to 20%), has not been developed. We report the design and demonstration of a device based on a porous metal-organic framework {MOF-801, [Zr6O4(OH)4(fumarate)6]} that captures water from the atmosphere at ambient conditions by using low-grade heat from natural sunlight at a flux of less than 1 sun (1 kilowatt per square meter). This device is capable of harvesting 2.8 liters of water per kilogram of MOF daily at relative humidity levels as low as 20% and requires no additional input of energy.