
Journal ArticleDOI
TL;DR: In this paper, the authors identify priorities for research in this area: (1) develop model host-microbiome systems for crop plants and non-crop plants with associated microbial culture collections and reference genomes, (2) define core microbiomes and metagenomes in these model systems, (3) elucidate the rules of synthetic, functionally programmable microbiome assembly, (4) determine functional mechanisms of plant-microbiome interactions, and (5) characterize and refine plant genotype-by-environment-by-microbiome-by-management interactions.
Abstract: Feeding a growing world population amidst climate change requires optimizing the reliability, resource use, and environmental impacts of food production. One way to assist in achieving these goals is to integrate beneficial plant microbiomes-i.e., those enhancing plant growth, nutrient use efficiency, abiotic stress tolerance, and disease resistance-into agricultural production. This integration will require a large-scale effort among academic researchers, industry researchers, and farmers to understand and manage plant-microbiome interactions in the context of modern agricultural systems. Here, we identify priorities for research in this area: (1) develop model host-microbiome systems for crop plants and non-crop plants with associated microbial culture collections and reference genomes, (2) define core microbiomes and metagenomes in these model systems, (3) elucidate the rules of synthetic, functionally programmable microbiome assembly, (4) determine functional mechanisms of plant-microbiome interactions, and (5) characterize and refine plant genotype-by-environment-by-microbiome-by-management interactions. Meeting these goals should accelerate our ability to design and implement effective agricultural microbiome manipulations and management strategies, which, in turn, will pay dividends for both the consumers and producers of the world food supply.

547 citations


Journal ArticleDOI
TL;DR: Adding ovarian suppression to tamoxifen did not provide a significant benefit in the overall study population, but for women who were at sufficient risk for recurrence to warrant adjuvant chemotherapy and who remained premenopausal, the addition of ovarian suppression improved disease outcomes.
Abstract: METHODS We randomly assigned 3066 premenopausal women, stratified according to prior receipt or nonreceipt of chemotherapy, to receive 5 years of tamoxifen, tamoxifen plus ovarian suppression, or exemestane plus ovarian suppression. The primary analysis tested the hypothesis that tamoxifen plus ovarian suppression would improve disease-free survival, as compared with tamoxifen alone. In the primary analysis, 46.7% of the patients had not received chemotherapy previously, and 53.3% had received chemotherapy and remained premenopausal. RESULTS After a median follow-up of 67 months, the estimated disease-free survival rate at 5 years was 86.6% in the tamoxifen–ovarian suppression group and 84.7% in the tamoxifen group (hazard ratio for disease recurrence, second invasive cancer, or death, 0.83; 95% confidence interval [CI], 0.66 to 1.04; P = 0.10). Multivariable allowance for prognostic factors suggested a greater treatment effect with tamoxifen plus ovarian suppression than with tamoxifen alone (hazard ratio, 0.78; 95% CI, 0.62 to 0.98). Most recurrences occurred in patients who had received prior chemotherapy, among whom the rate of freedom from breast cancer at 5 years was 82.5% in the tamoxifen–ovarian suppression group and 78.0% in the tamoxifen group (hazard ratio for recurrence, 0.78; 95% CI, 0.60 to 1.02). At 5 years, the rate of freedom from breast cancer was 85.7% in the exemestane–ovarian suppression group (hazard ratio for recurrence vs. tamoxifen, 0.65; 95% CI, 0.49 to 0.87). CONCLUSIONS Adding ovarian suppression to tamoxifen did not provide a significant benefit in the overall study population. However, for women who were at sufficient risk for recurrence to warrant adjuvant chemotherapy and who remained premenopausal, the addition of ovarian suppression improved disease outcomes. Further improvement was seen with the use of exemestane plus ovarian suppression. (Funded by Pfizer and others; SOFT ClinicalTrials.gov number, NCT00066690.)

547 citations


Posted ContentDOI
04 Jan 2021-medRxiv
TL;DR: The SARS-CoV-2 lineage B.1.1.7, now designated Variant of Concern 202012/01 (VOC) by Public Health England, originated in the UK in late Summer to early Autumn 2020, as mentioned in this paper.
Abstract: The SARS-CoV-2 lineage B.1.1.7, now designated Variant of Concern 202012/01 (VOC) by Public Health England, originated in the UK in late Summer to early Autumn 2020. We examine epidemiological evidence for this VOC having a transmission advantage from several perspectives. First, whole genome sequence data collected from community-based diagnostic testing provides an indication of changing prevalence of different genetic variants through time. Phylodynamic modelling additionally indicates that genetic diversity of this lineage has changed in a manner consistent with exponential growth. Second, we find that changes in VOC frequency inferred from genetic data correspond closely to changes inferred by S-gene target failures (SGTF) in community-based diagnostic PCR testing. Third, we examine growth trends in SGTF and non-SGTF case numbers at local area level across England, and show that the VOC has higher transmissibility than non-VOC lineages, even if the VOC has a different latent period or generation time. Available SGTF data indicate a shift in the age composition of reported cases, with a larger share of under 20 year olds among reported VOC than non-VOC cases. Fourth, we assess the association of VOC frequency with independent estimates of the overall SARS-CoV-2 reproduction number through time. Finally, we fit a semi-mechanistic model directly to local VOC and non-VOC case incidence to estimate the reproduction numbers over time for each. There is a consensus among all analyses that the VOC has a substantial transmission advantage, with the estimated difference in reproduction numbers between VOC and non-VOC ranging between 0.4 and 0.7, and the ratio of reproduction numbers varying between 1.4 and 1.8. We note that these estimates of transmission advantage apply to a period where high levels of social distancing were in place in England; extrapolation to other transmission contexts therefore requires caution.
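For a concrete sense of what the headline figures imply, here is a small arithmetic sketch; the baseline non-VOC reproduction number below is an assumption for illustration, not a figure from the paper:

```python
# Worked example of the reported transmission advantage.
# The abstract reports an additive advantage of 0.4-0.7 and a multiplicative
# advantage of 1.4-1.8 in reproduction numbers; the baseline R is assumed.
r_non_voc = 0.9  # hypothetical non-VOC reproduction number under restrictions

for additive in (0.4, 0.7):
    print(f"additive +{additive}: VOC R = {r_non_voc + additive:.2f}")

for ratio in (1.4, 1.8):
    print(f"multiplicative x{ratio}: VOC R = {r_non_voc * ratio:.2f}")
```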

547 citations


Posted ContentDOI
25 Feb 2020-medRxiv
TL;DR: In this paper, the neurological manifestations of patients with coronavirus disease 2019 (COVID-19) were studied in three categories: central nervous system (CNS) symptoms or diseases (headache, dizziness, impaired consciousness, ataxia, acute cerebrovascular disease, and epilepsy), peripheral nervous system symptoms (hypogeusia, hyposmia, hypopsia, and neuralgia), and skeletal muscular symptoms.
Abstract: OBJECTIVE To study the neurological manifestations of patients with coronavirus disease 2019 (COVID-19). DESIGN Retrospective case series. SETTING Three designated COVID-19 care hospitals of the Union Hospital of Huazhong University of Science and Technology in Wuhan, China. PARTICIPANTS Two hundred fourteen hospitalized patients with laboratory confirmed diagnosis of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Data were collected from 16 January 2020 to 19 February 2020. MAIN OUTCOME MEASURES Clinical data were extracted from electronic medical records and reviewed by a trained team of physicians. Neurological symptoms fell into three categories: central nervous system (CNS) symptoms or diseases (headache, dizziness, impaired consciousness, ataxia, acute cerebrovascular disease, and epilepsy), peripheral nervous system (PNS) symptoms (hypogeusia, hyposmia, hypopsia, and neuralgia), and skeletal muscular symptoms. Data of all neurological symptoms were checked by two trained neurologists. RESULTS Of 214 patients studied, 88 (41.1%) were severe and 126 (58.9%) were non-severe patients. Compared with non-severe patients, severe patients were older (58.7 ± 15.0 years vs 48.9 ± 14.7 years), had more underlying disorders (42 [47.7%] vs 41 [32.5%]), especially hypertension (32 [36.4%] vs 19 [15.1%]), and showed less typical symptoms such as fever (40 [45.5%] vs 92 [73%]) and cough (30 [34.1%] vs 77 [61.1%]). Seventy-eight (36.4%) patients had neurologic manifestations. More severe patients were likely to have neurologic symptoms (40 [45.5%] vs 38 [30.2%]), such as acute cerebrovascular diseases (5 [5.7%] vs 1 [0.8%]), impaired consciousness (13 [14.8%] vs 3 [2.4%]) and skeletal muscle injury (17 [19.3%] vs 6 [4.8%]). CONCLUSION Compared with non-severe patients with COVID-19, severe patients commonly had neurologic symptoms manifested as acute cerebrovascular diseases, consciousness impairment and skeletal muscle symptoms.

547 citations


Journal ArticleDOI
TL;DR: High-performance planar heterojunction perovskite solar cells constructed on highly flexible and ultrathin silver-mesh/conducting polymer substrates are demonstrated; the devices show excellent robustness against mechanical deformation and are promising for future applications in flexible and bendable solar cells.
Abstract: Wide applications of personal consumer electronics have triggered tremendous need for portable power sources featuring light-weight and mechanical flexibility. Perovskite solar cells offer a compelling combination of low-cost and high device performance. Here we demonstrate high-performance planar heterojunction perovskite solar cells constructed on highly flexible and ultrathin silver-mesh/conducting polymer substrates. The device performance is comparable to that of their counterparts on rigid glass/indium tin oxide substrates, reaching a power conversion efficiency of 14.0%, while the specific power (the ratio of power to device weight) reaches 1.96 kW kg⁻¹, given the fact that the device is constructed on a 57-μm-thick polyethylene terephthalate based substrate. The flexible device also demonstrates excellent robustness against mechanical deformation, retaining >95% of its original efficiency after 5,000 full bending cycles. Our results confirmed that perovskite thin films are fully compatible with our flexible substrates, and are thus promising for future applications in flexible and bendable solar cells.
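The reported specific power can be roughly sanity-checked from the figures in the abstract. The sketch below assumes standard AM1.5 illumination (100 mW cm⁻²) and that the 57-μm PET substrate dominates the device mass; both assumptions are made here for illustration, not stated in the paper:

```python
# Back-of-envelope check of the 1.96 kW/kg specific power (assumptions noted).
AM15_IRRADIANCE = 1000.0      # W/m^2, standard test condition (assumed)
pce = 0.14                    # 14.0% power conversion efficiency (from the abstract)

p_area = pce * AM15_IRRADIANCE          # output power per area: 140 W/m^2
# Mass per area if the 57-um PET substrate dominates (PET density ~1.38 g/cm^3, assumed):
pet_mass_area = 1380.0 * 57e-6          # kg/m^2 ~= 0.079 kg/m^2

print(f"specific power ~ {p_area / pet_mass_area / 1000:.2f} kW/kg")
# ~1.78 kW/kg, the same order as the reported 1.96 kW/kg (other layers are thin).
```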

547 citations


Journal ArticleDOI
TL;DR: This paper proposes a four-phase interview protocol refinement (IPR) framework to improve the quality of qualitative interviews by ensuring interview questions align with the research questions, organizing an interview protocol to create an inquiry-based conversation, having the protocol reviewed by others, and piloting it.
Abstract: Interviews provide researchers with rich and detailed qualitative data for understanding participants' experiences, how they describe those experiences, and the meaning they make of those experiences (Rubin & Rubin, 2012). Given the centrality of interviews for qualitative research, books and articles on conducting research interviews abound. These existing resources typically focus on: the conditions fostering quality interviews, such as gaining access to and selecting participants (Rubin & Rubin, 2012; Seidman, 2013; Weiss, 1994); building trust (Rubin & Rubin, 2012); the location and length of time of the interview (Weiss, 1994); the order, quality, and clarity of questions (Patton, 2015; Rubin & Rubin, 2012); and the overall process of conducting an interview (Brinkmann & Kvale, 2015; Patton, 2015). Existing resources on conducting research interviews individually offer valuable guidance but do not come together to offer a systematic framework for developing and refining interview protocols. In this article, I present the interview protocol refinement (IPR) framework--a four-phase process to develop and fine-tune interview protocols. IPR's four phases are: ensuring interview questions align with the study's research questions, organizing an interview protocol to create an inquiry-based conversation, having the protocol reviewed by others, and piloting it. Qualitative researchers can strengthen the reliability of their interview protocols as instruments by refining them through the IPR framework presented here. By enhancing the reliability of interview protocols, researchers can increase the quality of data they obtain from research interviews. Furthermore, the IPR framework can provide qualitative researchers with a shared language for indicating the rigorous steps taken to develop interview protocols and ensure their congruency with the study at hand (Jones, Torres, & Arminio, 2014). The IPR framework is most suitable for refining structured or semi-structured interviews. The IPR framework, however, may also support development of non-structured interview guides, which have topics for discussions or a small set of broad questions to facilitate the conversation. For instance, from a grounded theory perspective, piloting interview protocols/guides is unnecessary because each interview is designed to build from information learned in prior interviews (Corbin & Strauss, 2015). Yet, given the important role the first interview plays in setting the foundation for all the interviews that follow, having an initial interview protocol vetted through the recursive process I outline here may strengthen the quality of data obtained throughout the entire study. As such, I frame the IPR framework as a viable approach to developing a strong initial interview protocol so the researcher is likely to elicit rich, focused, meaningful data that captures, to the extent possible, the experiences of participants. The Four-Phase Process to Interview Protocol Refinement (IPR) The interview protocol framework comprises four phases: Phase 1: Ensuring interview questions align with research questions; Phase 2: Constructing an inquiry-based conversation; Phase 3: Receiving feedback on interview protocols; and Phase 4: Piloting the interview protocol. Each phase helps the researcher take one step further toward developing a research instrument appropriate for their participants and congruent with the aims of the research (Jones et al., 2014).
Congruency means the researchers' interviews are anchored in the purpose of the study and the research questions. Combined, these four phases offer a systematic framework for developing a well-vetted interview protocol that can help a researcher obtain robust and detailed interview data necessary to address research questions. Phase 1: Ensuring Interview Questions Align With Research Questions The first phase focuses on the alignment between interview questions and research questions. …

546 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the recent progress in the study of topological nodal line semimetals in 3D, discuss different scenarios in which, when the protecting symmetry is broken, a topological nodal line semimetal becomes a Weyl semimetal, a Dirac semimetal, or another topological phase, and discuss the possible physical effects accessible to experimental probes in these materials.
Abstract: We review the recent, mainly theoretical, progress in the study of topological nodal line semimetals in three dimensions. In these semimetals, the conduction and the valence bands cross each other along a one-dimensional curve in the three-dimensional Brillouin zone, and any perturbation that preserves a certain symmetry group (generated by either spatial symmetries or time-reversal symmetry) cannot remove this crossing line and open a full direct gap between the two bands. The nodal line(s) is hence topologically protected by the symmetry group, and can be associated with a topological invariant. In this review, (i) we enumerate the symmetry groups that may protect a topological nodal line; (ii) we write down the explicit form of the topological invariant for each of these symmetry groups in terms of the wave functions on the Fermi surface, establishing a topological classification; (iii) for certain classes, we review the proposals for the realization of these semimetals in real materials; (iv) we discuss different scenarios in which, when the protecting symmetry is broken, a topological nodal line semimetal becomes a Weyl semimetal, a Dirac semimetal, or another topological phase; and (v) we discuss the possible physical effects accessible to experimental probes in these materials.
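As a concrete illustration of bands crossing along a one-dimensional curve, here is a standard two-band toy model; it is a generic textbook example, not a model taken from the review:

```python
# Minimal two-band toy model with a nodal ring (illustrative only).
# H(k) = (kx^2 + ky^2 + kz^2 - m) * sigma_x + kz * sigma_y
# Bands E(k) = +/- sqrt((|k|^2 - m)^2 + kz^2) touch where kz = 0 and
# kx^2 + ky^2 = m, i.e. along a circle of radius sqrt(m) in the kz = 0 plane.
import numpy as np

m = 1.0
kx, ky = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
gap = 2 * np.abs(kx**2 + ky**2 - m)     # direct gap in the kz = 0 plane

ring = np.abs(np.sqrt(kx**2 + ky**2) - np.sqrt(m)) < 5e-3
print(f"grid points within 5e-3 of the nodal ring: {ring.sum()}")
print(f"max gap on those points: {gap[ring].max():.4f}")  # small, -> 0 on the ring
```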

546 citations


Posted Content
TL;DR: It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Abstract: Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task.
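A minimal sketch of the ADDA-style adaptation step described above, written in PyTorch for illustration; the architectures, sizes, and data are placeholders, not the authors' released implementation:

```python
# ADDA-style adaptation sketch: a fixed source encoder, an untied target
# encoder, and a domain discriminator trained with a standard GAN loss.
import torch
import torch.nn as nn

def make_encoder():
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))

source_enc = make_encoder()          # assumed pretrained on labeled source data
target_enc = make_encoder()          # initialized from source weights, then untied
target_enc.load_state_dict(source_enc.state_dict())
for p in source_enc.parameters():    # source encoder stays frozen during adaptation
    p.requires_grad_(False)

disc = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(target_enc.parameters(), lr=1e-4)

xs, xt = torch.randn(32, 1, 28, 28), torch.randn(32, 1, 28, 28)  # stand-in batches

# 1) Discriminator: tell source features (label 1) from target features (label 0).
d_loss = bce(disc(source_enc(xs)), torch.ones(32, 1)) + \
         bce(disc(target_enc(xt).detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Target encoder: fool the discriminator (inverted labels, GAN-style).
t_loss = bce(disc(target_enc(xt)), torch.ones(32, 1))
opt_t.zero_grad(); t_loss.backward(); opt_t.step()
```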

546 citations


Posted Content
TL;DR: This paper proposes a fine discretization of the 3D space around the subject and trains a ConvNet to predict per voxel likelihoods for each joint, which creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates.
Abstract: This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30% on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.
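To illustrate the volumetric representation, the sketch below decodes a per-voxel likelihood volume for one joint into a 3D coordinate. The differentiable soft-argmax readout is one common choice for this step, not necessarily the paper's exact decoding:

```python
# Recovering a 3D joint location from a per-voxel likelihood volume (sketch).
import numpy as np

D = 16                                  # voxels per axis (assumed resolution)
logits = np.random.randn(D, D, D)       # network output for one joint (stand-in)

probs = np.exp(logits - logits.max())
probs /= probs.sum()                    # per-voxel likelihoods for this joint

# Expected voxel coordinate along each axis (soft-argmax):
grid = np.arange(D)
z = (probs.sum(axis=(1, 2)) * grid).sum()
y = (probs.sum(axis=(0, 2)) * grid).sum()
x = (probs.sum(axis=(0, 1)) * grid).sum()
print(f"estimated voxel coords (z, y, x) = ({z:.2f}, {y:.2f}, {x:.2f})")
```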

546 citations


Journal ArticleDOI
TL;DR: In this article, a provider-facing registry-based study collected cases of cutaneous manifestations after COVID-19 vaccination and found that delayed large local reactions were most common, followed by local injection site reactions, urticarial eruptions, and morbilliform eruptions.
Abstract: Background Cutaneous reactions after messenger RNA (mRNA)-based COVID-19 vaccines have been reported but are not well characterized. Objective To evaluate the morphology and timing of cutaneous reactions after mRNA COVID-19 vaccines. Methods A provider-facing registry-based study collected cases of cutaneous manifestations after COVID-19 vaccination. Results From December 2020 to February 2021, we recorded 414 cutaneous reactions to mRNA COVID-19 vaccines from Moderna (83%) and Pfizer (17%). Delayed large local reactions were most common, followed by local injection site reactions, urticarial eruptions, and morbilliform eruptions. Forty-three percent of patients with first-dose reactions experienced second-dose recurrence. Additional less common reactions included pernio/chilblains, cosmetic filler reactions, zoster, herpes simplex flares, and pityriasis rosea-like reactions. Limitations Registry analysis does not measure incidence. Morphologic misclassification is possible. Conclusions We report a spectrum of cutaneous reactions after mRNA COVID-19 vaccines. We observed some dermatologic reactions to Moderna and Pfizer vaccines that mimicked SARS-CoV-2 infection itself, such as pernio/chilblains. Most patients with first-dose reactions did not have a second-dose reaction and serious adverse events did not develop in any of the patients in the registry after the first or second dose. Our data support that cutaneous reactions to COVID-19 vaccination are generally minor and self-limited, and should not discourage vaccination.

546 citations


Proceedings Article
12 Feb 2016
TL;DR: This work develops two versions of the Constrained Laplacian Rank (CLR) method, based upon the L1-norm and the L2-norm, which yield two new graph-based clustering objectives and derives optimization algorithms to solve them.
Abstract: Graph-based clustering methods perform clustering on a fixed input data graph. If this initial construction is of low quality then the resulting clustering may also be of low quality. Moreover, existing graph-based clustering methods require post-processing on the data graph to extract the clustering indicators. We address both of these drawbacks by allowing the data graph itself to be adjusted as part of the clustering procedure. In particular, our Constrained Laplacian Rank (CLR) method learns a graph with exactly k connected components (where k is the number of clusters). We develop two versions of this method, based upon the L1-norm and the L2-norm, which yield two new graph-based clustering objectives. We derive optimization algorithms to solve these objectives. Experimental results on synthetic datasets and real-world benchmark datasets exhibit the effectiveness of this new graph-based clustering method.
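The spectral fact CLR builds on is that the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components, so constraining rank(L) = n − k enforces exactly k clusters. A minimal numerical check of that fact on a toy graph (not the CLR optimizer itself):

```python
# Zero eigenvalues of the graph Laplacian count connected components.
import numpy as np

# Block-diagonal affinity: two components of sizes 3 and 2 (toy example).
W = np.zeros((5, 5))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
eigvals = np.linalg.eigvalsh(L)
n_components = int(np.sum(eigvals < 1e-10))
print(f"zero eigenvalues = connected components = {n_components}")  # -> 2
```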

Posted Content
TL;DR: In this article, the authors investigated the research development, current trends and intellectual structure of topic modeling based on Latent Dirichlet Allocation (LDA), and summarized challenges and introduced famous tools and datasets in topic modelling based on LDA.
Abstract: Topic modeling is one of the most powerful techniques in text mining for data mining, latent data discovery, and finding relationships among data and text documents. Researchers have published many articles in the field of topic modeling and applied it in various fields such as software engineering, political science, and medical and linguistic science. There are various methods for topic modeling, of which Latent Dirichlet Allocation (LDA) is one of the most popular. Researchers have proposed various models based on LDA for topic modeling. Building on this previous work, this paper can be very useful and valuable for introducing LDA approaches in topic modeling. In this paper, we investigated scholarly articles (published between 2003 and 2016) highly related to topic modeling based on LDA to discover the research development, current trends, and intellectual structure of topic modeling. We also summarize the challenges and introduce well-known tools and datasets for topic modeling based on LDA.
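As a minimal hands-on illustration of LDA itself, here is a sketch using scikit-learn, one of many implementations within the survey's scope; the corpus and topic count are toy choices:

```python
# Minimal LDA example with scikit-learn (toy corpus, two topics).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell sharply today",
    "investors traded stocks and bonds",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                     # document-term count matrix
vocab = vec.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")                  # top words per learned topic
```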

Journal ArticleDOI
TL;DR: A detailed analysis of the size profiles of plasma DNA in 90 patients with hepatocellular carcinoma, 67 with chronic hepatitis B, 36 with hepatitis B-associated cirrhosis, and 32 healthy controls using massively parallel sequencing to achieve plasma DNA size measurement at single-base resolution and in a genome-wide manner improved understanding of thesize profile of tumor-derived circulating cell-free DNA.
Abstract: The analysis of tumor-derived circulating cell-free DNA opens up new possibilities for performing liquid biopsies for the assessment of solid tumors. Although its clinical potential has been increasingly recognized, many aspects of the biological characteristics of tumor-derived cell-free DNA remain unclear. With respect to the size profile of such plasma DNA molecules, a number of studies reported the finding of increased integrity of tumor-derived plasma DNA, whereas others found evidence to suggest that plasma DNA molecules released by tumors might be shorter. Here, we performed a detailed analysis of the size profiles of plasma DNA in 90 patients with hepatocellular carcinoma, 67 with chronic hepatitis B, 36 with hepatitis B-associated cirrhosis, and 32 healthy controls. We used massively parallel sequencing to achieve plasma DNA size measurement at single-base resolution and in a genome-wide manner. Tumor-derived plasma DNA molecules were further identified with the use of chromosome arm-level z-score analysis (CAZA), which facilitated the studying of their specific size profiles. We showed that populations of aberrantly short and long DNA molecules existed in the plasma of patients with hepatocellular carcinoma. The short ones preferentially carried the tumor-associated copy number aberrations. We further showed that there were elevated amounts of plasma mitochondrial DNA in the plasma of hepatocellular carcinoma patients. Such molecules were much shorter than the nuclear DNA in plasma. These results have improved our understanding of the size profile of tumor-derived circulating cell-free DNA and might further enhance our ability to use plasma DNA as a molecular diagnostic tool.
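A simplified sketch of the chromosome arm-level z-score logic behind CAZA, with made-up numbers standing in for real sequencing counts:

```python
# Arm-level z-score sketch (CAZA-style logic; toy data, not real counts).
import numpy as np

# Fraction of aligned fragments mapping to one chromosome arm in 32 healthy
# controls (synthetic stand-in values):
controls = np.random.default_rng(0).normal(loc=0.0260, scale=0.0004, size=32)
patient = 0.0285                      # same fraction in a patient sample (toy value)

z = (patient - controls.mean()) / controls.std(ddof=1)
print(f"arm-level z-score = {z:.1f}")
# A large |z| would suggest a copy number aberration on that arm.
```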

Book ChapterDOI
Tianwei Lin1, Xu Zhao1, Haisheng Su1, Chongjing Wang, Ming Yang1 
08 Sep 2018
TL;DR: An effective proposal generation method, named Boundary-Sensitive Network (BSN), adopts a “local to global” fashion and significantly improves the state-of-the-art temporal action detection performance.
Abstract: Temporal action proposal generation is an important yet challenging problem, since temporal proposals with rich action content are indispensable for analysing real-world videos with long duration and a high proportion of irrelevant content. This problem requires methods that not only generate proposals with precise temporal boundaries, but also retrieve proposals that cover ground-truth action instances with high recall and high overlap using relatively few proposals. To address these difficulties, we introduce an effective proposal generation method, named Boundary-Sensitive Network (BSN), which adopts a “local to global” fashion. Locally, BSN first locates temporal boundaries with high probabilities, then directly combines these boundaries as proposals. Globally, with the Boundary-Sensitive Proposal feature, BSN retrieves proposals by evaluating the confidence of whether a proposal contains an action within its region. We conduct experiments on two challenging datasets: ActivityNet-1.3 and THUMOS14, where BSN outperforms other state-of-the-art temporal action proposal generation methods with high recall and high temporal precision. Finally, further experiments demonstrate that by combining existing action classifiers, our method significantly improves the state-of-the-art temporal action detection performance.
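A toy sketch of the "local to global" assembly step: threshold per-frame start and end probabilities, then pair each start with later ends to form candidate proposals. The real BSN additionally scores proposals with boundary-sensitive features; the probabilities below are invented:

```python
# Boundary pairing sketch in the BSN spirit (toy probability sequences).
import numpy as np

start_prob = np.array([0.1, 0.9, 0.2, 0.1, 0.1, 0.8, 0.1, 0.1])
end_prob   = np.array([0.1, 0.1, 0.1, 0.85, 0.1, 0.1, 0.1, 0.9])

starts = np.where(start_prob > 0.5)[0]      # locally high start probability
ends = np.where(end_prob > 0.5)[0]          # locally high end probability

proposals = [(s, e, start_prob[s] * end_prob[e])
             for s in starts for e in ends if e > s]
for s, e, score in sorted(proposals, key=lambda p: -p[2]):
    print(f"proposal frames [{s}, {e}], score {score:.2f}")
```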

Journal ArticleDOI
04 Nov 2020-BMJ
TL;DR: Even a four week delay of cancer treatment is associated with increased mortality across surgical, systemic treatment, and radiotherapy indications for seven cancers and policies focused on minimising system level delays to cancer treatment initiation could improve population level survival outcomes.
Abstract: Objective To quantify the association of cancer treatment delay and mortality for each four week increase in delay to inform cancer treatment pathways. Design Systematic review and meta-analysis. Data sources Published studies in Medline from 1 January 2000 to 10 April 2020. Eligibility criteria for selecting studies Curative, neoadjuvant, and adjuvant indications for surgery, systemic treatment, or radiotherapy for cancers of the bladder, breast, colon, rectum, lung, cervix, and head and neck were included. The main outcome measure was the hazard ratio for overall survival for each four week delay for each indication. Delay was measured from diagnosis to first treatment, or from the completion of one treatment to the start of the next. The primary analysis only included high validity studies controlling for major prognostic factors. Hazard ratios were assumed to be log linear in relation to overall survival and were converted to an effect for each four week delay. Pooled effects were estimated using DerSimonian and Laird random effect models. Results The review included 34 studies for 17 indications (n=1 272 681 patients). No high validity data were found for five of the radiotherapy indications or for cervical cancer surgery. The association between delay and increased mortality was significant. Conclusions Cancer treatment delay is a problem in health systems worldwide. The impact of delay on mortality can now be quantified for prioritisation and modelling. Even a four week delay of cancer treatment is associated with increased mortality across surgical, systemic treatment, and radiotherapy indications for seven cancers. Policies focused on minimising system level delays to cancer treatment initiation could improve population level survival outcomes.
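Because hazard ratios are assumed log linear in delay, a per-four-week hazard ratio compounds multiplicatively over longer delays. A worked example with a hypothetical hazard ratio (not a figure from the paper):

```python
# Log-linear scaling of a per-four-week hazard ratio, as assumed in the analysis.
import math

hr_4wk = 1.08                        # hypothetical hazard ratio per 4-week delay
for weeks in (4, 8, 12):
    hr = math.exp(math.log(hr_4wk) * weeks / 4)
    print(f"{weeks:>2}-week delay: HR = {hr:.3f}")
# 8 weeks -> 1.08^2 ~= 1.166; 12 weeks -> ~1.260
```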

Journal ArticleDOI
TL;DR: Without bells and whistles, FoveaBox achieves state-of-the-art single model performance on the standard COCO and Pascal VOC object detection benchmark and avoids all computation and hyper-parameters related to anchor boxes, which are often sensitive to the final detection performance.
Abstract: We present FoveaBox, an accurate, flexible, and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, their performance and generalization ability are also limited to the design of anchors. Instead, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations. In FoveaBox, an instance is assigned to adjacent feature levels to make the model more accurate. We demonstrate its effectiveness on standard benchmarks and report extensive experimental analysis. Without bells and whistles, FoveaBox achieves state-of-the-art single model performance on the standard COCO and Pascal VOC object detection benchmark. More importantly, FoveaBox avoids all computation and hyper-parameters related to anchor boxes, which are often sensitive to the final detection performance. We believe the simple and effective approach will serve as a solid baseline and help ease future research for object detection. The code has been made publicly available at https://github.com/taokong/FoveaBox .


Journal ArticleDOI
17 Dec 2015
TL;DR: No effective medical interventions exist that completely reverse the disease other than lifestyle changes, dietary alterations and, possibly, bariatric surgery, however, several strategies that target pathophysiological processes such as an oversupply of fatty acids to the liver, cell injury and inflammation are currently under investigation.
Abstract: Nonalcoholic fatty liver disease (NAFLD) is a disorder characterized by excess accumulation of fat in hepatocytes (nonalcoholic fatty liver (NAFL)); in up to 40% of individuals, there are additional findings of portal and lobular inflammation and hepatocyte injury (which characterize nonalcoholic steatohepatitis (NASH)). A subset of patients will develop progressive fibrosis, which can progress to cirrhosis. Hepatocellular carcinoma and cardiovascular complications are life-threatening co-morbidities of both NAFL and NASH. NAFLD is closely associated with insulin resistance; obesity and metabolic syndrome are common underlying factors. As a consequence, the prevalence of NAFLD is estimated to be 10-40% in adults worldwide, and it is the most common liver disease in children and adolescents in developed countries. Mechanistic insights into fat accumulation, subsequent hepatocyte injury, the role of the immune system and fibrosis as well as the role of the gut microbiota are unfolding. Furthermore, genetic and epigenetic factors might explain the considerable interindividual variation in disease phenotype, severity and progression. To date, no effective medical interventions exist that completely reverse the disease other than lifestyle changes, dietary alterations and, possibly, bariatric surgery. However, several strategies that target pathophysiological processes such as an oversupply of fatty acids to the liver, cell injury and inflammation are currently under investigation. Diagnosis of NAFLD can be established by imaging, but detection of the lesions of NASH still depends on the gold-standard but invasive liver biopsy. Several non-invasive strategies are being evaluated to replace or complement biopsies, especially for follow-up monitoring.

Proceedings Article
17 Apr 2019
TL;DR: A graph pooling method based on self-attention is proposed that considers both node features and graph topology, achieving superior graph classification performance on the benchmark datasets using a reasonable number of parameters.
Abstract: Advanced methods of applying deep learning to structured data such as graphs have been proposed in recent years. In particular, studies have focused on generalizing convolutional neural networks to graph data, which includes redefining the convolution and the downsampling (pooling) operations for graphs. The method of generalizing the convolution operation to graphs has been proven to improve performance and is widely used. However, the method of applying downsampling to graphs is still difficult to perform and has room for improvement. In this paper, we propose a graph pooling method based on self-attention. Self-attention using graph convolution allows our pooling method to consider both node features and graph topology. To ensure a fair comparison, the same training procedures and model architectures were used for the existing pooling methods and our method. The experimental results demonstrate that our method achieves superior graph classification performance on the benchmark datasets using a reasonable number of parameters.
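A simplified numpy sketch of self-attention pooling: attention scores come from one graph convolution, the top-ranked nodes are kept, and their features are gated by the scores. The layer choices and pooling ratio below are toy assumptions, not the paper's configuration:

```python
# Self-attention graph pooling sketch (one linear GCN layer, plain numpy).
import numpy as np

rng = np.random.default_rng(1)
N, F, ratio = 6, 4, 0.5
X = rng.normal(size=(N, F))                 # node features
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 1.0)   # symmetric adjacency + self loops

deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))     # symmetric normalization D^-1/2 A D^-1/2

w = rng.normal(size=(F, 1))
scores = np.tanh(A_hat @ X @ w).ravel()     # attention scores via graph convolution

keep = np.argsort(scores)[-int(np.ceil(ratio * N)):]   # top-ranked nodes
X_pooled = X[keep] * scores[keep, None]     # gate kept features by their scores
A_pooled = A[np.ix_(keep, keep)]            # induced subgraph
print(X_pooled.shape, A_pooled.shape)       # (3, 4) (3, 3)
```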

Journal ArticleDOI
01 Feb 2021
TL;DR: In this paper, the authors reported persistent symptoms and a decline in health-related quality of life (HRQoL) after coronavirus disease 2019 (COVID-19) illness.
Abstract: Many individuals experience persistent symptoms and a decline in health-related quality of life (HRQoL) after coronavirus disease 2019 (COVID-19) illness.1 Existing studies have focused on hospitalized individuals 30 to 90 days after illness onset2,3,4 and have reported symptoms up to 110 days after illness.3 Longer-term sequelae in outpatients have not been well characterized.

Journal ArticleDOI
TL;DR: A metalearner, the X-learner, is proposed, which can adapt to structural properties, such as the smoothness and sparsity of the underlying treatment effect, and is shown to be easy to use and to produce results that are interpretable.
Abstract: There is growing interest in estimating and analyzing heterogeneous treatment effects in experimental and observational studies. We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms-such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks-to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the metalearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms. A software package is provided that implements our methods.
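A compact sketch of the X-learner's three stages with random forest base learners, on synthetic data with a known propensity of 0.5; this is an illustration, not the authors' software package:

```python
# X-learner sketch: outcome models, imputed effects, propensity-weighted combination.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
w = rng.integers(0, 2, size=n)                    # randomized treatment assignment
tau_true = 0.5 * X[:, 0]                          # heterogeneous treatment effect
y = X[:, 1] + w * tau_true + rng.normal(scale=0.1, size=n)

# Stage 1: outcome models per arm.
mu0 = RandomForestRegressor(random_state=0).fit(X[w == 0], y[w == 0])
mu1 = RandomForestRegressor(random_state=0).fit(X[w == 1], y[w == 1])

# Stage 2: imputed individual effects, then CATE models per arm.
d1 = y[w == 1] - mu0.predict(X[w == 1])
d0 = mu1.predict(X[w == 0]) - y[w == 0]
tau1 = RandomForestRegressor(random_state=0).fit(X[w == 1], d1)
tau0 = RandomForestRegressor(random_state=0).fit(X[w == 0], d0)

# Stage 3: combine with the propensity score g(x) (constant 0.5 here by design).
g = 0.5
cate = g * tau0.predict(X) + (1 - g) * tau1.predict(X)
print(f"corr with true effect: {np.corrcoef(cate, tau_true)[0, 1]:.2f}")
```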

Journal ArticleDOI
TL;DR: The mission of this scientific statement is to describe the epidemiology and pathogenesis of cardiorenal syndrome in the context of the continuously evolving nature of its clinicopathological description over the past decade.
Abstract: Cardiorenal syndrome encompasses a spectrum of disorders involving both the heart and kidneys in which acute or chronic dysfunction in 1 organ may induce acute or chronic dysfunction in the other organ. It represents the confluence of heart-kidney interactions across several interfaces. These include the hemodynamic cross-talk between the failing heart and the response of the kidneys and vice versa, as well as alterations in neurohormonal markers and inflammatory molecular signatures characteristic of its clinical phenotypes. The mission of this scientific statement is to describe the epidemiology and pathogenesis of cardiorenal syndrome in the context of the continuously evolving nature of its clinicopathological description over the past decade. It also describes diagnostic and therapeutic strategies applicable to cardiorenal syndrome, summarizes cardiac-kidney interactions in special populations such as patients with diabetes mellitus and kidney transplant recipients, and emphasizes the role of palliative care in patients with cardiorenal syndrome. Finally, it outlines the need for a cardiorenal education track that will guide future cardiorenal trials and integrate the clinical and research needs of this important field in the future.

Journal ArticleDOI
TL;DR: A new difference analog of the Caputo fractional derivative (called the L2-1σ formula) is constructed and some difference schemes generating approximations of the second and fourth order in space and the second order in time for the time fractional diffusion equation with variable coefficients are considered.
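For reference, the operator being discretized is the Caputo fractional derivative of order α ∈ (0, 1):

```latex
% Caputo fractional derivative that the L2-1(sigma) formula approximates:
{}^{C}\!D_{0t}^{\alpha} u(t)
  = \frac{1}{\Gamma(1 - \alpha)} \int_0^t \frac{u'(s)}{(t - s)^{\alpha}}\, ds,
\qquad 0 < \alpha < 1 .
```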

Journal ArticleDOI
TL;DR: A fundamental continuum model for TBG is reported which features not just the vanishing of the Fermi velocity, but also the perfect flattening of the entire lowest band.
Abstract: Twisted bilayer graphene (TBG) was recently shown to host superconductivity when tuned to special "magic angles" at which isolated and relatively flat bands appear. However, until now the origin of the magic angles and their irregular pattern have remained a mystery. Here we report on a fundamental continuum model for TBG which features not just the vanishing of the Fermi velocity, but also the perfect flattening of the entire lowest band. When parametrized in terms of α∼1/θ, the magic angles recur with a remarkable periodicity of Δα≃3/2. We show analytically that the exactly flat band wave functions can be constructed from the doubly periodic functions composed of ratios of theta functions-reminiscent of quantum Hall wave functions on the torus. We further report on the unusual robustness of the experimentally relevant first magic angle, address its properties analytically, and discuss how lattice relaxation effects help justify our model parameters.

Journal ArticleDOI
TL;DR: This colloquium highlights the importance of pattern formation and collective behavior for the promotion of cooperation under adverse conditions, as well as the synergies between network science and evolutionary game theory.
Abstract: Networks form the backbone of many complex systems, ranging from the Internet to human societies. Accordingly, not only is the range of our interactions limited and thus best described and modeled by networks, it is also a fact that the networks that are an integral part of such models are often interdependent or even interconnected. Networks of networks or multilayer networks are therefore a more apt description of social systems. This colloquium is devoted to evolutionary games on multilayer networks, and in particular to the evolution of cooperation as one of the main pillars of modern human societies. We first give an overview of the most significant conceptual differences between single-layer and multilayer networks, and we provide basic definitions and a classification of the most commonly used terms. Subsequently, we review fascinating and counterintuitive evolutionary outcomes that emerge due to different types of interdependencies between otherwise independent populations. The focus is on coupling through the utilities of players, through the flow of information, as well as through the popularity of different strategies on different network layers. The colloquium highlights the importance of pattern formation and collective behavior for the promotion of cooperation under adverse conditions, as well as the synergies between network science and evolutionary game theory.

Journal ArticleDOI
TL;DR: In this article, the authors developed an accurate, physically interpretable, and one-dimensional tolerance factor, τ, that correctly predicts 92% of compounds as perovskite or nonperovskiy for an experimental dataset of 576 ABX 3 materials.
Abstract: Predicting the stability of the perovskite structure remains a long-standing challenge for the discovery of new functional materials for many applications including photovoltaics and electrocatalysts. We developed an accurate, physically interpretable, and one-dimensional tolerance factor, τ, that correctly predicts 92% of compounds as perovskite or nonperovskite for an experimental dataset of 576 ABX3 materials (X = O2−, F−, Cl−, Br−, I−) using a novel data analytics approach based on SISSO (sure independence screening and sparsifying operator). τ is shown to generalize outside the training set for 1034 experimentally realized single and double perovskites (91% accuracy) and is applied to identify 23,314 new double perovskites (A2BB′X6) ranked by their probability of being stable as perovskite. This work guides experimentalists and theorists toward which perovskites are most likely to be successfully synthesized and demonstrates an approach to descriptor identification that can be extended to arbitrary applications beyond perovskite stability predictions.
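For context, the classical one-dimensional descriptor that τ is designed to improve upon is the Goldschmidt tolerance factor t. A quick computation for a textbook perovskite; the Shannon ionic radii below are standard values supplied here for illustration, not data from the paper:

```python
# Classical Goldschmidt tolerance factor t, the baseline descriptor for
# perovskite stability (ionic radii in angstroms, Shannon values assumed).
from math import sqrt

r_A, r_B, r_X = 1.44, 0.605, 1.40      # Sr(2+, XII), Ti(4+, VI), O(2-) for SrTiO3

t = (r_A + r_X) / (sqrt(2) * (r_B + r_X))
print(f"SrTiO3: t = {t:.3f}")          # ~1.00; t near 1 suggests a stable perovskite
```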

Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this paper, the Encoder-Recurrent-Decoder (ERD) model is proposed for recognition and prediction of human body pose in videos and motion capture, which is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers.
Abstract: We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and prediction of human body pose in videos and motion capture. The ERD model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers. We test instantiations of ERD architectures in the tasks of motion capture (mocap) generation, body pose labeling and body pose forecasting in videos. Our model handles mocap training data across multiple subjects and activity domains, and synthesizes novel motions while avoiding drifting for long periods of time. For human pose labeling, ERD outperforms a per frame body part detector by resolving left-right body part confusions. For video pose forecasting, ERD predicts body joint displacements across a temporal horizon of 400ms and outperforms a first order motion model based on optical flow. ERDs extend previous Long Short Term Memory (LSTM) models in the literature to jointly learn representations and their dynamics. Our experiments show such representation learning is crucial for both labeling and prediction in space-time. We find this is a distinguishing feature between the spatio-temporal visual domain in comparison to 1D text, speech or handwriting, where straightforward hard coded representations have shown excellent results when directly combined with recurrent units [31].
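A minimal PyTorch sketch of the encoder-recurrent-decoder layout; the layer sizes and depths are illustrative, not the paper's configuration:

```python
# Encoder-Recurrent-Decoder sketch: feedforward encoder/decoder around an LSTM.
import torch
import torch.nn as nn

class ERD(nn.Module):
    def __init__(self, pose_dim=54, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.recurrent = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, pose_dim))

    def forward(self, poses):                 # poses: (batch, time, pose_dim)
        h, _ = self.recurrent(self.encoder(poses))
        return self.decoder(h)                # per-step output (e.g. next pose)

model = ERD()
out = model(torch.randn(8, 100, 54))          # 8 sequences of 100 frames
print(out.shape)                              # torch.Size([8, 100, 54])
```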

Journal ArticleDOI
03 Jun 2016
TL;DR: The Global Trade Analysis Project (GTAP) Data Base as discussed by the authors is a set of accounts measuring the value of annual flows of goods and services with regional and sectoral detail for the entire world economy.
Abstract: This paper provides an overview of the Global Trade Analysis Project (GTAP) Data Base and its latest release, version 9. The GTAP Data Base has been used in thousands of economy-wide analyses over the past twenty-five years. While initially focused on supporting trade policy analysis, the addition of satellite accounts pertaining to greenhouse gas emissions and land use has resulted in a surge of applications relating to climate change as well as other environmental issues. The Data Base comprises an exhaustive set of accounts measuring the value of annual flows of goods and services with regional and sectoral detail for the entire world economy. These flows include bilateral trade, transport, and protection matrices that link individual country/regional economic datasets. Version 9 disaggregates 140 regions, 57 sectors, 8 factors of production, for 3 base years (2004, 2007 and 2011). The great success enjoyed by this Data Base stems from the collaboration efforts by many parties interested in improving the quality of economic analysis of global policy issues related to trade, economic development, energy and the environment.

Journal ArticleDOI
Louis K. Scheffer1, C. Shan Xu1, Michał Januszewski2, Zhiyuan Lu1, Zhiyuan Lu3, Shin-ya Takemura1, Kenneth J. Hayworth1, Gary B. Huang1, Kazunori Shinomiya1, Jeremy Maitlin-Shepard2, Stuart Berg1, Jody Clements1, Philip M Hubbard1, William T. Katz1, Lowell Umayam1, Ting Zhao1, David G. Ackerman1, Tim Blakely2, John A. Bogovic1, Tom Dolafi1, Dagmar Kainmueller1, Takashi Kawase1, Khaled Khairy1, Laramie Leavitt2, Peter H. Li2, Larry Lindsey2, Nicole Neubarth1, Donald J. Olbris1, Hideo Otsuna1, Eric T. Trautman1, Masayoshi Ito1, Masayoshi Ito4, Alexander Shakeel Bates5, Jens Goldammer6, Jens Goldammer1, Tanya Wolff1, Robert Svirskas1, Philipp Schlegel5, Erika Neace1, Christopher J Knecht1, Chelsea X Alvarado1, Dennis A Bailey1, Samantha Ballinger1, Jolanta A. Borycz3, Brandon S Canino1, Natasha Cheatham1, Michael A Cook1, Marisa Dreher1, Octave Duclos1, Bryon Eubanks1, Kelli Fairbanks1, Samantha Finley1, Nora Forknall1, Audrey Francis1, Gary Patrick Hopkins1, Emily M Joyce1, SungJin Kim1, Nicole A Kirk1, Julie Kovalyak1, Shirley Lauchie1, Alanna Lohff1, Charli Maldonado1, Emily A Manley1, Sari McLin3, Caroline Mooney1, Miatta Ndama1, Omotara Ogundeyi1, Nneoma Okeoma1, Christopher Ordish1, Nicholas Padilla1, Christopher Patrick1, Tyler Paterson1, Elliott E Phillips1, Emily M Phillips1, Neha Rampally1, Caitlin Ribeiro1, Madelaine K Robertson3, Jon Thomson Rymer1, Sean M Ryan1, Megan Sammons1, Anne K Scott1, Ashley L Scott1, Aya Shinomiya1, Claire Smith1, Kelsey Smith1, Natalie L Smith1, Margaret A Sobeski1, Alia Suleiman1, Jackie Swift1, Satoko Takemura1, Iris Talebi1, Dorota Tarnogorska3, Emily Tenshaw1, Temour Tokhi1, John J. Walsh1, Tansy Yang1, Jane Anne Horne3, Feng Li1, Ruchi Parekh1, Patricia K. Rivlin1, Vivek Jayaraman1, Marta Costa7, Gregory S.X.E. Jefferis5, Gregory S.X.E. Jefferis7, Kei Ito4, Kei Ito6, Kei Ito1, Stephan Saalfeld1, Reed A. George1, Ian A. Meinertzhagen1, Ian A. Meinertzhagen3, Gerald M. Rubin1, Harald F. Hess1, Viren Jain2, Stephen M. Plaza1 
07 Sep 2020-eLife
TL;DR: Improved methods are summarized and the circuitry of a large fraction of the brain of the fruit fly Drosophila melanogaster is presented, reducing the effort needed to answer circuit questions and providing procedures linking the neurons defined by the analysis with genetic reagents.
Abstract: Animal brains of all sizes, from the smallest to the largest, work in broadly similar ways. Studying the brain of any one animal in depth can thus reveal the general principles behind the workings of all brains. The fruit fly Drosophila is a popular choice for such research. With about 100,000 neurons – compared to some 86 billion in humans – the fly brain is small enough to study at the level of individual cells. But it nevertheless supports a range of complex behaviors, including navigation, courtship and learning. Thanks to decades of research, scientists now have a good understanding of which parts of the fruit fly brain support particular behaviors. But exactly how they do this is often unclear. This is because previous studies showing the connections between cells only covered small areas of the brain. This is like trying to understand a novel when all you can see is a few isolated paragraphs. To solve this problem, Scheffer, Xu, Januszewski, Lu, Takemura, Hayworth, Huang, Shinomiya et al. prepared the first complete map of the entire central region of the fruit fly brain. The central brain consists of approximately 25,000 neurons and around 20 million connections. To prepare the map – or connectome – the brain was cut into very thin 8nm slices and photographed with an electron microscope. A three-dimensional map of the neurons and connections in the brain was then reconstructed from these images using machine learning algorithms. Finally, Scheffer et al. used the new connectome to obtain further insights into the circuits that support specific fruit fly behaviors. The central brain connectome is freely available online for anyone to access. When used in combination with existing methods, the map will make it easier to understand how the fly brain works, and how and why it can fail to work correctly. Many of these findings will likely apply to larger brains, including our own. In the long run, studying the fly connectome may therefore lead to a better understanding of the human brain and its disorders. Performing a similar analysis on the brain of a small mammal, by scaling up the methods here, will be a likely next step along this path.

Journal ArticleDOI
TL;DR: A class of second order approximations, called the weighted and shifted Grünwald difference (WSGD) operators, is proposed for Riemann-Liouville fractional derivatives, with their effective applications to numerically solving space fractional diffusion equations in one and two dimensions.
Abstract: A class of second order approximations, called the weighted and shifted Grünwald difference (WSGD) operators, is proposed for Riemann-Liouville fractional derivatives, with their effective applications to numerically solving space fractional diffusion equations in one and two dimensions. The stability and convergence of our difference schemes for space fractional diffusion equations with constant coefficients in one and two dimensions are theoretically established. Several numerical examples are implemented to test the efficiency of the numerical schemes and confirm the convergence order, and the numerical results for the variable coefficients problem are also presented.
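For reference, a shifted Grünwald approximation and the weighted combination that raises it to second order can be written as follows; this is stated from the standard construction, and the normalization details should be checked against the paper:

```latex
% Shifted Grunwald operator with integer shift p (first-order accurate):
A_{h,p}^{\alpha} u(x) = \frac{1}{h^{\alpha}} \sum_{k=0}^{\infty}
    g_k^{(\alpha)}\, u\bigl(x - (k - p)h\bigr),
\qquad g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k}.
% WSGD: weighting two shifts (p, q) cancels the O(h) error term, giving a
% second-order approximation to the Riemann-Liouville derivative:
{}^{L}D_h^{\alpha} u = \lambda_1 A_{h,p}^{\alpha} u + \lambda_2 A_{h,q}^{\alpha} u,
\qquad \lambda_1 = \frac{\alpha - 2q}{2(p - q)},
\quad  \lambda_2 = \frac{2p - \alpha}{2(p - q)}.
```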