Journal ArticleDOI
18 Nov 2016-Science
TL;DR: This device architecture and materials set will enable “all-perovskite” thin-film solar cells to reach the highest efficiencies in the long term at the lowest costs.
Abstract: The ready processability of organic-inorganic perovskite materials for solar cells should enable the fabrication of tandem solar cells, in which the top layer is tuned to absorb shorter wavelengths and the lower layer to absorb the remaining longer-wavelength light. The difficulty in making an all-perovskite cell is finding a material that absorbs the red end of the spectrum. Eperon et al. developed an infrared-absorbing mixed tin-lead material that can deliver 14.8% efficiency on its own and 20.3% efficiency in a four-terminal tandem cell. Science, this issue p. 861; DOI: 10.1126/science.aaf9717

1,089 citations


Journal ArticleDOI
11 Aug 2016
TL;DR: PCOS can impact women’s reproductive health, leading to anovulatory infertility and a higher rate of early pregnancy loss, and the risks of diabetes, cardiovascular disease, hypertension, metabolic syndrome, and endometrial cancer among PCOS patients are significantly increased.
Abstract: Polycystic ovary syndrome (PCOS) is characterized by a constellation of clinical symptoms that include irregular menses due to chronic oligo-ovulation, phenotypic features of hyperandrogenism, and obesity. The term “polycystic ovary” refers to ovarian morphology with increased ovarian stroma and a ring of cortical follicles. Core biochemical features include hyperandrogenism and insulin resistance. The pathogenesis of PCOS remains a topic of debate. Treatment of PCOS typically focuses on mitigating the impact of hyperandrogenism, insulin resistance, and chronic oligo-ovulation, and on restoring fertility when desired.

1,089 citations


Journal ArticleDOI
TL;DR: This critical review surveys the potential of lignocellulosic biomass as an alternative platform to fossil resources, mapping the value-added compounds that can be derived from it and their synthetic polymers.
Abstract: The demand for petroleum dependent chemicals and materials has been increasing despite the dwindling of their fossil resources. As the dead-end of petroleum based industry has started to appear, today's modern society has to implement alternative energy and valuable chemical resources immediately. Owing to the importance of lignocellulosic biomass being the most abundant and bio-renewable biomass on earth, this critical review provides insights into the potential of lignocellulosic biomass as an alternative platform to fossil resources. In this context, over 200 value-added compounds, which can be derived from lignocellulosic biomass by various treatment methods, are presented with their references. Lignocellulosic biomass based polymers and their commercial importance are also reported mainly in the frame of these compounds. This review article aims to draw the map of lignocellulosic biomass derived chemicals and their synthetic polymers, and to reveal the scope of this map in today's modern chemical and polymer industry.

1,089 citations


Journal ArticleDOI
TL;DR: In this review, O’Neill and Pearce discuss recent intriguing findings on metabolic changes regulating the function of macrophages and dendritic cells.
Abstract: Recent studies on intracellular metabolism in dendritic cells (DCs) and macrophages provide new insights on the functioning of these critical controllers of innate and adaptive immunity. Both cell types undergo profound metabolic reprogramming in response to environmental cues, such as hypoxia or nutrient alterations, but importantly also in response to danger signals and cytokines. Metabolites such as succinate and citrate have a direct impact on the functioning of macrophages. The immunogenicity and tolerogenicity of DCs are also determined by anabolic and catabolic processes, respectively. These findings provide new prospects for therapeutic manipulation in inflammatory diseases and cancer.

1,089 citations


Journal ArticleDOI
TL;DR: The findings confirm the value of the entanglement conceptualization of the hierarchical BDAC model, which has both direct and indirect impacts on FPER, and confirm the strong mediating role of PODC in improving insights and enhancing FPER.

1,089 citations


Journal ArticleDOI
TL;DR: It is found that PUFA oxidation by lipoxygenases via a PHKG2-dependent iron pool is necessary for ferroptosis and that the covalent inhibition of the catalytic selenocysteine in GPX4 prevents elimination of PUFA hydroperoxides; these findings suggest new strategies for controlling ferroptosis in diverse contexts.
Abstract: Ferroptosis is a form of regulated nonapoptotic cell death that is involved in diverse disease contexts. Small molecules that inhibit glutathione peroxidase 4 (GPX4), a phospholipid peroxidase, cause lethal accumulation of lipid peroxides and induce ferroptotic cell death. Although ferroptosis has been suggested to involve accumulation of reactive oxygen species (ROS) in lipid environments, the mediators and substrates of ROS generation and the pharmacological mechanism of GPX4 inhibition that generates ROS in lipid environments are unknown. We report here the mechanism of lipid peroxidation during ferroptosis, which involves phosphorylase kinase G2 (PHKG2) regulation of iron availability to lipoxygenase enzymes, which in turn drive ferroptosis through peroxidation of polyunsaturated fatty acids (PUFAs) at the bis-allylic position; indeed, pretreating cells with PUFAs containing the heavy hydrogen isotope deuterium at the site of peroxidation (D-PUFA) prevented PUFA oxidation and blocked ferroptosis. We further found that ferroptosis inducers inhibit GPX4 by covalently targeting the active site selenocysteine, leading to accumulation of PUFA hydroperoxides. In summary, we found that PUFA oxidation by lipoxygenases via a PHKG2-dependent iron pool is necessary for ferroptosis and that the covalent inhibition of the catalytic selenocysteine in GPX4 prevents elimination of PUFA hydroperoxides; these findings suggest new strategies for controlling ferroptosis in diverse contexts.

1,089 citations


Journal ArticleDOI
TL;DR: This work provides an introduction to variational autoencoders, which provide a principled framework for learning deep latent-variable models and corresponding inference models, along with some important extensions.
Abstract: Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models. In this work, we provide an introduction to variational autoencoders and some important extensions.
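For orientation, the objective at the heart of this framework is the evidence lower bound (ELBO); this is standard VAE material rather than a quotation from the paper:

```latex
\log p_\theta(x) \;\ge\; \mathcal{L}(\theta,\phi;x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big]
  - D_{\mathrm{KL}}\!\big(q_\phi(z \mid x)\,\|\,p(z)\big)
```

The inference model q_φ(z|x) and the generative model p_θ(x|z) are trained jointly by maximizing the ELBO, with the expectation made differentiable via the reparameterization trick.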

1,089 citations


Journal ArticleDOI
TL;DR: This review discusses applications of this new breed of analysis approaches in regulatory genomics and cellular imaging, and provides background on what deep learning is and the settings in which it can be successfully applied to derive biological insights.
Abstract: Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in biological data dimension and acquisition rate is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets for finding hidden structure within them, and for making accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background on what deep learning is and the settings in which it can be successfully applied to derive biological insights. In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists on when and how to make the most of this new technology.

1,088 citations


Journal ArticleDOI
TL;DR: It is demonstrated that loss of PTEN in tumor cells in preclinical models of melanoma inhibits T cell-mediated tumor killing and decreases T-cell trafficking into tumors, and support the rationale to explore combinations of immunotherapies and PI3K-AKT pathway inhibitors.
Abstract: T cell–mediated immunotherapies are promising cancer treatments. However, most patients still fail to respond to these therapies. The molecular determinants of immune resistance are poorly understood. We show that loss of PTEN in tumor cells in preclinical models of melanoma inhibits T cell–mediated tumor killing and decreases T-cell trafficking into tumors. In patients, PTEN loss correlates with decreased T-cell infiltration at tumor sites, reduced likelihood of successful T-cell expansion from resected tumors, and inferior outcomes with PD-1 inhibitor therapy. PTEN loss in tumor cells increased the expression of immunosuppressive cytokines, resulting in decreased T-cell infiltration in tumors, and inhibited autophagy, which decreased T cell–mediated cell death. Treatment with a selective PI3Kβ inhibitor improved the efficacy of both anti–PD-1 and anti–CTLA-4 antibodies in murine models. Together, these findings demonstrate that PTEN loss promotes immune resistance and support the rationale to explore combinations of immunotherapies and PI3K–AKT pathway inhibitors. Significance: This study adds to the growing evidence that oncogenic pathways in tumors can promote resistance to the antitumor immune response. As PTEN loss and PI3K–AKT pathway activation occur in multiple tumor types, the results support the rationale to further evaluate combinatorial strategies targeting the PI3K–AKT pathway to increase the efficacy of immunotherapy. Cancer Discov; 6(2); 202–16. ©2015 AACR. See related commentary by Rizvi and Chan, p. 128. This article is highlighted in the In This Issue feature, p. 109.

1,088 citations


Journal ArticleDOI
TL;DR: An overview of inflammation in AD is provided, and a number of microglia‐related signaling mechanisms that have been implicated in AD are reviewed in detail.

1,088 citations


Posted Content
TL;DR: A new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) is developed that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation.
Abstract: In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in the Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach.
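For context, the V-trace target referred to above is an n-step value estimate built from truncated importance ratios; sketched here as we recall it from the IMPALA paper (with λ-style discounting omitted):

```latex
v_s = V(x_s) + \sum_{t=s}^{s+n-1} \gamma^{\,t-s}
      \Big(\textstyle\prod_{i=s}^{t-1} c_i\Big)\, \delta_t V,
\qquad
\delta_t V = \rho_t \big(r_t + \gamma V(x_{t+1}) - V(x_t)\big)
```

where ρ_t = min(ρ̄, π(a_t|x_t)/μ(a_t|x_t)) and c_i = min(c̄, π(a_i|x_i)/μ(a_i|x_i)) are importance ratios between the learner policy π and the behavior policy μ, clipped at thresholds ρ̄ and c̄.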

Journal ArticleDOI
TL;DR: LDpred is introduced, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel, and outperforms the approach of pruning followed by thresholding, particularly at large sample sizes.
Abstract: Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R² increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase.
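To make "posterior mean effect size" concrete: in the infinitesimal special case (LDpred-inf) the posterior mean has a closed form, while the point-normal prior of the full method is handled by Markov chain Monte Carlo. The closed form, written from the paper's setup as we recall it:

```latex
\mathbb{E}\big[\,\beta \mid \hat{\beta}, D\,\big] \;\approx\;
\Big(\frac{M}{N h^2}\, I + D\Big)^{-1} \hat{\beta}
```

Here D is the LD (correlation) matrix from the reference panel, \hat{\beta} the vector of marginal effect estimates, M the number of markers, N the GWAS sample size, and h² the heritability.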

Journal ArticleDOI
TL;DR: This review highlights aspects of cell biology in pattern-recognition receptor signaling by focusing on signals that originate from the cell surface, from endosomal compartments, and from within the cytosol.
Abstract: Receptors of the innate immune system detect conserved determinants of microbial and viral origin. Activation of these receptors initiates signaling events that culminate in an effective immune response. Recently, the view that innate immune signaling events rely on and operate within a complex cellular infrastructure has become an important framework for understanding the regulation of innate immunity. Compartmentalization within this infrastructure provides the cell with the ability to assign spatial information to microbial detection and regulate immune responses. Several cell biological processes play a role in the regulation of innate signaling responses; at the same time, innate signaling can engage cellular processes as a form of defense or to promote immunological memory. In this review, we highlight these aspects of cell biology in pattern-recognition receptor signaling by focusing on signals that originate from the cell surface, from endosomal compartments, and from within the cytosol.

Proceedings ArticleDOI
05 Jan 2016
TL;DR: A novel end-to-end neural model to extract entities and relations between them and compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8).
Abstract: We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN-based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.

Journal ArticleDOI
TL;DR: CVD is a major cause of mortality among people with T2DM, accounting for approximately half of all deaths over the study period, and overall CVD affects approximately 32.2% of all persons with T2DM.
Abstract: Cardiovascular disease (CVD) is a common comorbidity in type 2 diabetes (T2DM), and its prevalence has been growing over time. Our aim was to estimate the current prevalence of CVD among adults with T2DM by reviewing literature published within the last 10 years (2007–March 2017). We searched Medline, Embase, and proceedings of major scientific meetings for original research documenting the prevalence of CVD in T2DM. CVD included stroke, myocardial infarction, angina pectoris, heart failure, ischemic heart disease, cardiovascular disease, coronary heart disease, atherosclerosis, and cardiovascular death. No restrictions were placed on country of origin or publication language. Two reviewers independently searched for articles and extracted data, adjudicating results through consensus. Data were summarized descriptively. Risk of bias was examined by applying the STROBE checklist. We analyzed data from 57 articles with 4,549,481 persons having T2DM. Europe produced the most articles (46%), followed by the Western Pacific/China (21%), and North America (13%). Overall, among the 4,549,481 persons with T2DM, 52.0% were male, 47.0% were obese, mean age was 63.6 ± 6.9 years, and mean T2DM duration was 10.4 ± 3.7 years. CVD affected 32.2% overall (53 studies, N = 4,289,140); 29.1% had atherosclerosis (4 studies, N = 1153), 21.2% had coronary heart disease (42 articles, N = 3,833,200), 14.9% heart failure (14 studies, N = 601,154), 14.6% angina (4 studies, N = 354,743), 10.0% myocardial infarction (13 studies, N = 3,518,833) and 7.6% stroke (39 studies, N = 3,901,505). CVD was the cause of death in 9.9% of T2DM patients (representing 50.3% of all deaths). Risk of bias was low; 80 ± 12% of STROBE checklist items were adequately addressed. Globally, overall CVD affects approximately 32.2% of all persons with T2DM. CVD is a major cause of mortality among people with T2DM, accounting for approximately half of all deaths over the study period. Coronary artery disease and stroke were the major contributors.

Journal ArticleDOI
TL;DR: The international guideline for the assessment and management of PCOS provides clinicians with clear advice on best practice based on the best available evidence, expert multidisciplinary input and consumer preferences to promote consistent, evidence-based care and improve the experience and health outcomes of women with PCOS.
Abstract: Study Question What is the recommended assessment and management of women with polycystic ovary syndrome (PCOS), based on the best available evidence, clinical expertise, and consumer preference? Summary Answer International evidence-based guidelines, including 166 recommendations and practice points, addressed prioritized questions to promote consistent, evidence-based care and improve the experience and health outcomes of women with PCOS. What Is Known Already Previous guidelines either lacked rigorous evidence-based processes, did not engage consumer and international multidisciplinary perspectives, or were outdated. Diagnosis of PCOS remains controversial and assessment and management are inconsistent. The needs of women with PCOS are not being adequately met and evidence practice gaps persist. Study Design, Size, Duration International evidence-based guideline development engaged professional societies and consumer organizations with multidisciplinary experts and women with PCOS directly involved at all stages. Appraisal of Guidelines for Research and Evaluation (AGREE) II-compliant processes were followed, with extensive evidence synthesis. The Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) framework was applied across evidence quality, feasibility, acceptability, cost, implementation and ultimately recommendation strength. Participants/Materials, Setting, Methods Governance included a six-continent international advisory board and a project board, five guideline development groups, and consumer and translation committees. Extensive health professional and consumer engagement informed guideline scope and priorities. Engaged international society-nominated panels included pediatrics, endocrinology, gynecology, primary care, reproductive endocrinology, obstetrics, psychiatry, psychology, dietetics, exercise physiology, public health and other experts, alongside consumers, project management, evidence synthesis, and translation experts. Thirty-seven societies and organizations covering 71 countries engaged in the process. Twenty face-to-face meetings over 15 months addressed 60 prioritized clinical questions involving 40 systematic and 20 narrative reviews. Evidence-based recommendations were developed and approved via consensus voting within the five guideline panels, modified based on international feedback and peer review, with final recommendations approved across all panels. Main Results and the Role of Chance The evidence in the assessment and management of PCOS is generally of low to moderate quality. The guideline provides 31 evidence-based recommendations, 59 clinical consensus recommendations and 76 clinical practice points, all related to assessment and management of PCOS. Key changes in this guideline include: i) considerable refinement of individual diagnostic criteria with a focus on improving accuracy of diagnosis; ii) reducing unnecessary testing; iii) increasing focus on education, lifestyle modification, emotional wellbeing and quality of life; and iv) emphasizing evidence-based medical therapy and cheaper and safer fertility management. Limitations, Reasons for Caution Overall evidence is generally low to moderate quality, requiring significantly greater research in this neglected, yet common condition, especially around refining specific diagnostic features in PCOS. Regional health system variation is acknowledged and a process for guideline and translation resource adaptation is provided.
Wider Implications of the Findings The international guideline for the assessment and management of PCOS provides clinicians with clear advice on best practice based on the best available evidence, expert multidisciplinary input and consumer preferences. Research recommendations have been generated and a comprehensive multifaceted dissemination and translation program supports the guideline with an integrated evaluation program. Study Funding/Competing Interest(s) The guideline was primarily funded by the National Health and Medical Research Council of Australia (NHMRC), supported by a partnership with ESHRE and the American Society for Reproductive Medicine. Guideline development group members did not receive payment. Travel expenses were covered by the sponsoring organizations. Disclosures of conflicts of interest were declared at the outset and updated throughout the guideline process, aligned with NHMRC guideline processes. Full details of conflicts declared across the guideline development groups are available at https://www.monash.edu/medicine/sphpm/mchri/pcos/guideline in the Register of disclosures of interest. Of named authors, Dr Costello has declared shares in Virtus Health and past sponsorship from Merck Serono for conference presentations. Prof. Laven declared grants from Ferring, Euroscreen and personal fees from Ferring, Euroscreen, Danone and Titus Healthcare. Prof. Norman has declared a minor shareholder interest in an IVF unit. The remaining authors have no conflicts of interest to declare. The guideline was peer reviewed by special interest groups across our partner and collaborating societies and consumer organizations, was independently assessed against AGREE II criteria and underwent methodological review. This guideline was approved by all members of the guideline development groups and was submitted for final approval by the NHMRC.

Journal ArticleDOI
TL;DR: The results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.
Abstract: Advanced age-related macular degeneration (AMD) is the leading cause of blindness in the elderly, with limited therapeutic options. Here we report on a study of >12 million variants, including 163,714 directly genotyped, mostly rare, protein-altering variants. Analyzing 16,144 patients and 17,832 controls, we identify 52 independently associated common and rare variants (P < 5 × 10⁻⁸) distributed across 34 loci. Although wet and dry AMD subtypes exhibit predominantly shared genetics, we identify the first genetic association signal specific to wet AMD, near MMP9 (difference P value = 4.1 × 10⁻¹⁰). Very rare coding variants (frequency <0.1%) in CFH, CFI and TIMP3 suggest causal roles for these genes, as does a splice variant in SLC16A8. Our results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes.

Posted Content
TL;DR: In this paper, the authors present a simple baseline that utilizes probabilities from softmax distributions for detecting misclassified or out-of-distribution examples, and assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition.
Abstract: We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.
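A minimal sketch of the baseline follows; the arrays and the 0.75 threshold are illustrative inventions, not values from the paper:

```python
import numpy as np

def max_softmax_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability (MSP) per example; lower values tend
    to indicate misclassified or out-of-distribution inputs."""
    z = logits - logits.max(axis=1, keepdims=True)      # stabilize exp
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

# Flag inputs whose MSP falls below a threshold tuned on held-out
# in-distribution data (the 0.75 here is purely illustrative).
logits = np.array([[4.0, 0.5, -1.0], [0.4, 0.3, 0.2]])
suspect = max_softmax_score(logits) < 0.75              # -> [False, True]
```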

Proceedings ArticleDOI
18 Jun 2018
TL;DR: The experimental results show that the MIL method for anomaly detection achieves significant improvement on anomaly detection performance as compared to the state-of-the-art approaches, and the results of several recent deep learning baselines on anomalous activity recognition are provided.
Abstract: Surveillance videos are able to capture a variety of realistic anomalies. In this paper, we propose to learn anomalies by exploiting both normal and anomalous videos. To avoid annotating the anomalous segments or clips in training videos, which is very time consuming, we propose to learn anomaly through the deep multiple instance ranking framework by leveraging weakly labeled training videos, i.e. the training labels (anomalous or normal) are at video-level instead of clip-level. In our approach, we consider normal and anomalous videos as bags and video segments as instances in multiple instance learning (MIL), and automatically learn a deep anomaly ranking model that predicts high anomaly scores for anomalous video segments. Furthermore, we introduce sparsity and temporal smoothness constraints in the ranking loss function to better localize anomaly during training. We also introduce a new large-scale first of its kind dataset of 128 hours of videos. It consists of 1900 long and untrimmed real-world surveillance videos, with 13 realistic anomalies such as fighting, road accident, burglary, robbery, etc. as well as normal activities. This dataset can be used for two tasks. First, general anomaly detection considering all anomalies in one group and all normal activities in another group. Second, for recognizing each of 13 anomalous activities. Our experimental results show that our MIL method for anomaly detection achieves significant improvement on anomaly detection performance as compared to the state-of-the-art approaches. We provide the results of several recent deep learning baselines on anomalous activity recognition. The low recognition performance of these baselines reveals that our dataset is very challenging and opens more opportunities for future work. The dataset is available at: http://crcv.ucf.edu/projects/real-world/
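A sketch of the ranking objective described above, assuming per-segment scores from the ranking model; the regularization weights are placeholders rather than the paper's tuned values:

```python
import numpy as np

def mil_ranking_loss(anom_scores: np.ndarray, norm_scores: np.ndarray,
                     lam1: float = 1e-4, lam2: float = 1e-4) -> float:
    """Hinge loss pushing the top-scoring segment of an anomalous bag
    above the top-scoring segment of a normal bag, plus temporal
    smoothness and sparsity terms on the anomalous bag."""
    hinge = max(0.0, 1.0 - anom_scores.max() + norm_scores.max())
    smooth = np.sum(np.diff(anom_scores) ** 2)  # neighboring segments vary slowly
    sparse = np.sum(anom_scores)                # few segments should score high
    return hinge + lam1 * smooth + lam2 * sparse

loss = mil_ranking_loss(np.array([0.1, 0.9, 0.2]), np.array([0.3, 0.2]))
```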

Journal ArticleDOI
25 Sep 2015-Science
TL;DR: The solution-phase growth of single- and few-unit-cell-thick single-crystalline 2D hybrid perovskites of (C4H9NH3)2PbBr4 with well-defined square shape and large size are reported.
Abstract: Organic-inorganic hybrid perovskites, which have proved to be promising semiconductor materials for photovoltaic applications, have been made into atomically thin two-dimensional (2D) sheets. We report the solution-phase growth of single- and few-unit-cell-thick single-crystalline 2D hybrid perovskites of (C4H9NH3)2PbBr4 with well-defined square shape and large size. In contrast to other 2D materials, the hybrid perovskite sheets exhibit an unusual structural relaxation, and this structural change leads to a band gap shift as compared to the bulk crystal. The high-quality 2D crystals exhibit efficient photoluminescence, and color tuning could be achieved by changing sheet thickness as well as composition via the synthesis of related materials.

Journal ArticleDOI
TL;DR: Laparoscopic surgery in patients with rectal cancer was associated with rates of locoregional recurrence and disease-free and overall survival similar to those for open surgery.
Abstract: Background Laparoscopic resection of colorectal cancer is widely used. However, robust evidence to conclude that laparoscopic surgery and open surgery have similar outcomes in rectal cancer is lacking. A trial was designed to compare 3-year rates of cancer recurrence in the pelvic or perineal area (locoregional recurrence) and survival after laparoscopic and open resection of rectal cancer. Methods In this international trial conducted in 30 hospitals, we randomly assigned patients with a solitary adenocarcinoma of the rectum within 15 cm of the anal verge, not invading adjacent tissues, and without distant metastases to undergo either laparoscopic or open surgery in a 2:1 ratio. The primary end point was locoregional recurrence 3 years after the index surgery. Secondary end points included disease-free and overall survival. Results A total of 1044 patients were included (699 in the laparoscopic-surgery group and 345 in the open-surgery group). At 3 years, the locoregional recurrence rate was 5.0% in the ...

Journal ArticleDOI
Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, Huaiyu Wan
17 Jul 2019
TL;DR: Experiments on two real-world datasets from the Caltrans Performance Measurement System demonstrate that the proposed ASTGCN model outperforms the state-of-the-art baselines.
Abstract: Forecasting traffic flows is a critical issue for researchers and practitioners in the field of transportation. However, it is very challenging since traffic flows usually show high nonlinearities and complex patterns. Most existing traffic flow prediction methods lack the ability to model the dynamic spatial-temporal correlations of traffic data and thus cannot yield satisfactory prediction results. In this paper, we propose a novel attention based spatial-temporal graph convolutional network (ASTGCN) model to solve the traffic flow forecasting problem. ASTGCN mainly consists of three independent components that respectively model three temporal properties of traffic flows, i.e., recent, daily-periodic and weekly-periodic dependencies. More specifically, each component contains two major parts: 1) the spatial-temporal attention mechanism to effectively capture the dynamic spatial-temporal correlations in traffic data; 2) the spatial-temporal convolution, which simultaneously employs graph convolutions to capture the spatial patterns and common standard convolutions to describe the temporal features. The outputs of the three components are fused with learned weights to generate the final prediction results. Experiments on two real-world datasets from the Caltrans Performance Measurement System (PeMS) demonstrate that the proposed ASTGCN model outperforms the state-of-the-art baselines.
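The final fusion step can be written compactly (our notation, with ⊙ the element-wise product and the W matrices learned during training):

```latex
\hat{Y} = W_h \odot \hat{Y}_h + W_d \odot \hat{Y}_d + W_w \odot \hat{Y}_w
```

where \hat{Y}_h, \hat{Y}_d, and \hat{Y}_w are the outputs of the recent, daily-periodic, and weekly-periodic components, respectively.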

Journal ArticleDOI
TL;DR: In a randomized, open-label, phase 3 trial involving hospitalized patients with confirmed SARS-CoV-2 infection, an oxygen saturation of 94% or less while breathing ambient air, and radiologic evidence of pneumonia, the magnitude of remdesivir's benefit in patients with severe Covid-19 not requiring mechanical ventilation cannot be determined.
Abstract: Background Remdesivir is an RNA polymerase inhibitor with potent antiviral activity in vitro and efficacy in animal models of coronavirus disease 2019 (Covid-19). Methods We conducted a ra...

Posted Content
TL;DR: The proposed DenseNets approach achieves state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining, and has far fewer parameters than currently published best entries for these datasets.
Abstract: State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions. Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train. In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining. Moreover, due to smart construction of the model, our approach has far fewer parameters than currently published best entries for these datasets. Code to reproduce the experiments is available here: this https URL
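The dense-connectivity pattern the paper builds on can be sketched in a few lines. In this toy version (our construction, with a random linear map standing in for convolution), each layer consumes the concatenation of all earlier feature maps:

```python
import numpy as np

def dense_block(x: np.ndarray, num_layers: int, growth: int) -> np.ndarray:
    """Toy dense block over feature vectors (channels last): layer l
    receives [x_0, ..., x_{l-1}] concatenated along the channel axis."""
    rng = np.random.default_rng(0)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)
        w = 0.01 * rng.standard_normal((inp.shape[-1], growth))
        features.append(np.maximum(inp @ w, 0.0))  # linear + ReLU stand-in for conv
    return np.concatenate(features, axis=-1)       # C_in + num_layers * growth channels

out = dense_block(np.ones((4, 16)), num_layers=3, growth=12)  # shape (4, 52)
```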

Book ChapterDOI
08 Sep 2018
TL;DR: This paper proposes AutoML for Model Compression (AMC), which leverages reinforcement learning to efficiently sample the design space and improve model compression quality, achieving state-of-the-art model compression results in a fully automated way without any human effort.
Abstract: Model compression is an effective technique to efficiently deploy neural network models on mobile devices, which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted features and require domain experts to explore the large design space, trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC), which leverages reinforcement learning to efficiently sample the design space and improve the model compression quality. We achieved state-of-the-art model compression results in a fully automated way without any human effort. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the hand-crafted model compression method for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet-V1 and achieved a speedup of 1.53× on the GPU (Titan Xp) and 1.95× on an Android phone (Google Pixel 1), with negligible loss of accuracy.

Book
04 Sep 2015
TL;DR: This paper reports on a comparative case study of 13 industrial firms that implemented an enterprise resource planning (ERP) system and finds that both strong core teams and carefully managed consulting relationships addressed configuration knowledge barriers.
Abstract: This paper reports on a comparative case study of 13 industrial firms that implemented an enterprise resource planning (ERP) system. It compares firms based on their dialectic learning process. All firms had to overcome knowledge barriers of two types: those associated with the configuration of the ERP package, and those associated with the assimilation of new work processes. We found that both strong core teams and carefully managed consulting relationships addressed configuration knowledge barriers. User training that included both technical and business processes, along with a phased implementation approach, helped firms to overcome assimilation knowledge barriers. However, all firms in this study experienced ongoing concerns with assimilation knowledge barriers, and we observed two different approaches to address them. In a piecemeal approach, firms concentrated on the technology first and deferred consideration of process changes. In a concerted approach, both the technology and process changes were undertaken together. Although most respondents clearly stated a preference for either piecemeal or concerted change, all firms engaged in practices that reflected a combination of these approaches.

Journal ArticleDOI
TL;DR: In this article, the authors quantify the dose-response association between leisure time physical activity and mortality and define the upper limit of benefit or harm associated with increased levels of physical activity.
Abstract: Importance The 2008 Physical Activity Guidelines for Americans recommended a minimum of 75 vigorous-intensity or 150 moderate-intensity minutes per week (7.5 metabolic-equivalent hours per week) of aerobic activity for substantial health benefit and suggested additional benefits by doing more than double this amount. However, the upper limit of longevity benefit or possible harm with more physical activity is unclear. Objective To quantify the dose-response association between leisure time physical activity and mortality and define the upper limit of benefit or harm associated with increased levels of physical activity. Design, Setting, and Participants We pooled data from 6 studies in the National Cancer Institute Cohort Consortium (baseline 1992-2003). Population-based prospective cohorts in the United States and Europe with self-reported physical activity were analyzed in 2014. A total of 661 137 men and women (median age, 62 years; range, 21-98 years) and 116 686 deaths were included. We used Cox proportional hazards regression with cohort stratification to generate multivariable-adjusted hazard ratios (HRs) and 95% CIs. Median follow-up time was 14.2 years. Exposures Leisure time moderate- to vigorous-intensity physical activity. Main Outcomes and Measures The upper limit of mortality benefit from high levels of leisure time physical activity. Results Compared with individuals reporting no leisure time physical activity, we observed a 20% lower mortality risk among those performing less than the recommended minimum of 7.5 metabolic-equivalent hours per week (HR, 0.80 [95% CI, 0.78-0.82]), a 31% lower risk at 1 to 2 times the recommended minimum (HR, 0.69 [95% CI, 0.67-0.70]), and a 37% lower risk at 2 to 3 times the minimum (HR, 0.63 [95% CI, 0.62-0.65]). An upper threshold for mortality benefit occurred at 3 to 5 times the physical activity recommendation (HR, 0.61 [95% CI, 0.59-0.62]); however, compared with the recommended minimum, the additional benefit was modest (31% vs 39%). There was no evidence of harm at 10 or more times the recommended minimum (HR, 0.69 [95% CI, 0.59-0.78]). A similar dose-response relationship was observed for mortality due to cardiovascular disease and to cancer. Conclusions and Relevance Meeting the 2008 Physical Activity Guidelines for Americans minimum by either moderate- or vigorous-intensity activities was associated with nearly the maximum longevity benefit. We observed a benefit threshold at approximately 3 to 5 times the recommended leisure time physical activity minimum and no excess risk at 10 or more times the minimum. In regard to mortality, health care professionals should encourage inactive adults to perform leisure time physical activity and do not need to discourage adults who already participate in high-activity levels.
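For readers unfamiliar with MET-hours, the 7.5 figure follows directly from the guideline minimums and the conventional intensity values of roughly 3 METs for moderate and 6 METs for vigorous activity; the arithmetic below is ours, not the paper's:

```latex
150\ \tfrac{\text{min}}{\text{wk}} \times \tfrac{1\ \text{h}}{60\ \text{min}} \times 3\ \text{METs}
  = 7.5\ \tfrac{\text{MET·h}}{\text{wk}},
\qquad
75 \times \tfrac{1}{60} \times 6 = 7.5\ \tfrac{\text{MET·h}}{\text{wk}}
```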

Journal ArticleDOI
TL;DR: In this article, a deep learning model based on Gated Recurrent Unit (GRU) is proposed to exploit the missing values and their missing patterns for effective imputation and improving prediction performance.
Abstract: Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
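A sketch of the decay mechanism, following the GRU-D formulation as we understand it (m_t is the masking vector, δ_t the time since each variable was last observed, x′_t the last observed value, and x̄ the empirical mean):

```latex
\gamma_t = \exp\!\big(-\max(0,\; W_\gamma \delta_t + b_\gamma)\big),
\qquad
\hat{x}_t = m_t \odot x_t + (1 - m_t) \odot \big(\gamma_t \odot x'_t + (1 - \gamma_t) \odot \bar{x}\big)
```

An unobserved input thus decays from its last measured value toward the variable's mean as the gap δ_t grows, and a similar learned decay is applied to the hidden state.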

Journal ArticleDOI
TL;DR: In this paper, the authors propose a simulator, called iFogSim, to model IoT and Fog environments and measure the impact of resource management techniques on latency, network congestion, energy consumption, and cost.
Abstract: Internet of Things (IoT) aims to bring every object (e.g., smart cameras, wearables, environmental sensors, home appliances, and vehicles) online, hence generating massive volumes of data that can overwhelm storage systems and data analytics applications. Cloud computing offers services at the infrastructure level that can scale to IoT storage and processing requirements. However, there are applications such as health monitoring and emergency response that require low latency, and the delay caused by transferring data to the cloud and then back to the application can seriously impact their performance. To overcome this limitation, the Fog computing paradigm has been proposed, where cloud services are extended to the edge of the network to decrease latency and network congestion. To realize the full potential of the Fog and IoT paradigms for real-time analytics, several challenges need to be addressed. The first and most critical problem is designing resource management techniques that determine which modules of analytics applications are pushed to each edge device to minimize latency and maximize throughput. To this end, we need an evaluation platform that enables the quantification of performance of resource management policies on an IoT or Fog computing infrastructure in a repeatable manner. In this paper we propose a simulator, called iFogSim, to model IoT and Fog environments and measure the impact of resource management techniques on latency, network congestion, energy consumption, and cost. We describe two case studies to demonstrate modeling of an IoT environment and comparison of resource management policies. Moreover, the scalability of the simulation toolkit in terms of RAM consumption and execution time is verified under different circumstances.

Proceedings ArticleDOI
Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, Alan L. Yuille
27 Jun 2016
TL;DR: An attention mechanism is proposed that learns to softly weight the multi-scale features at each pixel location; it not only outperforms average- and max-pooling but also allows diagnostic visualization of the importance of features at different positions and scales.
Abstract: Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but also allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.
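A minimal sketch of the per-pixel soft weighting over scales (our illustration; in the model the attention weights come from a learned branch, whereas random arrays stand in below):

```python
import numpy as np

def softmax(a: np.ndarray, axis: int = 0) -> np.ndarray:
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

S, C, H, W = 3, 21, 8, 8                 # scales, classes, height, width
rng = np.random.default_rng(0)
score_maps = rng.random((S, C, H, W))    # per-scale pixelwise score maps
attn_logits = rng.random((S, H, W))      # learned attention branch in the real model

weights = softmax(attn_logits, axis=0)   # sums to 1 across scales at each pixel
merged = (weights[:, None, :, :] * score_maps).sum(axis=0)   # (C, H, W)
```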