
Journal ArticleDOI
TL;DR: Both subjective and objective medication adherence measures are reviewed, including direct measures, those involving secondary database analysis, electronic medication packaging devices, pill count, and clinician assessments and self-report.
Abstract: WHO reported that adherence among patients with chronic diseases averages only 50% in developed countries. This is recognized as a significant public health issue, since medication nonadherence leads to poor health outcomes and increased healthcare costs. Improving medication adherence is therefore crucial, and many studies suggest that interventions can improve medication adherence. One significant aspect of strategies to improve medication adherence is to understand its magnitude. However, there is a lack of general guidance for researchers and healthcare professionals on choosing appropriate tools to explore the extent of medication adherence and the reasons behind this problem, in order to orchestrate subsequent interventions. This paper reviews both subjective and objective medication adherence measures, including direct measures, those involving secondary database analysis, electronic medication packaging (EMP) devices, pill count, and clinician assessments and self-report. Subjective measures generally provide explanations for patients' nonadherence, whereas objective measures contribute to a more precise record of patients' medication-taking behavior. In choosing a suitable approach, researchers and healthcare professionals should balance reliability against practicality, especially cost-effectiveness, for their purpose. Meanwhile, because a perfect measure does not exist, a multimeasure approach seems to be the best current solution.

812 citations


Journal ArticleDOI
25 Mar 2016-Science
TL;DR: To contribute data about replicability in economics, 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014 were replicated; 11 of the 18 studies (61%) yielded a significant effect in the same direction as the original, with replicated effect sizes averaging 66% of the originals.
Abstract: The replicability of some scientific findings has recently been called into question. To contribute data about replicability in economics, we replicated 18 studies published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. All of these replications followed predefined analysis plans that were made publicly available beforehand, and they all have a statistical power of at least 90% to detect the original effect size at the 5% significance level. We found a significant effect in the same direction as in the original study for 11 replications (61%); on average, the replicated effect size is 66% of the original. The replicability rate varies between 67% and 78% for four additional replicability indicators, including a prediction market measure of peer beliefs.

811 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a new analysis of global process emissions from cement production and show that global process CO2 emissions in 2016 were 1.45±0.20 Gt CO2, equivalent to about 4% of emissions from fossil fuels.
Abstract: . The global production of cement has grown very rapidly in recent years, and after fossil fuels and land-use change, it is the third-largest source of anthropogenic emissions of carbon dioxide. The required data for estimating emissions from global cement production are poor, and it has been recognised that some global estimates are significantly inflated. Here we assemble a large variety of available datasets and prioritise official data and emission factors, including estimates submitted to the UNFCCC plus new estimates for China and India, to present a new analysis of global process emissions from cement production. We show that global process emissions in 2016 were 1.45±0.20 Gt CO2, equivalent to about 4 % of emissions from fossil fuels. Cumulative emissions from 1928 to 2016 were 39.3±2.4 Gt CO2, 66 % of which have occurred since 1990. Emissions in 2015 were 30 % lower than those recently reported by the Global Carbon Project. The data associated with this article can be found at https://doi.org/10.5281/zenodo.831455 .

811 citations


Journal ArticleDOI
01 Jun 2018-Nature
TL;DR: This work focuses on the current understanding of tree hydraulic performance under drought, the identification of physiological thresholds that precipitate mortality and the mechanisms of recovery after drought, and the potential application of hydraulic thresholds to process-based models that predict mortality.
Abstract: Severe droughts have caused widespread tree mortality across many forest biomes with profound effects on the function of ecosystems and carbon balance. Climate change is expected to intensify regional-scale droughts, focusing attention on the physiological basis of drought-induced tree mortality. Recent work has shown that catastrophic failure of the plant hydraulic system is a principal mechanism involved in extensive crown death and tree mortality during drought, but the multi-dimensional response of trees to desiccation is complex. Here we focus on the current understanding of tree hydraulic performance under drought, the identification of physiological thresholds that precipitate mortality and the mechanisms of recovery after drought. Building on this, we discuss the potential application of hydraulic thresholds to process-based models that predict mortality.

811 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: The proposed method performs on par with state-of-the-art region-based detection methods, with a bounding box AP of 43.7% on COCO test-dev; extreme point guided segmentation further improves the coarse octagonal mask to 34.6% Mask AP.
Abstract: With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State-of-the-art algorithms enumerate a near-exhaustive list of object locations and classify each as object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on par with state-of-the-art region-based detection methods, with a bounding box AP of 43.7% on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9%, much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6% Mask AP.
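
As a rough illustration of the grouping step this abstract describes, the sketch below checks whether four candidate extreme points span a geometrically consistent box whose center aligns with a peak in a center heatmap; the function name, threshold, and toy data are illustrative, not the authors' released code.

```python
import numpy as np

def group_extreme_points(top, left, bottom, right, center_heatmap, thresh=0.1):
    """Form a box from four extreme points and keep it only if its
    geometric center aligns with a peak in the center heatmap.

    top/left/bottom/right: (x, y) extreme-point coordinates.
    center_heatmap: 2D NumPy array of center scores in [0, 1].
    """
    x1, x2 = left[0], right[0]
    y1, y2 = top[1], bottom[1]
    if x2 <= x1 or y2 <= y1:
        return None  # geometrically inconsistent combination
    cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)  # center of spanned box
    if center_heatmap[cy, cx] >= thresh:  # the center point must fire too
        return (x1, y1, x2, y2, float(center_heatmap[cy, cx]))
    return None

# Toy example: a 7x7 center heatmap with one confident peak at (3, 3).
heat = np.zeros((7, 7))
heat[3, 3] = 0.9
print(group_extreme_points((3, 1), (1, 3), (3, 5), (5, 3), heat))
```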

811 citations


Journal ArticleDOI
TL;DR: The 1992 "World Scientists' Warning to Humanity" as mentioned in this paper warned that humans were on a collision course with the natural world and that fundamental changes were urgently needed to avoid the consequences our present course would bring.
Abstract: Twenty-five years ago, the Union of Concerned Scientists and more than 1700 independent scientists, including the majority of living Nobel laureates in the sciences, penned the 1992 "World Scientists’ Warning to Humanity" (see supplemental file S1). These concerned professionals called on humankind to curtail environmental destruction and cautioned that "a great change in our stewardship of the Earth and the life on it is required, if vast human misery is to be avoided." In their manifesto, they showed that humans were on a collision course with the natural world. They expressed concern about current, impending, or potential damage on planet Earth involving ozone depletion, freshwater availability, marine life depletion, ocean dead zones, forest loss, biodiversity destruction, climate change, and continued human population growth. They proclaimed that fundamental changes were urgently needed to avoid the consequences our present course would bring.

811 citations


Journal ArticleDOI
09 Feb 2018-Science
TL;DR: A comprehensive systems-level view of the neurobiological architecture of major neuropsychiatric illness demonstrates pathways of molecular convergence and specificity as well as a substantial causal genetic component.
Abstract: The predisposition to neuropsychiatric disease involves a complex, polygenic, and pleiotropic genetic architecture. However, little is known about how genetic variants impart brain dysfunction or pathology. We used transcriptomic profiling as a quantitative readout of molecular brain-based phenotypes across five major psychiatric disorders-autism, schizophrenia, bipolar disorder, depression, and alcoholism-compared with matched controls. We identified patterns of shared and distinct gene-expression perturbations across these conditions. The degree of sharing of transcriptional dysregulation is related to polygenic (single-nucleotide polymorphism-based) overlap across disorders, suggesting a substantial causal genetic component. This comprehensive systems-level view of the neurobiological architecture of major neuropsychiatric illness demonstrates pathways of molecular convergence and specificity.

811 citations


01 Jan 2016
TL;DR: Rodents evolve significantly faster than humans as discussed by the authors, and the ratio of the number of nucleotide substitutions in the rodent lineage to that in the human lineage since their divergence is 2.0 for synonymous substitutions and 1.3 for nonsynonymous substitutions.
Abstract: When the coding regions of 11 genes from rodents (mouse or rat) and man are compared with those from another mammalian species (usually bovine), it is found that rodents evolve significantly faster than man. The ratio of the number of nucleotide substitutions in the rodent lineage to that in the human lineage since their divergence is 2.0 for synonymous substitutions and 1.3 for nonsynonymous substitutions. Rodents also evolve faster in the 5' and 3' untranslated regions of five different mRNAs; the ratios are 2.6 and 3.1, respectively. The numbers of nucleotide substitutions between members of the β-globin gene family that were duplicated before the man-mouse split are also higher in mouse than in man. The difference is, again, greater for synonymous substitutions than for nonsynonymous substitutions. This tendency is more consistent with the neutralist view of molecular evolution than with the selectionist view. A simple explanation for the higher rates in rodents is that rodents have shorter generation times and, thus, higher mutation rates. The implication of our findings for the study of molecular phylogeny is discussed.

811 citations


Journal ArticleDOI
TL;DR: Plasma TMAO levels are both elevated in patients with CKD and portend poorer long-term survival, and chronic dietary exposures that increase TMAO directly contribute to progressive renal fibrosis and dysfunction in animal models.
Abstract: Rationale: Trimethylamine-N-oxide (TMAO), a gut microbial-dependent metabolite of dietary choline, phosphatidylcholine (lecithin), and L-carnitine, is elevated in chronic kidney disease (CKD) and associated with coronary artery disease pathogenesis. Objective: To investigate the clinical prognostic value of TMAO in subjects with versus without CKD, and to test the hypothesis that TMAO plays a direct contributory role in the development and progression of renal dysfunction. Methods and Results: We first examined the relationship between fasting plasma TMAO and all-cause mortality over 5-year follow-up in 521 stable subjects with CKD (estimated glomerular filtration rate <60 mL/min per 1.73 m²). Median TMAO level among CKD subjects was 7.9 μmol/L (interquartile range, 5.2-12.4 μmol/L), markedly higher than in non-CKD subjects, and elevated TMAO levels remained predictive of poorer 5-year survival after adjustments (P=0.036). Among non-CKD subjects, elevated TMAO levels portend poorer prognosis within cohorts of high and low cystatin C. In animal models, elevated dietary choline or TMAO directly led to progressive renal tubulointerstitial fibrosis and dysfunction. Conclusions: Plasma TMAO levels are both elevated in patients with CKD and portend poorer long-term survival, and chronic dietary exposures that increase TMAO directly contribute to progressive renal fibrosis and dysfunction in animal models.

811 citations


Journal ArticleDOI
TL;DR: The results support the feasibility of discovery-based approaches using next-generation sequencing technologies to identify signaling pathways for targeting in the development of personalized therapies for patients with pulmonary fibrosis.
Abstract: Rationale: The contributions of diverse cell populations in the human lung to pulmonary fibrosis pathogenesis are poorly understood. Single-cell RNA sequencing can reveal changes within individual cell populations during pulmonary fibrosis that are important for disease pathogenesis. Objectives: To determine whether single-cell RNA sequencing can reveal disease-related heterogeneity within alveolar macrophages, epithelial cells, or other cell types in lung tissue from subjects with pulmonary fibrosis compared with control subjects. Methods: We performed single-cell RNA sequencing on lung tissue obtained from eight transplant donors and eight recipients with pulmonary fibrosis and on one bronchoscopic cryobiopsy sample from a patient with idiopathic pulmonary fibrosis. We validated these data using in situ RNA hybridization, immunohistochemistry, and bulk RNA sequencing on flow-sorted cells from 22 additional subjects. Measurements and Main Results: We identified a distinct, novel population of profibrotic alveolar macrophages exclusively in patients with fibrosis. Within epithelial cells, the expression of genes involved in Wnt secretion and response was restricted to nonoverlapping cells. We identified rare cell populations including airway stem cells and senescent cells emerging during pulmonary fibrosis. We developed a web-based tool to explore these data. Conclusions: We generated a single-cell atlas of pulmonary fibrosis. Using this atlas, we demonstrated heterogeneity within alveolar macrophages and epithelial cells from subjects with pulmonary fibrosis. These results support the feasibility of discovery-based approaches using next-generation sequencing technologies to identify signaling pathways for targeting in the development of personalized therapies for patients with pulmonary fibrosis.

811 citations


Journal ArticleDOI
TL;DR: A review of the interpretability methods suggested by different research works, with a categorization of them, in the hope that insight into interpretability will develop with more consideration for medical practice and that initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged.
Abstract: Recently, artificial intelligence and machine learning in general have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with research progress, they have encroached upon many different fields and disciplines. Some of these, such as the medical sector, require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are therefore needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanism underlying the algorithms. Unfortunately, the black-box nature of DL is still unresolved, and many machine decisions are still poorly understood. We provide a review of the interpretabilities suggested by different research works and categorize them. The different categories show different dimensions in interpretability research, from approaches that provide "obviously" interpretable information to the studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can subsequently approach these methods with caution; 2) insight into interpretability will develop with more consideration for medical practice; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged.

Proceedings ArticleDOI
07 Dec 2015
TL;DR: An object detection system that relies on a multi-region deep convolutional neural network that also encodes semantic segmentation-aware features that aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization.
Abstract: We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it into an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2% and 73.9%, respectively, surpassing any other published work by a significant margin.
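
The iterative localization mechanism alternates between scoring and refining a box; the sketch below illustrates that loop under stated assumptions, with `score_box` and `regress_box` as hypothetical stand-ins for the paper's recognition module and CNN regression model, and an illustrative stopping rule.

```python
def iterative_localization(proposal, score_box, regress_box, n_iters=2):
    """Alternate between scoring a box proposal and refining its location.

    score_box(box) -> confidence from the recognition module (placeholder).
    regress_box(box) -> refined box from the regression model (placeholder).
    """
    box, score = proposal, score_box(proposal)
    for _ in range(n_iters):
        refined = regress_box(box)          # refine the box location
        refined_score = score_box(refined)  # re-score the refined box
        if refined_score <= score:
            break  # stop once refinement no longer improves the score
        box, score = refined, refined_score
    return box, score
```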

Journal ArticleDOI
TL;DR: It is suggested that the neoadjuvant administration of PD-1 blockade enhances both the local and systemic antitumor immune response and may represent a more efficacious approach to the treatment of this uniformly lethal brain tumor.
Abstract: Glioblastoma is the most common primary malignant brain tumor in adults and is associated with poor survival. The Ivy Foundation Early Phase Clinical Trials Consortium conducted a randomized, multi-institution clinical trial to evaluate immune responses and survival following neoadjuvant and/or adjuvant therapy with pembrolizumab in 35 patients with recurrent, surgically resectable glioblastoma. Patients who were randomized to receive neoadjuvant pembrolizumab, with continued adjuvant therapy following surgery, had significantly extended overall survival compared to patients who were randomized to receive adjuvant, post-surgical programmed cell death protein 1 (PD-1) blockade alone. Neoadjuvant PD-1 blockade was associated with upregulation of T cell- and interferon-γ-related gene expression, but downregulation of cell-cycle-related gene expression within the tumor, which was not seen in patients who received adjuvant therapy alone. Focal induction of programmed death-ligand 1 in the tumor microenvironment, enhanced clonal expansion of T cells, decreased PD-1 expression on peripheral blood T cells, and a decreasing monocytic population were observed more frequently in the neoadjuvant group than in patients treated only in the adjuvant setting. These findings suggest that the neoadjuvant administration of PD-1 blockade enhances both the local and systemic antitumor immune response and may represent a more efficacious approach to the treatment of this uniformly lethal brain tumor.


Journal ArticleDOI
24 May 2016-JAMA
TL;DR: In a single-center randomized clinical trial of 231 critically ill patients with KDIGO stage 2 AKI, early initiation of RRT reduced 90-day all-cause mortality compared with delayed initiation, and more patients in the early group recovered renal function by day 90.
Abstract: Importance Optimal timing of initiation of renal replacement therapy (RRT) for severe acute kidney injury (AKI) without life-threatening indications is still unknown. Objective To determine whether early initiation of RRT in patients who are critically ill with AKI reduces 90-day all-cause mortality. Design, Setting, and Participants Single-center randomized clinical trial of 231 critically ill patients with AKI of Kidney Disease: Improving Global Outcomes (KDIGO) stage 2 (≥2 times baseline creatinine or urinary output <0.5 mL/kg/h for ≥12 hours). Interventions Early (within 8 hours of diagnosis of KDIGO stage 2; n = 112) or delayed (within 12 hours of stage 3 AKI or no initiation; n = 119) initiation of RRT. Main Outcomes and Measures The primary end point was mortality at 90 days after randomization. Secondary end points included 28- and 60-day mortality, clinical evidence of organ dysfunction, recovery of renal function, requirement of RRT after day 90, duration of renal support, and intensive care unit (ICU) and hospital length of stay. Results Among 231 patients (mean age, 67 years; men, 146 [63.2%]), all patients in the early group (n = 112) and 108 of 119 patients (90.8%) in the delayed group received RRT. All patients completed follow-up at 90 days. Median time (Q1, Q3) from meeting full eligibility criteria to RRT initiation was significantly shorter in the early group (6.0 hours [Q1, Q3: 4.0, 7.0]) than in the delayed group (25.5 hours [Q1, Q3: 18.8, 40.3]; difference, −21.0 [95% CI, −24.0 to −18.0]). Early initiation of RRT significantly reduced 90-day mortality compared with delayed initiation (P = .03). More patients in the early group recovered renal function by day 90 (60 of 112 patients [53.6%] in the early group vs 46 of 119 patients [38.7%] in the delayed group; odds ratio [OR], 0.55 [95% CI, 0.32 to 0.93]; difference, 14.9% [95% CI, 2.2% to 27.6%]; P = .02). Duration of RRT and length of hospital stay were significantly shorter in the early group than in the delayed group (RRT: 9 days [Q1, Q3: 4, 44] vs 25 days [Q1, Q3: 7, >90]; P = .04; HR, 0.69 [95% CI, 0.48 to 1.00]; difference, −18 days [95% CI, −41 to 4]; hospital stay: 51 days [Q1, Q3: 31, 74] vs 82 days [Q1, Q3: 67, >90]). Conclusions and Relevance Among critically ill patients with AKI, early RRT compared with delayed initiation of RRT reduced mortality over the first 90 days. Further multicenter trials of this intervention are warranted. Trial Registration German Clinical Trial Registry Identifier: DRKS00004367

Journal ArticleDOI
TL;DR: Batman, a Python package for modeling exoplanet transit light curves, supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically.
Abstract: I introduce batman, a Python package for modeling exoplanet transit and eclipse light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 s with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman.
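
For reference, a minimal usage sketch following the batman documentation's quadratic limb-darkening example; the parameter values below are illustrative, not fitted to any real system.

```python
import numpy as np
import batman

# Set up the transit parameters (values here are illustrative).
params = batman.TransitParams()
params.t0 = 0.0                 # time of inferior conjunction
params.per = 1.0                # orbital period [days]
params.rp = 0.1                 # planet radius [stellar radii]
params.a = 15.0                 # semi-major axis [stellar radii]
params.inc = 87.0               # orbital inclination [degrees]
params.ecc = 0.0                # eccentricity
params.w = 90.0                 # longitude of periastron [degrees]
params.u = [0.1, 0.3]           # limb-darkening coefficients
params.limb_dark = "quadratic"  # limb-darkening law

t = np.linspace(-0.05, 0.05, 100)   # times at which to evaluate the model
m = batman.TransitModel(params, t)  # initialize the transit model
flux = m.light_curve(params)        # compute the model light curve
```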

Journal ArticleDOI
TL;DR: The finding that small non-coding RNAs (ncRNAs) are able to control gene expression in a sequence specific manner has had a massive impact on biology and it is becoming evident that miRNAs also have specific nuclear functions.
Abstract: The finding that small non-coding RNAs (ncRNAs) are able to control gene expression in a sequence specific manner has had a massive impact on biology. Recent improvements in high throughput sequencing and computational prediction methods have allowed the discovery and classification of several types of ncRNAs. Based on their precursor structures, biogenesis pathways and modes of action, ncRNAs are classified as small interfering RNAs (siRNAs), microRNAs (miRNAs), PIWI-interacting RNAs (piRNAs), endogenous small interfering RNAs (endo-siRNAs or esiRNAs), promoter-associated RNAs (pRNAs), small nucleolar RNAs (snoRNAs) and sno-derived RNAs. Among these, miRNAs appear as important cytoplasmic regulators of gene expression. miRNAs act as post-transcriptional regulators of their messenger RNA (mRNA) targets via mRNA degradation and/or translational repression. However, it is becoming evident that miRNAs also have specific nuclear functions. Among these, the most studied and debated activity is the miRNA-guided transcriptional control of gene expression. Although available data detail quite precisely the effectors of this activity, the mechanisms by which miRNAs identify their gene targets to control transcription are still a matter of debate. Here, we focus on nuclear functions of miRNAs and on alternative mechanisms of target recognition, at the promoter level, by miRNAs in carrying out transcriptional gene silencing.

Posted Content
TL;DR: Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.
Abstract: Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.
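
A minimal single-task sketch of the Meta-SGD update described above, assuming PyTorch autograd and treating the learned per-parameter learning rates `alpha` as meta-parameters alongside the initialization `theta`; in practice the outer update is averaged over a batch of tasks, and the loss callables here are placeholders.

```python
import torch

def meta_sgd_step(theta, alpha, support_loss_fn, query_loss_fn, meta_lr=1e-3):
    """One Meta-SGD meta-update on a single task (schematic).

    theta: list of meta-parameter tensors (the learned initialization).
    alpha: list of learned per-parameter learning rates (same shapes as theta).
    support_loss_fn / query_loss_fn: map a parameter list to a scalar loss.
    """
    # Inner step: a single SGD-like update using the learned rates alpha.
    grads = torch.autograd.grad(support_loss_fn(theta), theta, create_graph=True)
    theta_prime = [w - a * g for w, a, g in zip(theta, alpha, grads)]

    # Outer step: adapt both the initialization and the learning rates
    # by descending the query (held-out) loss of the adapted learner.
    meta_loss = query_loss_fn(theta_prime)
    meta_grads = torch.autograd.grad(meta_loss, theta + alpha)
    with torch.no_grad():
        for p, g in zip(theta + alpha, meta_grads):
            p -= meta_lr * g
    return float(meta_loss)
```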

Journal ArticleDOI
22 Jan 2016-Science
TL;DR: To correct DMD by skipping mutant dystrophin exons in postnatal muscle tissue in vivo, adeno-associated virus-9 (AAV9) was used to deliver gene-editing components to postnatal mdx mice, a model of DMD, providing a potential means of correcting mutations responsible for DMD and other monogenic disorders after birth.
Abstract: CRISPR/Cas9-mediated genome editing holds clinical potential for treating genetic diseases, such as Duchenne muscular dystrophy (DMD), which is caused by mutations in the dystrophin gene. To correct DMD by skipping mutant dystrophin exons in postnatal muscle tissue in vivo, we used adeno-associated virus–9 (AAV9) to deliver gene-editing components to postnatal mdx mice, a model of DMD. Different modes of AAV9 delivery were systematically tested, including intraperitoneal at postnatal day 1 (P1), intramuscular at P12, and retro-orbital at P18. Each of these methods restored dystrophin protein expression in cardiac and skeletal muscle to varying degrees, and expression increased from 3 to 12 weeks after injection. Postnatal gene editing also enhanced skeletal muscle function, as measured by grip strength tests 4 weeks after injection. This method provides a potential means of correcting mutations responsible for DMD and other monogenic disorders after birth.

Journal ArticleDOI
TL;DR: In patients with early-stage breast cancer, irradiation of the regional nodes had a marginal effect on overall survival; disease-free survival and distant disease-free survival were improved, and breast-cancer mortality was reduced.
Abstract: BACKGROUND The effect of internal mammary and medial supraclavicular lymph-node irradiation (regional nodal irradiation) added to whole-breast or thoracic-wall irradiation after surgery on survival among women with early-stage breast cancer is unknown. METHODS We randomly assigned women who had a centrally or medially located primary tumor, irrespective of axillary involvement, or an externally located tumor with axillary involvement to undergo either whole-breast or thoracic-wall irradiation in addition to regional nodal irradiation (nodal-irradiation group) or whole-breast or thoracic-wall irradiation alone (control group). The primary end point was overall survival. Secondary end points were the rates of disease-free survival, survival free from distant disease, and death from breast cancer. RESULTS Between 1996 and 2004, a total of 4004 patients underwent randomization. The majority of patients (76.1%) underwent breast-conserving surgery. After mastectomy, 73.4% of the patients in both groups underwent chest-wall irradiation. Nearly all patients with node-positive disease (99.0%) and 66.3% of patients with node-negative disease received adjuvant systemic treatment. At a median follow-up of 10.9 years, 811 patients had died. At 10 years, overall survival was 82.3% in the nodal-irradiation group and 80.7% in the control group (hazard ratio for death with nodal irradiation, 0.87; 95% confidence interval [CI], 0.76 to 1.00; P = 0.06). The rate of disease-free survival was 72.1% in the nodal-irradiation group and 69.1% in the control group (hazard ratio for disease progression or death, 0.89; 95% CI, 0.80 to 1.00; P = 0.04), the rate of distant disease-free survival was 78.0% versus 75.0% (hazard ratio, 0.86; 95% CI, 0.76 to 0.98; P = 0.02), and breast-cancer mortality was 12.5% versus 14.4% (hazard ratio, 0.82; 95% CI, 0.70 to 0.97; P = 0.02). Acute side effects of regional nodal irradiation were modest. CONCLUSIONS In patients with early-stage breast cancer, irradiation of the regional nodes had a marginal effect on overall survival. Disease-free survival and distant disease-free survival were improved, and breast-cancer mortality was reduced. (Funded by Fonds Cancer; ClinicalTrials.gov number, NCT00002851.)

Journal ArticleDOI
TL;DR: A comprehensive and up-to-date review of the state-of-the-art (SOTA) in AutoML, organized according to the DL pipeline and covering data preparation, feature engineering, hyperparameter optimization, and neural architecture search (NAS).
Abstract: Deep learning (DL) techniques have obtained remarkable achievements on various tasks, such as image recognition, object detection, and language modeling. However, building a high-quality DL system for a specific task highly relies on human expertise, hindering its wide application. Meanwhile, automated machine learning (AutoML) is a promising solution for building a DL system without human assistance and is being extensively studied. This paper presents a comprehensive and up-to-date review of the state-of-the-art (SOTA) in AutoML. According to the DL pipeline, we introduce AutoML methods – covering data preparation, feature engineering, hyperparameter optimization, and neural architecture search (NAS) – with a particular focus on NAS, as it is currently a hot sub-topic of AutoML. We summarize the representative NAS algorithms’ performance on the CIFAR-10 and ImageNet datasets and further discuss the following subjects of NAS methods: one/two-stage NAS, one-shot NAS, joint hyperparameter and architecture optimization, and resource-aware NAS. Finally, we discuss some open problems related to the existing AutoML methods for future research.
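
As a minimal concrete instance of the NAS sub-topic the review highlights, the sketch below runs random search over a tiny architecture space, the simplest baseline that stronger NAS methods build on; the search space and evaluation stub are illustrative, not from the survey.

```python
import random

# A toy architecture search space (illustrative).
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [16, 32, 64],
    "activation": ["relu", "gelu"],
}

def evaluate(arch):
    """Stand-in for training and validating a candidate architecture.
    A real AutoML system would train the network and return val accuracy."""
    return random.random()

def random_search(n_trials=20, seed=0):
    """Random search over architectures: sample, evaluate, keep the best."""
    random.seed(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

print(random_search())
```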

Journal ArticleDOI
Lorenzo Galluzzi1, J M Bravo-San Pedro2, Ilio Vitale, Stuart A. Aaronson3, John M. Abrams4, Dieter Adam5, Emad S. Alnemri6, Lucia Altucci7, David W. Andrews8, Margherita Annicchiarico-Petruzzelli, Eric H. Baehrecke9, Nicolas G. Bazan10, Mathieu J.M. Bertrand11, Mathieu J.M. Bertrand12, Katiuscia Bianchi13, Katiuscia Bianchi14, Mikhail V. Blagosklonny15, Klas Blomgren16, Christoph Borner17, Dale E. Bredesen18, Dale E. Bredesen19, Catherine Brenner20, Catherine Brenner21, Michelangelo Campanella22, Eleonora Candi23, Francesco Cecconi23, Francis Ka-Ming Chan9, Navdeep S. Chandel24, Emily H. Cheng25, Jerry E. Chipuk3, John A. Cidlowski26, Aaron Ciechanover27, Ted M. Dawson28, Valina L. Dawson28, V De Laurenzi29, R De Maria, Klaus-Michael Debatin30, N. Di Daniele23, Vishva M. Dixit31, Brian David Dynlacht32, Wafik S. El-Deiry33, Gian Maria Fimia34, Richard A. Flavell35, Simone Fulda36, Carmen Garrido37, Marie-Lise Gougeon38, Douglas R. Green, Hinrich Gronemeyer39, György Hajnóczky6, J M Hardwick28, Michael O. Hengartner40, Hidenori Ichijo41, Bertrand Joseph16, Philipp J. Jost42, Thomas Kaufmann43, Oliver Kepp2, Daniel J. Klionsky44, Richard A. Knight22, Richard A. Knight45, Sharad Kumar46, Sharad Kumar47, John J. Lemasters48, Beth Levine49, Beth Levine50, Andreas Linkermann5, Stuart A. Lipton, Richard A. Lockshin51, Carlos López-Otín52, Enrico Lugli, Frank Madeo53, Walter Malorni54, Jean-Christophe Marine55, Seamus J. Martin56, J-C Martinou57, Jan Paul Medema58, Pascal Meier, Sonia Melino23, Noboru Mizushima41, Ute M. Moll59, Cristina Muñoz-Pinedo, Gabriel Núñez44, Andrew Oberst60, Theocharis Panaretakis16, Josef M. Penninger, Marcus E. Peter24, Mauro Piacentini23, Paolo Pinton61, Jochen H. M. Prehn62, Hamsa Puthalakath63, Gabriel A. Rabinovich64, Kodi S. Ravichandran65, Rosario Rizzuto66, Cecília M. P. Rodrigues67, David C. Rubinsztein68, Thomas Rudel69, Yufang Shi70, Hans-Uwe Simon43, Brent R. Stockwell49, Brent R. Stockwell71, Gyorgy Szabadkai22, Gyorgy Szabadkai66, Stephen W.G. Tait72, H. L. Tang28, Nektarios Tavernarakis73, Nektarios Tavernarakis74, Yoshihide Tsujimoto, T Vanden Berghe11, T Vanden Berghe12, Peter Vandenabeele11, Peter Vandenabeele12, Andreas Villunger75, Erwin F. Wagner76, Henning Walczak22, Eileen White77, W. G. Wood78, Junying Yuan79, Zahra Zakeri80, Boris Zhivotovsky16, Boris Zhivotovsky81, Gerry Melino45, Gerry Melino23, Guido Kroemer1 
Paris Descartes University1, Institut Gustave Roussy2, Mount Sinai Hospital3, University of Texas Southwestern Medical Center4, University of Kiel5, Thomas Jefferson University6, Seconda Università degli Studi di Napoli7, University of Toronto8, University of Massachusetts Medical School9, Louisiana State University10, Flanders Institute for Biotechnology11, Ghent University12, Cancer Research UK13, Queen Mary University of London14, Roswell Park Cancer Institute15, Karolinska Institutet16, University of Freiburg17, University of California, San Francisco18, Buck Institute for Research on Aging19, French Institute of Health and Medical Research20, Université Paris-Saclay21, University College London22, University of Rome Tor Vergata23, Northwestern University24, Memorial Sloan Kettering Cancer Center25, National Institutes of Health26, Technion – Israel Institute of Technology27, Johns Hopkins University28, University of Chieti-Pescara29, University of Ulm30, Genentech31, New York University32, Pennsylvania State University33, University of Salento34, Yale University35, Goethe University Frankfurt36, University of Burgundy37, Pasteur Institute38, University of Strasbourg39, University of Zurich40, University of Tokyo41, Technische Universität München42, University of Bern43, University of Michigan44, Medical Research Council45, University of South Australia46, University of Adelaide47, Medical University of South Carolina48, Howard Hughes Medical Institute49, University of Texas at Dallas50, St. John's University51, University of Oviedo52, University of Graz53, Istituto Superiore di Sanità54, Katholieke Universiteit Leuven55, Trinity College, Dublin56, University of Geneva57, University of Amsterdam58, Stony Brook University59, University of Washington60, University of Ferrara61, Royal College of Surgeons in Ireland62, La Trobe University63, University of Buenos Aires64, University of Virginia65, University of Padua66, University of Lisbon67, University of Cambridge68, University of Würzburg69, Soochow University (Suzhou)70, Columbia University71, University of Glasgow72, University of Crete73, Foundation for Research & Technology – Hellas74, Innsbruck Medical University75, Carlos III Health Institute76, Rutgers University77, University of Minnesota78, Harvard University79, City University of New York80, Moscow State University81
TL;DR: The Nomenclature Committee on Cell Death formulates a set of recommendations to help scientists and researchers to discriminate between essential and accessory aspects of cell death.
Abstract: Cells exposed to extreme physicochemical or mechanical stimuli die in an uncontrollable manner, as a result of their immediate structural breakdown. Such an unavoidable variant of cellular demise is generally referred to as ‘accidental cell death’ (ACD). In most settings, however, cell death is initiated by a genetically encoded apparatus, correlating with the fact that its course can be altered by pharmacologic or genetic interventions. ‘Regulated cell death’ (RCD) can occur as part of physiologic programs or can be activated once adaptive responses to perturbations of the extracellular or intracellular microenvironment fail. The biochemical phenomena that accompany RCD may be harnessed to classify it into a few subtypes, which often (but not always) exhibit stereotyped morphologic features. Nonetheless, efficiently inhibiting the processes that are commonly thought to cause RCD, such as the activation of executioner caspases in the course of apoptosis, does not exert true cytoprotective effects in the mammalian system, but simply alters the kinetics of cellular demise as it shifts its morphologic and biochemical correlates. Conversely, bona fide cytoprotection can be achieved by inhibiting the transduction of lethal signals in the early phases of the process, when adaptive responses are still operational. Thus, the mechanisms that truly execute RCD may be less understood, less inhibitable and perhaps more homogeneous than previously thought. Here, the Nomenclature Committee on Cell Death formulates a set of recommendations to help scientists and researchers to discriminate between essential and accessory aspects of cell death.

Proceedings ArticleDOI
12 Jul 2017
TL;DR: Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8sec with a success rate of 93% on eight known objects with adversarial geometry.
Abstract: To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table. We use the resulting dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly predicts the probability of success of grasps from depth images, where grasps are specified as the planar position, angle, and depth of a gripper relative to an RGB-D sensor. Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8sec with a success rate of 93% on eight known objects with adversarial geometry and is 3x faster than registering point clouds to a precomputed dataset of objects and indexing grasps. The Dex-Net 2.0 grasp planner also has the highest success rate on a dataset of 10 novel rigid objects and achieves 99% precision (one false positive out of 69 grasps classified as robust) on a dataset of 40 novel household objects, some of which are articulated or deformable. Code, datasets, videos, and supplementary material are available at http://berkeleyautomation.github.io/dex-net .
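
As a rough sketch of the grasp-planning loop this abstract describes, the snippet below ranks sampled grasp candidates from a depth image by a grasp-quality score; `sample_candidates` and `gqcnn_predict` are hypothetical placeholders, not the released Dex-Net API.

```python
import numpy as np

def plan_grasp(depth_image, sample_candidates, gqcnn_predict, n_samples=100):
    """Rank sampled grasp candidates by predicted robustness (schematic).

    sample_candidates(depth_image, n) -> list of candidate grasps, each a
        (x, y, angle, depth) tuple of a gripper relative to the sensor.
    gqcnn_predict(depth_image, grasp) -> predicted probability of success.
    Both callables are stand-ins for the paper's learned components.
    """
    candidates = sample_candidates(depth_image, n_samples)
    # Score every candidate with the grasp-quality CNN and pick the best.
    scores = [gqcnn_predict(depth_image, g) for g in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```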

Journal ArticleDOI
TL;DR: The time is ripe for describing recent developments in superconducting devices, systems, and applications, as well as practical applications of QIP such as computation and simulation in physics and chemistry.
Abstract: During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years. Quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent developments in superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.

Journal ArticleDOI
TL;DR: It is shown that CHELSA climatological data has similar accuracy to other products for temperature, but that its predictions of precipitation patterns are better and can increase the accuracy of species range predictions.
Abstract: High-resolution information on climatic conditions is essential to many applications in environmental and ecological sciences. Here we present CHELSA (Climatologies at High resolution for the Earth's Land Surface Areas), downscaled model output of temperature and precipitation estimates from the ERA-Interim climatic reanalysis at a high resolution of 30 arc seconds. The temperature algorithm is based on statistical downscaling of atmospheric temperatures. The precipitation algorithm incorporates orographic predictors, including wind fields, valley exposition, and boundary layer height, with a subsequent bias correction. The resulting data consist of a monthly temperature and precipitation climatology for the years 1979 to 2013. We compare the data derived from the CHELSA algorithm with other standard gridded products and with station data from the Global Historical Climatology Network. We compare the performance of the new climatologies in species distribution modelling and show that we can increase the accuracy of species range predictions. We further show that CHELSA climatological data have similar accuracy to other products for temperature, but that their predictions of precipitation patterns are better.

Journal ArticleDOI
TL;DR: This study reviews recent advances in UQ methods used in deep learning, investigates the application of these methods in reinforcement learning (RL), and outlines a few important applications of UQ methods.
Abstract: Uncertainty quantification (UQ) plays a pivotal role in reduction of uncertainties during both optimization and decision making processes. It can be applied to solve a variety of real-world applications in science and engineering. Bayesian approximation and ensemble learning techniques are two most widely-used UQ methods in the literature. In this regard, researchers have proposed different UQ methods and examined their performance in a variety of applications such as computer vision (e.g., self-driving cars and object detection), image processing (e.g., image restoration), medical image analysis (e.g., medical image classification and segmentation), natural language processing (e.g., text classification, social media texts and recidivism risk-scoring), bioinformatics, etc. This study reviews recent advances in UQ methods used in deep learning. Moreover, we also investigate the application of these methods in reinforcement learning (RL). Then, we outline a few important applications of UQ methods. Finally, we briefly highlight the fundamental research challenges faced by UQ methods and discuss the future research directions in this field.
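
As a concrete instance of the ensemble-learning family of UQ methods named above, the sketch below derives uncertainty signals from the disagreement of several independently trained models; the function and the particular signal definitions are illustrative, not from the survey.

```python
import numpy as np

def ensemble_uncertainty(models, x):
    """Deep-ensemble UQ (schematic): run several independently trained
    models on the same input and use their disagreement as a signal.

    models: list of trained callables, each mapping x -> class probabilities.
    """
    probs = np.stack([m(x) for m in models])  # (n_models, n_classes)
    mean = probs.mean(axis=0)                 # ensemble prediction
    # Entropy of the averaged prediction reflects total uncertainty;
    # the spread across members reflects epistemic disagreement.
    entropy = float(-np.sum(mean * np.log(mean + 1e-12)))
    disagreement = float(probs.std(axis=0).mean())
    return mean, entropy, disagreement
```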

Book ChapterDOI
Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, Kevin Murphy
08 Sep 2018
TL;DR: In this article, it was shown that it is possible to replace many of the expensive 3D convolutions by low-cost 2D convolutions, and the best result was achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level semantic features is more useful.
Abstract: Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic than that in 2D static image classification. Three main challenges exist, including spatial (image) feature representation, temporal information representation, and model/computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model/computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, the best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level "semantic" features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs, including separable spatial/temporal convolution and feature gating, our system results in an effective video classification system that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24).
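
A minimal PyTorch sketch of the separable spatial/temporal convolution design mentioned above, factorizing a full 3D convolution into a spatial convolution followed by a temporal one; the layer sizes and activation choices are illustrative, not the paper's exact architecture.

```python
import torch.nn as nn

class SeparableSpatioTemporalConv(nn.Module):
    """Factorize a k x k x k 3D convolution into a spatial 1 x k x k
    convolution followed by a temporal k x 1 x 1 convolution, the
    cost-saving design discussed in the abstract (schematic)."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        p = k // 2
        # Spatial convolution: no mixing across time.
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, k, k), padding=(0, p, p))
        # Temporal convolution: no mixing across space.
        self.temporal = nn.Conv3d(out_ch, out_ch, (k, 1, 1), padding=(p, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.relu(self.temporal(self.relu(self.spatial(x))))
```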

Journal ArticleDOI
TL;DR: A global timetree of life synthesized from 2,274 studies representing 50,632 species, together with an examination of the pattern and rate of diversification and the timing of speciation, suggests that speciation and diversification are processes dominated by random events and that adaptive change is largely a separate process.
Abstract: Genomic data are rapidly resolving the tree of living species calibrated to time, the timetree of life, which will provide a framework for research in diverse fields of science. Previous analyses of taxonomically restricted timetrees have found a decline in the rate of diversification in many groups of organisms, often attributed to ecological interactions among species. Here, we have synthesized a global timetree of life from 2,274 studies representing 50,632 species and examined the pattern and rate of diversification as well as the timing of speciation. We found that species diversity has been mostly expanding overall and in many smaller groups of species, and that the rate of diversification in eukaryotes has been mostly constant. We also identified, and avoided, potential biases that may have influenced previous analyses of diversification including low levels of taxon sampling, small clade size, and the inclusion of stem branches in clade analyses. We found consistency in time-to-speciation among plants and animals, ∼2 My, as measured by intervals of crown and stem species times. Together, this clock-like change at different levels suggests that speciation and diversification are processes dominated by random events and that adaptive change is largely a separate process.

Journal ArticleDOI
11 Oct 2017-Joule
TL;DR: In this paper, the authors track the metal content associated with compounds used in lithium-ion batteries (LIBs) and find that most of the key constituents, including manganese, nickel, and natural graphite, have sufficient supply to meet the anticipated increase in demand for LIBs.

Proceedings ArticleDOI
30 Jul 2015
TL;DR: It is shown that the observed features model is most effective at capturing the information present for entity pairs with textual relations, and a combination of the two combines the strengths of both model types.
Abstract: In this paper we show the surprising effectiveness of a simple observed features model in comparison to latent feature models on two benchmark knowledge base completion datasets, FB15K and WN18. We also compare latent and observed feature models on a more challenging dataset derived from FB15K, and additionally coupled with textual mentions from a web-scale corpus. We show that the observed features model is most effective at capturing the information present for entity pairs with textual relations, and a combination of the two combines the strengths of both model types.
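
A toy sketch in the spirit of the observed-features approach this abstract describes: featurize a candidate triple with the relations already observed between its entity pair and fit a simple log-linear scorer; the feature templates, data, and names are illustrative, not the paper's exact model.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# A toy KB: relations already observed between entity pairs (illustrative).
KB = {("rome", "italy"): {"capital_of"}, ("paris", "spain"): set()}

def observed_features(head, rel, tail):
    """Observed-feature map for a candidate triple (schematic)."""
    feats = {"rel=" + rel: 1.0}
    for r in KB.get((head, tail), ()):
        # Co-occurrence of the queried relation with observed relations.
        feats["rel=" + rel + "&obs=" + r] = 1.0
    return feats

# Featurize positive/negative candidate triples and fit a log-linear scorer.
triples = [("rome", "located_in", "italy"), ("paris", "located_in", "spain")]
labels = [1, 0]  # toy supervision for knowledge base completion
vec = DictVectorizer()
X = vec.fit_transform([observed_features(*t) for t in triples])
model = LogisticRegression().fit(X, labels)
```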