
Posted Content
TL;DR: Strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification; on smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies.
Abstract: We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm. This "randomized smoothing" technique has been proposed recently in the literature, but existing guarantees are loose. We prove a tight robustness guarantee in $\ell_2$ norm for smoothing with Gaussian noise. We use randomized smoothing to obtain an ImageNet classifier with e.g. a certified top-1 accuracy of 49% under adversarial perturbations with $\ell_2$ norm less than 0.5 (=127/255). No certified defense has been shown feasible on ImageNet except for smoothing. On smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies. Our strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification. Code and models are available at this http URL.
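As a concrete illustration of the smoothing procedure described above, the sketch below gives a Monte Carlo estimate of the smoothed prediction and the tight $\ell_2$ radius $R = \tfrac{\sigma}{2}\bigl(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\bigr)$ stated in the paper; the `base_classifier` callable and all parameter values are hypothetical placeholders, and this is not the authors' released implementation.

```python
import numpy as np
from scipy.stats import norm

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000, num_classes=1000):
    """Monte Carlo estimate of the smoothed classifier g(x) = argmax_c P[f(x + eps) = c],
    eps ~ N(0, sigma^2 I). `base_classifier` maps a batch of inputs to integer labels
    (a hypothetical callable, e.g. a wrapped ImageNet model)."""
    noise = sigma * np.random.randn(n_samples, *x.shape)
    labels = base_classifier(x[None, ...] + noise)      # shape: (n_samples,)
    counts = np.bincount(labels, minlength=num_classes)
    return int(np.argmax(counts)), counts

def l2_certified_radius(p_a, p_b, sigma):
    """Tight l2 robustness radius for Gaussian smoothing:
    R = (sigma / 2) * (Phi^{-1}(p_A) - Phi^{-1}(p_B)),
    with p_A a lower bound on the top-class probability and p_B an upper bound on the runner-up."""
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))
```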

719 citations


Journal ArticleDOI
TL;DR: In certain subgroups, PFS was positively associated with PD-L1 expression (KRAS, EGFR) and with smoking status (BRAF, HER2), and the lack of response in the ALK group was notable.

719 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the effective climate sensitivity, closely related to the global surface temperature response to CO2 doubling, has increased substantially in the Coupled Model Intercomparison Project phase 6 (CMIP6), with values spanning 1.8-5.6 K across 27 GCMs and exceeding 4.5 K in 10 of them.
Abstract: Equilibrium climate sensitivity, the global surface temperature response to CO2 doubling, has been persistently uncertain. Recent consensus places it likely within 1.5-4.5 K. Global climate models (GCMs), which attempt to represent all relevant physical processes, provide the most direct means of estimating climate sensitivity via CO2 quadrupling experiments. Here we show that the closely related effective climate sensitivity has increased substantially in Coupled Model Intercomparison Project phase 6 (CMIP6), with values spanning 1.8-5.6 K across 27 GCMs and exceeding 4.5 K in 10 of them. This (statistically insignificant) increase is primarily due to stronger positive cloud feedbacks from decreasing extratropical low cloud coverage and albedo. Both of these are tied to the physical representation of clouds which in CMIP6 models lead to weaker responses of extratropical low cloud cover and water content to unforced variations in surface temperature. Establishing the plausibility of these higher sensitivity models is imperative given their implied societal ramifications.
Plain Language Summary: The severity of climate change is closely related to how much the Earth warms in response to greenhouse gas increases. Here we find that the temperature response to an abrupt quadrupling of atmospheric carbon dioxide has increased substantially in the latest generation of global climate models. This is primarily because low cloud water content and coverage decrease more strongly with global warming, causing enhanced planetary absorption of sunlight – an amplifying feedback that ultimately results in more warming. Differences in the physical representation of clouds in models drive this enhanced sensitivity relative to the previous generation of models. It is crucial to establish whether the latest models, which presumably represent the climate system better than their predecessors, are also providing a more realistic picture of future climate warming.

719 citations


Journal ArticleDOI
TL;DR: This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
Abstract: Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

719 citations


Journal ArticleDOI
TL;DR: A semidirect VO that uses direct methods to track and triangulate pixels that are characterized by high image gradients, but relies on proven feature-based methods for joint optimization of structure and motion is proposed.
Abstract: Direct methods for visual odometry (VO) have gained popularity for their capability to exploit information from all intensity gradients in the image. However, low computational speed as well as missing guarantees for optimality and consistency are limiting factors of direct methods, in which established feature-based methods succeed instead. Based on these considerations, we propose a semidirect VO (SVO) that uses direct methods to track and triangulate pixels that are characterized by high image gradients, but relies on proven feature-based methods for joint optimization of structure and motion. Together with a robust probabilistic depth estimation algorithm, this enables us to efficiently track pixels lying on weak corners and edges in environments with little or high-frequency texture. We further demonstrate that the algorithm can easily be extended to multiple cameras, to track edges, to include motion priors, and to enable the use of very large field of view cameras, such as fisheye and catadioptric ones. Experimental evaluation on benchmark datasets shows that the algorithm is significantly faster than the state of the art while achieving highly competitive accuracy.

719 citations


Journal ArticleDOI
TL;DR: This paper introduces a novel polyp localization method for colonoscopy videos based on a model of appearance for polyps which defines polyp boundaries in terms of valley information, and shows that this method outperforms state-of-the-art computational saliency results.

719 citations


Journal ArticleDOI
03 May 2019
TL;DR: The estimated percentages of patients who are eligible for and who respond to checkpoint inhibitor drugs are higher than reported estimates for drugs approved for genome-driven oncology but remain modest.
Abstract: Importance Immunotherapy checkpoint inhibitors have generated considerable interest because of durable responses in a number of hitherto intractable tumor types. Objective To estimate the percentage of patients with cancer in the United States who are eligible for and respond to checkpoint inhibitor drugs approved for oncology indications by the US Food and Drug Administration (FDA). Design, Setting, and Participants Retrospective cross-sectional study performed from June 2018 through October 2018 using publicly available data to determine (1) demographic characteristics of patients with advanced or metastatic cancer, (2) FDA data on checkpoint inhibitors approved from January 2011 through August 2018, (3) measures of response from drug labels, and (4) published reports estimating the frequency of various inclusion criteria. Main Outcomes and Measures The estimated percentages of US patients with cancer who are eligible for and who respond to immunotherapy checkpoint inhibitor drugs, by year. Results Six checkpoint inhibitor drugs were approved for 14 indications between March 25, 2011, and August 17, 2018. The estimated percentage of patients with cancer who were eligible for checkpoint inhibitor drugs increased from 1.54% (95% CI, 1.51%-1.57%) in 2011 to 43.63% (95% CI, 43.51%-43.75%) in 2018. The percentage of patients with cancer estimated to respond to checkpoint inhibitor drugs was 0.14% (95% CI, 0.13%-0.15%) in 2011 when ipilimumab was approved for unresectable or metastatic melanoma and increased to 5.86% (95% CI, 5.80%-5.92%) by 2015. By 2018, the estimated percentage of responders increased to 12.46% (95% CI, 12.37%-12.54%). Conclusions and Relevance The estimated percentages of patients who are eligible for and who respond to checkpoint inhibitor drugs are higher than reported estimates for drugs approved for genome-driven oncology but remain modest. Future research should explore biomarkers to maximize the benefit of immunotherapy among patients receiving it.
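The eligibility and response percentages reported above are, in essence, prevalence-weighted sums over approved indications; the toy sketch below illustrates that arithmetic under invented field names (`share_of_cancer_patients`, `response_rate`) and made-up numbers, and is not the study's actual estimation code or data.

```python
def estimate_percentages(indications):
    """Aggregate per-indication estimates into nationwide percentages.
    Each entry is assumed to give the share of US cancer patients eligible under that
    indication and the response rate taken from the drug label (both hypothetical)."""
    eligible = sum(d["share_of_cancer_patients"] for d in indications)
    responders = sum(d["share_of_cancer_patients"] * d["response_rate"] for d in indications)
    return 100 * eligible, 100 * responders

# Illustrative only: two made-up indications.
example = [
    {"share_of_cancer_patients": 0.10, "response_rate": 0.20},
    {"share_of_cancer_patients": 0.05, "response_rate": 0.40},
]
print(estimate_percentages(example))  # -> (15.0, 4.0): 15% eligible, 4% estimated responders
```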

719 citations


Journal ArticleDOI
TL;DR: The results indicated that commonly observed polyamide particles can serve as a carrier of antibiotics in the aquatic environment and correlated positively with octanol-water partition coefficients (Log Kow).

719 citations


Journal ArticleDOI
26 Apr 2016-JAMA
TL;DR: A clinical decision tool to identify patients expected to derive benefit vs harm from continuing thienopyridine beyond 1 year after percutaneous coronary intervention is developed to inform dual antiplatelet therapy duration.
Abstract: Importance Dual antiplatelet therapy after percutaneous coronary intervention (PCI) reduces ischemia but increases bleeding. Objective To develop a clinical decision tool to identify patients expected to derive benefit vs harm from continuing thienopyridine beyond 1 year after PCI. Design, Setting, and Participants Among 11 648 randomized DAPT Study patients from 11 countries (August 2009-May 2014), a prediction rule was derived stratifying patients into groups to distinguish ischemic and bleeding risk 12 to 30 months after PCI. Validation was internal via bootstrap resampling and external among 8136 patients from 36 countries randomized in the PROTECT trial (June 2007-July 2014). Exposures Twelve months of open-label thienopyridine plus aspirin, then randomized to 18 months of continued thienopyridine plus aspirin vs placebo plus aspirin. Main Outcomes and Measures Ischemia (myocardial infarction or stent thrombosis) and bleeding (moderate or severe) 12 to 30 months after PCI. Results Among DAPT Study patients (derivation cohort; mean age, 61.3 years; women, 25.1%), ischemia occurred in 348 patients (3.0%) and bleeding in 215 (1.8%). Derivation cohort models predicting ischemia and bleeding had c statistics of 0.70 and 0.68, respectively. The prediction rule assigned 1 point each for myocardial infarction at presentation, prior myocardial infarction or PCI, diabetes, stent diameter less than 3 mm, smoking, and paclitaxel-eluting stent; 2 points each for history of congestive heart failure/low ejection fraction and vein graft intervention; −1 point for age 65 to younger than 75 years; and −2 points for age 75 years or older. Among the high score group (score ≥2, n = 5917), continued thienopyridine vs placebo was associated with reduced ischemic events (2.7% vs 5.7%; risk difference [RD], −3.0% [95% CI, −4.1% to −2.0%], P Conclusion and Relevance Among patients not sustaining major bleeding or ischemic events 1 year after PCI, a prediction rule assessing late ischemic and bleeding risks to inform dual antiplatelet therapy duration showed modest accuracy in derivation and validation cohorts. This rule requires further prospective evaluation to assess potential effects on patient care, as well as validation in other cohorts. Trial Registration clinicaltrials.gov Identifier: NCT00977938.
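For readability, the point assignments quoted in the Results can be written out as a small scoring function; the sketch below is illustrative only (the argument names are invented here) and is not a clinical tool.

```python
def dapt_score(age, mi_at_presentation, prior_mi_or_pci, diabetes,
               stent_diameter_lt_3mm, smoker, paclitaxel_eluting_stent,
               chf_or_low_ef, vein_graft_intervention):
    """Sum the point assignments quoted in the abstract (illustrative only)."""
    score = 0
    # 1 point each
    score += 1 if mi_at_presentation else 0
    score += 1 if prior_mi_or_pci else 0
    score += 1 if diabetes else 0
    score += 1 if stent_diameter_lt_3mm else 0
    score += 1 if smoker else 0
    score += 1 if paclitaxel_eluting_stent else 0
    # 2 points each
    score += 2 if chf_or_low_ef else 0
    score += 2 if vein_graft_intervention else 0
    # Age adjustments
    if 65 <= age < 75:
        score -= 1
    elif age >= 75:
        score -= 2
    return score  # the abstract defines the "high score" group as score >= 2
```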

719 citations


Journal ArticleDOI
Marlee A. Tucker1, Katrin Böhning-Gaese1, William F. Fagan2, John M. Fryxell3, Bram Van Moorter, Susan C. Alberts4, Abdullahi H. Ali, Andrew M. Allen5, Andrew M. Allen6, Nina Attias7, Tal Avgar8, Hattie L. A. Bartlam-Brooks9, Buuveibaatar Bayarbaatar10, Jerrold L. Belant11, Alessandra Bertassoni12, Dean E. Beyer13, Laura R. Bidner14, Floris M. van Beest15, Stephen Blake10, Stephen Blake16, Niels Blaum17, Chloe Bracis1, Danielle D. Brown18, P J Nico de Bruyn19, Francesca Cagnacci20, Francesca Cagnacci21, Justin M. Calabrese2, Justin M. Calabrese22, Constança Camilo-Alves23, Simon Chamaillé-Jammes24, André Chiaradia25, André Chiaradia26, Sarah C. Davidson27, Sarah C. Davidson16, Todd E. Dennis28, Stephen DeStefano29, Duane R. Diefenbach30, Iain Douglas-Hamilton31, Iain Douglas-Hamilton32, Julian Fennessy, Claudia Fichtel33, Wolfgang Fiedler16, Christina Fischer34, Ilya R. Fischhoff35, Christen H. Fleming2, Christen H. Fleming22, Adam T. Ford36, Susanne A. Fritz1, Benedikt Gehr37, Jacob R. Goheen38, Eliezer Gurarie2, Eliezer Gurarie39, Mark Hebblewhite40, Marco Heurich41, Marco Heurich42, A. J. Mark Hewison43, Christian Hof, Edward Hurme2, Lynne A. Isbell14, René Janssen, Florian Jeltsch17, Petra Kaczensky44, Adam Kane45, Peter M. Kappeler33, Matthew J. Kauffman38, Roland Kays46, Roland Kays47, Duncan M. Kimuyu48, Flávia Koch33, Flávia Koch49, Bart Kranstauber37, Scott D. LaPoint50, Scott D. LaPoint16, Peter Leimgruber22, John D. C. Linnell, Pascual López-López51, A. Catherine Markham52, Jenny Mattisson, Emília Patrícia Medici53, Ugo Mellone54, Evelyn H. Merrill8, Guilherme Miranda de Mourão55, Ronaldo Gonçalves Morato, Nicolas Morellet43, Thomas A. Morrison56, Samuel L. Díaz-Muñoz57, Samuel L. Díaz-Muñoz14, Atle Mysterud58, Dejid Nandintsetseg1, Ran Nathan59, Aidin Niamir, John Odden, Robert B. O'Hara60, Luiz Gustavo R. Oliveira-Santos7, Kirk A. Olson10, Bruce D. Patterson61, Rogério Cunha de Paula, Luca Pedrotti, Björn Reineking62, Björn Reineking63, Martin Rimmler, Tracey L. Rogers64, Christer Moe Rolandsen, Christopher S. Rosenberry65, Daniel I. Rubenstein66, Kamran Safi16, Kamran Safi67, Sonia Saïd, Nir Sapir68, Hall Sawyer, Niels Martin Schmidt15, Nuria Selva69, Agnieszka Sergiel69, Enkhtuvshin Shiilegdamba10, João P. Silva70, João P. Silva71, João P. Silva72, Navinder J. Singh5, Erling Johan Solberg, Orr Spiegel14, Olav Strand, Siva R. Sundaresan, Wiebke Ullmann17, Ulrich Voigt44, Jake Wall32, David W. Wattles29, Martin Wikelski16, Martin Wikelski67, Christopher C. Wilmers73, John W. Wilson74, George Wittemyer75, George Wittemyer32, Filip Zięba, Tomasz Zwijacz-Kozica, Thomas Mueller1, Thomas Mueller22 
Goethe University Frankfurt1, University of Maryland, College Park2, University of Guelph3, Duke University4, Swedish University of Agricultural Sciences5, Radboud University Nijmegen6, Federal University of Mato Grosso do Sul7, University of Alberta8, Royal Veterinary College9, Wildlife Conservation Society10, Mississippi State University11, Sao Paulo State University12, Michigan Department of Natural Resources13, University of California, Davis14, Aarhus University15, Max Planck Society16, University of Potsdam17, Middle Tennessee State University18, Mammal Research Institute19, Edmund Mach Foundation20, Harvard University21, Smithsonian Conservation Biology Institute22, University of Évora23, University of Montpellier24, Parks Victoria25, Monash University26, Ohio State University27, Fiji National University28, University of Massachusetts Amherst29, United States Geological Survey30, University of Oxford31, Save the Elephants32, German Primate Center33, Technische Universität München34, Institute of Ecosystem Studies35, University of British Columbia36, University of Zurich37, University of Wyoming38, University of Washington39, University of Montana40, Bavarian Forest National Park41, University of Freiburg42, University of Toulouse43, University of Veterinary Medicine Vienna44, University College Cork45, North Carolina State University46, North Carolina Museum of Natural Sciences47, Karatina University48, University of Lethbridge49, Lamont–Doherty Earth Observatory50, University of Valencia51, Stony Brook University52, International Union for Conservation of Nature and Natural Resources53, University of Alicante54, Empresa Brasileira de Pesquisa Agropecuária55, University of Glasgow56, New York University57, University of Oslo58, Hebrew University of Jerusalem59, Norwegian University of Science and Technology60, Field Museum of Natural History61, University of Bayreuth62, University of Grenoble63, University of New South Wales64, Pennsylvania Game Commission65, Princeton University66, University of Konstanz67, University of Haifa68, Polish Academy of Sciences69, University of Porto70, Instituto Superior de Agronomia71, University of Lisbon72, University of California, Santa Cruz73, University of Pretoria74, Colorado State University75
26 Jan 2018-Science
TL;DR: Using a unique GPS-tracking database of 803 individuals across 57 species, it is found that movements of mammals in areas with a comparatively high human footprint were on average one-half to one-third the extent of their movements in areas with a low human footprint.
Abstract: Animal movement is fundamental for ecosystem functioning and species survival, yet the effects of the anthropogenic footprint on animal movements have not been estimated across species. Using a unique GPS-tracking database of 803 individuals across 57 species, we found that movements of mammals in areas with a comparatively high human footprint were on average one-half to one-third the extent of their movements in areas with a low human footprint. We attribute this reduction to behavioral changes of individual animals and to the exclusion of species with long-range movements from areas with higher human impact. Global loss of vagility alters a key ecological trait of animals that affects not only population persistence but also ecosystem processes such as predator-prey interactions, nutrient cycling, and disease transmission.

719 citations


Journal ArticleDOI
TL;DR: In this randomized trial addressing the unclear effect of decompressive craniectomy on clinical outcomes in patients with refractory traumatic intracranial hypertension, the primary outcome was the rating on the Extended Glasgow Outcome Scale (GOS-E) (an 8-point scale, ranging from death to upper good recovery) at 6 months.
Abstract: BackgroundThe effect of decompressive craniectomy on clinical outcomes in patients with refractory traumatic intracranial hypertension remains unclear. MethodsFrom 2004 through 2014, we randomly assigned 408 patients, 10 to 65 years of age, with traumatic brain injury and refractory elevated intracranial pressure (>25 mm Hg) to undergo decompressive craniectomy or receive ongoing medical care. The primary outcome was the rating on the Extended Glasgow Outcome Scale (GOS-E) (an 8-point scale, ranging from death to “upper good recovery” [no injury-related problems]) at 6 months. The primary-outcome measure was analyzed with an ordinal method based on the proportional-odds model. If the model was rejected, that would indicate a significant difference in the GOS-E distribution, and results would be reported descriptively. ResultsThe GOS-E distribution differed between the two groups (P<0.001). The proportional-odds assumption was rejected, and therefore results are reported descriptively. At 6 months, the GOS...

Journal ArticleDOI
Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam1, Ece Aşılar1  +2212 moreInstitutions (157)
TL;DR: A fully-fledged particle-flow reconstruction algorithm tuned to the CMS detector was developed and has been consistently used in physics analyses, for the first time at a hadron collider.
Abstract: The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic τ decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. The data collected by CMS at a centre-of-mass energy of 8\TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.

Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019 measured the global, regional, and national prevalence, disability-adjusted life-years (DALYs), years lived with disability (YLDs), and years of life lost (YLLs) for mental disorders from 1990 to 2019.

Proceedings Article
27 Sep 2018
TL;DR: This paper uses the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank, and constructs a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP.
Abstract: Neural message passing algorithms for semi-supervised classification on graphs have recently achieved great success. However, for classifying a node these methods only consider nodes that are a few propagation steps away and the size of this utilized neighborhood is hard to extend. In this paper, we use the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank. We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP. Our model's training time is on par or faster and its number of parameters on par or lower than previous models. It leverages a large, adjustable neighborhood for classification and can be easily combined with any neural network. We show that this model outperforms several recently proposed methods for semi-supervised classification in the most thorough study done so far for GCN-like models. Our implementation is available online.
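A minimal numpy sketch of the propagation step that PPNP/APPNP is built around is given below, assuming predictions `H` from any neural network (one row per node) and a symmetrically normalized adjacency matrix with self-loops `A_hat`; the teleport probability `alpha` and the number of power-iteration steps `K` are placeholder values, and this is not the authors' implementation.

```python
import numpy as np

def appnp_propagate(H, A_hat, alpha=0.1, K=10):
    """Approximate personalized-PageRank propagation of neural predictions.

    H:     (n_nodes, n_classes) predictions from any neural network.
    A_hat: (n_nodes, n_nodes) symmetrically normalized adjacency matrix with self-loops.
    """
    Z = H.copy()
    for _ in range(K):
        # Power-iteration step: mix neighborhood information with the original predictions.
        Z = (1.0 - alpha) * (A_hat @ Z) + alpha * H
    return Z  # apply a softmax over each row afterwards to obtain class probabilities
```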

Proceedings ArticleDOI
28 Jul 2019
TL;DR: It is found that the most important and confident heads play consistent and often linguistically-interpretable roles, and that when pruning heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, specialized heads are the last to be pruned.
Abstract: Multi-head self-attention is a key component of the Transformer, a state-of-the-art architecture for neural machine translation. In this work we evaluate the contribution made by individual attention heads to the overall performance of the model and analyze the roles played by them in the encoder. We find that the most important and confident heads play consistent and often linguistically-interpretable roles. When pruning heads using a method based on stochastic gates and a differentiable relaxation of the L0 penalty, we observe that specialized heads are last to be pruned. Our novel pruning method removes the vast majority of heads without seriously affecting performance. For example, on the English-Russian WMT dataset, pruning 38 out of 48 encoder heads results in a drop of only 0.15 BLEU.
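The pruning method is described as stochastic gates with a differentiable relaxation of the L0 penalty; below is a generic sketch of such hard-concrete gates (in the style of Louizos et al.), with one gate per attention head multiplying that head's output and the expected L0 added to the loss. The hyperparameters are assumed values, and this is an illustration of the general technique rather than the paper's exact formulation.

```python
import math
import torch

# One trainable log_alpha per attention head; each head's output is multiplied by its gate.
BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1   # assumed temperature and stretch hyperparameters

def hard_concrete_gate(log_alpha, training=True):
    """Sample a gate in [0, 1] from a stretched, rectified concrete distribution."""
    if training:
        u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
        s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / BETA)
    else:
        s = torch.sigmoid(log_alpha)
    return (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)

def expected_l0(log_alpha):
    """Differentiable relaxation of the L0 penalty: expected number of open (non-zero) gates."""
    return torch.sigmoid(log_alpha - BETA * math.log(-GAMMA / ZETA)).sum()
```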

Journal ArticleDOI
TL;DR: In this paper, an implementation structure of Industry 4.0, consisting of a multi-layered framework, is described, and it is shown how it can assist people in understanding and achieving the requirements of Industry 5.0.

Journal ArticleDOI
26 Mar 2015-Cell
TL;DR: There is a timely need to accelerate the understanding of the photosynthetic process in crops to allow informed and guided improvements via in-silico-assisted genetic engineering.

Journal ArticleDOI
TL;DR: The continuation of current patterns of population weight gain will lead to continuing increases in the future burden of cancer, and the need for a global effort to abate the increasing numbers of people with high BMI is emphasised.
Abstract: Summary Background High body-mass index (BMI; defined as 25 kg/m 2 or greater) is associated with increased risk of cancer. To inform public health policy and future research, we estimated the global burden of cancer attributable to high BMI in 2012. Methods In this population-based study, we derived population attributable fractions (PAFs) using relative risks and BMI estimates in adults by age, sex, and country. Assuming a 10-year lag-period between high BMI and cancer occurrence, we calculated PAFs using BMI estimates from 2002 and used GLOBOCAN2012 data to estimate numbers of new cancer cases attributable to high BMI. We also calculated the proportion of cancers that were potentially avoidable had populations maintained their mean BMIs recorded in 1982. We did secondary analyses to test the model and to estimate the effects of hormone replacement therapy (HRT) use and smoking. Findings Worldwide, we estimate that 481 000 or 3·6% of all new cancer cases in adults (aged 30 years and older after the 10-year lag period) in 2012 were attributable to high BMI. PAFs were greater in women than in men (5·4% vs 1·9%). The burden of attributable cases was higher in countries with very high and high human development indices (HDIs; PAF 5·3% and 4·8%, respectively) than in those with moderate (1·6%) and low HDIs (1·0%). Corpus uteri, postmenopausal breast, and colon cancers accounted for 63·6% of cancers attributable to high BMI. A quarter (about 118 000) of the cancer cases related to high BMI in 2012 could be attributed to the increase in BMI since 1982. Interpretation These findings emphasise the need for a global effort to abate the increasing numbers of people with high BMI. Assuming that the association between high BMI and cancer is causal, the continuation of current patterns of population weight gain will lead to continuing increases in the future burden of cancer. Funding World Cancer Research Fund International, European Commission (Marie Curie Intra-European Fellowship), Australian National Health and Medical Research Council, and US National Institutes of Health.
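For context, population attributable fractions of the kind estimated here are conventionally computed with a Levin-type formula over exposure categories; the generic form below, with $p_i$ the (lagged) prevalence of BMI category $i$ and $\mathrm{RR}_i$ its relative risk, is a standard textbook expression and not a verbatim reproduction of the paper's estimation procedure.

```latex
\mathrm{PAF} \;=\; \frac{\sum_i p_i\,(\mathrm{RR}_i - 1)}{1 + \sum_i p_i\,(\mathrm{RR}_i - 1)},
\qquad
\text{attributable new cases} \;=\; \mathrm{PAF} \times \text{total new cancer cases}.
```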

Journal ArticleDOI
TL;DR: It is shown quite generally that, in a steady state, the dispersion of observables, like the number of consumed or produced molecules or the number of steps of a motor, is constrained by the thermodynamic cost of generating it.
Abstract: Biomolecular systems like molecular motors or pumps, transcription and translation machinery, and other enzymatic reactions, can be described as Markov processes on a suitable network. We show quite generally that, in a steady state, the dispersion of observables, like the number of consumed or produced molecules or the number of steps of a motor, is constrained by the thermodynamic cost of generating it. An uncertainty $\ensuremath{\epsilon}$ requires at least a cost of $2{k}_{B}T/{\ensuremath{\epsilon}}^{2}$ independent of the time required to generate the output.
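Written out in symbols, the bound quoted in the abstract relates the relative uncertainty of an integrated output $X$ (molecules consumed or produced, motor steps) to the heat dissipated in the steady state; defining $\epsilon$ as the relative standard deviation is the usual convention and is assumed here.

```latex
\epsilon^2 \;\equiv\; \frac{\operatorname{Var}(X)}{\langle X\rangle^{2}},
\qquad
Q_{\mathrm{diss}} \;\ge\; \frac{2\,k_{B}T}{\epsilon^{2}} .
```

Equivalently, the product of the squared relative uncertainty and the dissipated heat is bounded below by $2k_BT$, independent of the time over which the output is generated.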

Journal ArticleDOI
TL;DR: A thermally activated delayed fluorescence material for organic light-emitting diodes is shown, which realizes both approximately 100% photoluminescence quantum yield and approximately 100% up-conversion of the triplet to singlet excited state.
Abstract: Efficient organic light-emitting diodes have been developed using emitters containing rare metals, such as platinum and iridium complexes. However, there is an urgent need to develop emitters composed of more abundant materials. Here we show a thermally activated delayed fluorescence material for organic light-emitting diodes, which realizes both approximately 100% photoluminescence quantum yield and approximately 100% up-conversion of the triplet to singlet excited state. The material contains electron-donating diphenylaminocarbazole and electron-accepting triphenyltriazine moieties. The typical trade-off between effective emission and triplet-to-singlet up-conversion is overcome by fine-tuning the highest occupied molecular orbital and lowest unoccupied molecular orbital distributions. The nearly zero singlet–triplet energy gap, smaller than the thermal energy at room temperature, results in an organic light-emitting diode with external quantum efficiency of 29.6%. An external quantum efficiency of 41.5% is obtained when using an out-coupling sheet. The external quantum efficiency is 30.7% even at a high luminance of 3,000 cd m−2. Organic light-emitting diodes promise a more environment-friendly future for light sources, but many use rare metals. Here, the authors present an approach that achieves external quantum efficiency over 40% by realising 100% up-conversion from triplet to singlet excitons and thus 100% radiative emission.

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper proposes an effective method that uses simple patch-based priors for both the background and rain layers that removes rain streaks better than the existing methods qualitatively and quantitatively.
Abstract: This paper addresses the problem of rain streak removal from a single image. Rain streaks impair visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than the existing methods qualitatively and quantitatively. We overview our method and demonstrate its effectiveness over prior work on a number of examples.
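The layer decomposition described above can be summarized as a MAP-style objective with patch-based GMM priors; the form below is a schematic illustration (the weights $\gamma_1,\gamma_2$ and constraints are placeholders), not the paper's exact objective.

```latex
\min_{B,\,R}\;\; \tfrac{1}{2}\,\lVert O - B - R \rVert_2^2
\;-\; \gamma_1 \sum_i \log G_B\!\bigl(P_i B\bigr)
\;-\; \gamma_2 \sum_i \log G_R\!\bigl(P_i R\bigr),
\qquad O = \text{input image},\; B = \text{background layer},\; R = \text{rain layer},
```

where $P_i$ extracts the $i$-th image patch and $G_B$, $G_R$ are Gaussian mixture models learned over background and rain patches, respectively.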

Journal ArticleDOI
TL;DR: The value of an ultrasensitive single‐molecule array (Simoa) serum NfL (sNfL) assay in multiple sclerosis (MS) is explored.
Abstract: OBJECTIVE Neurofilament light chains (NfL) are unique to neuronal cells, are shed to the cerebrospinal fluid (CSF), and are detectable at low concentrations in peripheral blood. Various diseases causing neuronal damage have resulted in elevated CSF concentrations. We explored the value of an ultrasensitive single-molecule array (Simoa) serum NfL (sNfL) assay in multiple sclerosis (MS). METHODS sNfL levels were measured in healthy controls (HC, n = 254) and two independent MS cohorts: (1) cross-sectional with paired serum and CSF samples (n = 142), and (2) longitudinal with repeated serum sampling (n = 246, median follow-up = 3.1 years, interquartile range [IQR] = 2.0-4.0). We assessed their relation to concurrent clinical, imaging, and treatment parameters and to future clinical outcomes. RESULTS sNfL levels were higher in both MS cohorts than in HC (p < 0.001). We found a strong association between CSF NfL and sNfL (β = 0.589, p < 0.001). Patients with either brain or spinal (43.4pg/ml, IQR = 25.2-65.3) or both brain and spinal gadolinium-enhancing lesions (62.5pg/ml, IQR = 42.7-71.4) had higher sNfL than those without (29.6pg/ml, IQR = 20.9-41.8; β = 1.461, p = 0.005 and β = 1.902, p = 0.002, respectively). sNfL was independently associated with Expanded Disability Status Scale (EDSS) assessments (β = 1.105, p < 0.001) and presence of relapses (β = 1.430, p < 0.001). sNfL levels were lower under disease-modifying treatment (β = 0.818, p = 0.003). Patients with sNfL levels above the 80th, 90th, 95th, 97.5th, and 99th HC-based percentiles had higher risk of relapses (97.5th percentile: incidence rate ratio = 1.94, 95% confidence interval [CI] = 1.21-3.10, p = 0.006) and EDSS worsening (97.5th percentile: OR = 2.41, 95% CI = 1.07-5.42, p = 0.034). INTERPRETATION These results support the value of sNfL as a sensitive and clinically meaningful blood biomarker to monitor tissue damage and the effects of therapies in MS. Ann Neurol 2017;81:857-870.

Proceedings ArticleDOI
13 Jun 2015
TL;DR: This work argues that the conventional concept of processing-in-memory (PIM) can be a viable solution to achieve memory-capacity-proportional performance and designs a programmable PIM accelerator for large-scale graph processing called Tesseract.
Abstract: The explosion of digital data and the ever-growing need for fast data analysis have made in-memory big-data processing in computer systems increasingly important. In particular, large-scale graph processing is gaining attention due to its broad applicability from social science to machine learning. However, scalable hardware design that can efficiently process large graphs in main memory is still an open problem. Ideally, cost-effective and scalable graph processing systems can be realized by building a system whose performance increases proportionally with the sizes of graphs that can be stored in the system, which is extremely challenging in conventional systems due to severe memory bandwidth limitations. In this work, we argue that the conventional concept of processing-in-memory (PIM) can be a viable solution to achieve such an objective. The key modern enabler for PIM is the recent advancement of the 3D integration technology that facilitates stacking logic and memory dies in a single package, which was not available when the PIM concept was originally examined. In order to take advantage of such a new technology to enable memory-capacity-proportional performance, we design a programmable PIM accelerator for large-scale graph processing called Tesseract. Tesseract is composed of (1) a new hardware architecture that fully utilizes the available memory bandwidth, (2) an efficient method of communication between different memory partitions, and (3) a programming interface that reflects and exploits the unique hardware design. It also includes two hardware prefetchers specialized for memory access patterns of graph processing, which operate based on the hints provided by our programming model. Our comprehensive evaluations using five state-of-the-art graph processing workloads with large real-world graphs show that the proposed architecture improves average system performance by a factor of ten and achieves 87% average energy reduction over conventional systems.

Proceedings Article
01 Feb 2018
TL;DR: In this article, the authors identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.
Abstract: We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers.

Posted Content
TL;DR: Reptile is a remarkably simple metalearning algorithm that learns a parameter initialization that can be fine-tuned quickly on a new task; unlike MAML, it does not require differentiating through the optimization process, making it more suitable for optimization problems where many update steps are required.
Abstract: This paper considers metalearning problems, where there is a distribution of tasks, and we would like to obtain an agent that performs well (i.e., learns quickly) when presented with a previously unseen task sampled from this distribution. We present a remarkably simple metalearning algorithm called Reptile, which learns a parameter initialization that can be fine-tuned quickly on a new task. Reptile works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task. Unlike MAML, which also learns an initialization, Reptile doesn't require differentiating through the optimization process, making it more suitable for optimization problems where many update steps are required. We show that Reptile performs well on some well-established benchmarks for few-shot classification. We provide some theoretical analysis aimed at understanding why Reptile works.
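A minimal sketch of the update the abstract describes (sample a task, train on it with ordinary SGD, then move the initialization toward the adapted weights) is given below; `sample_task` is a hypothetical callable returning a loss function for one task, and this is not the authors' released code.

```python
import copy
import torch

def reptile_step(model, sample_task, inner_steps=5, inner_lr=0.01, meta_lr=0.1):
    """One Reptile meta-update: adapt a copy of the model to a sampled task,
    then move the initialization toward the adapted weights."""
    task_loss_fn = sample_task()                  # hypothetical: returns a callable loss(model)
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                  # ordinary SGD on the sampled task
        opt.zero_grad()
        task_loss_fn(adapted).backward()
        opt.step()
    with torch.no_grad():                         # theta <- theta + meta_lr * (theta_task - theta)
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))
```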

Journal ArticleDOI
TL;DR: This work mixed green quantum-dot-containing mesoporous silica nanocomposites with red PQDs, which can prevent the anion-exchange effect and increase thermal and photo stability, and applied the new PQD-based LEDs for backlight displays.
Abstract: All-inorganic CsPbX3 (X=I, Br, Cl) perovskite quantum dots (PQDs) have been investigated because of their optical properties, such as tunable wavelength, narrow band, and high quantum efficiency. These features have been used in light emitting diode (LED) devices. LED on-chip fabrication uses mixed green and red quantum dots with silicone gel. However, the ion-exchange effect widens the narrow emission spectrum. Quantum dots cannot be mixed because of anion exchange. We address this issue with a mesoporous PQD nanocomposite that can prevent ion exchange and increase stability. We mixed green quantum-dot-containing mesoporous silica nanocomposites with red PQDs, which can prevent the anion-exchange effect and increase thermal and photo stability. We applied the new PQD-based LEDs for backlight displays. We also used PQDs in an on-chip LED device. Our white LED device for backlight display passed through a color filter with an NTSC value of 113 % and Rec. 2020 of 85 %.

Journal ArticleDOI
TL;DR: The Common Core State Standards for Mathematics (CCSSM) was published in 2010 and includes a complete collection of standards that are published and reviewed as a ‘common core’ in which math skills have been extensively adopted.
Abstract: The Common Core State Standards for Mathematics (CCSSM) was published in 2010 and includes a complete collection of standards that are published and reviewed as a ‘common core’ in which math skills have been extensively adopted. The recommendations provided have been entirely or partially adapted by more than 47 states of the US. Authorities have committed an incredible amount of time, money and resources in creating these new standards, and additional effort will be required to implement them. The new math standards address two established issues in US education, the ordinary quality of mathematics learning and equal opportunity in U.S. schools. It is a fact that deprived students are most likely to have inexperienced or under-qualified teachers, and children from impoverished families are much less likely to have the same kind of supports or enrichment opportunities than their more fortunate peers. It is important for the authorities to produce and adapt material for the development of children in such a way that it can clearly address the content and practice of math for the CCSSM, and this material should be able to give learning and teaching methods which are in line with CCSSM. It is concluded from this research that there are challenges that have emerged for implementation of CCSSM, in which basic challenges include issues of quality, equality, challenges for math teachers, and teaching CCSSM to disabled students.

Journal ArticleDOI
29 Mar 2018-Nature
TL;DR: A magnetoencephalography system that can be worn like a helmet, allowing free and natural movement during scanning, with myriad applications such as characterization of the neurodevelopmental connectome, imaging subjects moving naturally in a virtual environment and investigating the pathophysiology of movement disorders.
Abstract: Imaging human brain function with techniques such as magnetoencephalography typically requires a subject to perform tasks while their head remains still within a restrictive scanner. This artificial environment makes the technique inaccessible to many people, and limits the experimental questions that can be addressed. For example, it has been difficult to apply neuroimaging to investigation of the neural substrates of cognitive development in babies and children, or to study processes in adults that require unconstrained head movement (such as spatial navigation). Here we describe a magnetoencephalography system that can be worn like a helmet, allowing free and natural movement during scanning. This is possible owing to the integration of quantum sensors, which do not rely on superconducting technology, with a system for nulling background magnetic fields. We demonstrate human electrophysiological measurement at millisecond resolution while subjects make natural movements, including head nodding, stretching, drinking and playing a ball game. Our results compare well to those of the current state-of-the-art, even when subjects make large head movements. The system opens up new possibilities for scanning any subject or patient group, with myriad applications such as characterization of the neurodevelopmental connectome, imaging subjects moving naturally in a virtual environment and investigating the pathophysiology of movement disorders.

Journal ArticleDOI
30 Sep 2017
TL;DR: In this article, the authors describe snowball sampling as a purposeful method of data collection in qualitative research, which can be applied to facilitate scientific research, provide community-based data, and hold health educational programs.
Abstract: Background and Objectives: Snowball sampling is applied when samples with the target characteristics are not easily accessible. This research describes snowball sampling as a purposeful method of data collection in qualitative research. Methods: This paper is a descriptive review of previous research papers. Data were gathered using English keywords, including “review,” “declaration,” “snowball,” and “chain referral,” as well as Persian keywords that are equivalents of the following: “purposeful sampling,” “snowball,” “qualitative research,” and “descriptive review.” The databases included Google Scholar, Scopus, Irandoc, ProQuest, Science Direct, SID, MagIran, Medline, and Cochrane. The search was limited to Persian and English articles written between 2005 and 2013. Results: The preliminary search yielded 433 articles from PubMed, 88 articles from Scopus, 1 article from SID, and 18 articles from MagIran. Among 125 articles, methodological and non-research articles were omitted. Finally, 11 relevant articles, which met the criteria, were selected for review. Conclusions: Different methods of snowball sampling can be applied to facilitate scientific research, provide community-based data, and hold health educational programs. Snowball sampling can be effectively used to analyze vulnerable groups or individuals under special care. In fact, it allows researchers to access susceptible populations. Thus, it is suggested to consider snowball sampling strategies while working with the attendees of educational programs or samples of research studies. Keywords: Purposeful Sampling, Snowball, Qualitative Research, Descriptive Review
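As a procedural illustration of the chain-referral recruitment discussed in this review, the sketch below expands a sample wave by wave from seed participants; `get_referrals` is a hypothetical callable standing in for asking each recruit to name further eligible contacts, and the wave limits are arbitrary.

```python
import random

def snowball_sample(seeds, get_referrals, waves=3, max_per_wave=None):
    """Chain-referral (snowball) recruitment: start from seed participants and
    repeatedly ask each recruit to refer further eligible contacts."""
    sampled, frontier = set(seeds), list(seeds)
    for _ in range(waves):
        # Collect new, not-yet-sampled contacts referred by the current wave.
        referred = list({c for p in frontier for c in get_referrals(p)} - sampled)
        if max_per_wave is not None and len(referred) > max_per_wave:
            referred = random.sample(referred, max_per_wave)
        sampled.update(referred)
        frontier = referred
        if not frontier:          # stop early if no new referrals were obtained
            break
    return sampled
```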

Journal ArticleDOI
01 Apr 2022-Science
TL;DR: The Telomere-to-Telomere (T2T) Consortium presented a complete 3.055 billion-base pair sequence of a human genome, T2T-CHM13, including gapless assemblies for all chromosomes except Y, corrected errors in the prior references, and introduced nearly 200 million base pairs of sequence containing 1956 gene predictions, 99 of which are predicted to be protein coding.
Abstract: Since its initial release in 2000, the human reference genome has covered only the euchromatic fraction of the genome, leaving important heterochromatic regions unfinished. Addressing the remaining 8% of the genome, the Telomere-to-Telomere (T2T) Consortium presents a complete 3.055 billion-base pair sequence of a human genome, T2T-CHM13, that includes gapless assemblies for all chromosomes except Y, corrects errors in the prior references, and introduces nearly 200 million base pairs of sequence containing 1956 gene predictions, 99 of which are predicted to be protein coding. The completed regions include all centromeric satellite arrays, recent segmental duplications, and the short arms of all five acrocentric chromosomes, unlocking these complex regions of the genome to variational and functional studies.