
Showing papers by "University of Toronto" published in 2015


Proceedings Article
01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
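
The update rule below is a minimal NumPy sketch of the kind of moment-based scheme the abstract describes; the toy quadratic objective, variable names, and hyper-parameter choices are our own illustrative stand-ins, not code from the paper.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adaptive estimates of the first and second moments of the gradient.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias correction for the running averages
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize a noisy quadratic, f(theta) = ||theta||^2 / 2.
theta = np.array([1.0, -2.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 2001):
    grad = theta + 0.01 * np.random.randn(2)   # stochastic gradient
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)                                    # ends up near the minimizer at the origin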

111,197 citations


Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
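
As a concrete illustration of the backpropagation step described above, the toy NumPy example below trains a two-layer network on random data; the network size, data, and learning rate are arbitrary choices of ours, not material from the review.

import numpy as np

# Each layer's parameters receive a gradient telling it how to change so the
# final loss decreases: the chain rule carries the error signal backwards.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                  # toy inputs
y = rng.normal(size=(32, 1))                  # toy regression targets

W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(8, 1)) * 0.1
for step in range(200):
    h = np.maximum(0, X @ W1)                 # hidden representation (ReLU)
    pred = h @ W2                             # output layer
    loss = ((pred - y) ** 2).mean()

    # Backpropagation: from the loss back through each layer.
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred
    d_h = d_pred @ W2.T * (h > 0)             # gradient through the ReLU
    dW1 = X.T @ d_h

    W1 -= 0.1 * dW1                           # gradient-descent updates
    W2 -= 0.1 * dW2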

46,982 citations


Journal ArticleDOI
TL;DR: This document provides updated normal values for all four cardiac chambers, including three-dimensional echocardiography and myocardial deformation, when possible, on the basis of considerably larger numbers of normal subjects, compiled from multiple databases.
Abstract: The rapid technological developments of the past decade and the changes in echocardiographic practice brought about by these developments have resulted in the need for updated recommendations to the previously published guidelines for cardiac chamber quantification, which was the goal of the joint writing group assembled by the American Society of Echocardiography and the European Association of Cardiovascular Imaging. This document provides updated normal values for all four cardiac chambers, including three-dimensional echocardiography and myocardial deformation, when possible, on the basis of considerably larger numbers of normal subjects, compiled from multiple databases. In addition, this document attempts to eliminate several minor discrepancies that existed between previously published guidelines.

11,568 citations


Proceedings Article
06 Jul 2015
TL;DR: An attention-based model that automatically learns to describe the content of images is introduced; it can be trained in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound.
Abstract: Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO.
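
A minimal NumPy sketch of the soft (deterministic) attention step the abstract alludes to follows; the feature shapes, projection matrices, and scoring function are simplified stand-ins of ours, not the paper's exact architecture.

import numpy as np

rng = np.random.default_rng(0)
regions = rng.normal(size=(196, 512))      # e.g. 14x14 grid of conv features
h_prev = rng.normal(size=(256,))           # previous decoder hidden state

W_r = rng.normal(size=(512, 128)) * 0.05   # hypothetical projection matrices
W_h = rng.normal(size=(256, 128)) * 0.05
w_a = rng.normal(size=(128,)) * 0.05

scores = np.tanh(regions @ W_r + h_prev @ W_h) @ w_a   # one score per image region
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                        # attention weights sum to 1
context = alpha @ regions                   # weighted image summary, shape (512,)
# `context` would be fed with the previous word embedding into the next decoder step.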

6,485 citations


Journal ArticleDOI
TL;DR: The use of reliability and validity is common in quantitative research, and, as discussed by the authors, these concepts are now being reconsidered in the qualitative research paradigm; triangulation can also illuminate some ways to test or maximize the validity and reliability of a qualitative study.
Abstract: The use of reliability and validity is common in quantitative research, and these concepts are now being reconsidered in the qualitative research paradigm. Since reliability and validity are rooted in the positivist perspective, they should be redefined for their use in a naturalistic approach. Just as reliability and validity as used in quantitative research provide a springboard for examining what these two terms mean in the qualitative research paradigm, triangulation as used in quantitative research to test reliability and validity can also illuminate some ways to test or maximize the validity and reliability of a qualitative study. Therefore reliability, validity and triangulation, if they are to be relevant research concepts, particularly from a qualitative point of view, have to be redefined in order to reflect the multiple ways of establishing truth. Key words: Reliability, Validity, Triangulation, Construct, Qualitative, and Quantitative. This article discusses the use of reliability and validity in the qualitative research paradigm. First, the meanings of quantitative and qualitative research are discussed. Secondly, reliability and validity as used in quantitative research are discussed as a way of providing a springboard to examining what these two terms mean and how they can be tested in the qualitative research paradigm. This paper concludes by drawing upon the use of triangulation in the two paradigms (quantitative and qualitative) to show how the changes have influenced our understanding of reliability, validity and triangulation in qualitative studies.

6,438 citations


Journal ArticleDOI
Mohsen Naghavi, Haidong Wang, Rafael Lozano, Adrian Davis +728 more — Institutions (294)
TL;DR: In the Global Burden of Disease Study 2013 (GBD 2013), the authors used the GBD 2010 methods, with some refinements to improve accuracy, applied to an updated database of vital registration, survey, and census data.

5,792 citations


Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013), as discussed by the authors, provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.

5,668 citations


Journal ArticleDOI
Theo Vos, Ryan M Barber, Brad Bell, Amelia Bertozzi-Villa +686 more — Institutions (287)
TL;DR: In the Global Burden of Disease Study 2013 (GBD 2013), as mentioned in this paper, the authors estimated incidence, prevalence, and years lived with disability for acute and chronic diseases and injuries in 188 countries between 1990 and 2013.

4,510 citations


Journal ArticleDOI
TL;DR: In patients receiving intravenous t-PA for acute ischemic stroke, thrombectomy with the use of a stent retriever within 6 hours after onset improved functional outcomes at 90 days.
Abstract: BACKGROUND Among patients with acute ischemic stroke due to occlusions in the proximal anterior intracranial circulation, less than 40% regain functional independence when treated with intravenous tissue plasminogen activator (t-PA) alone. Thrombectomy with the use of a stent retriever, in addition to intravenous t-PA, increases reperfusion rates and may improve long-term functional outcome. METHODS We randomly assigned eligible patients with stroke who were receiving or had received intravenous t-PA to continue with t-PA alone (control group) or to undergo endovascular thrombectomy with the use of a stent retriever within 6 hours after symptom onset (intervention group). Patients had confirmed occlusions in the proximal anterior intracranial circulation and an absence of large ischemic-core lesions. The primary outcome was the severity of global disability at 90 days, as assessed by means of the modified Rankin scale (with scores ranging from 0 [no symptoms] to 6 [death]). RESULTS The study was stopped early because of efficacy. At 39 centers, 196 patients underwent randomization (98 patients in each group). In the intervention group, the median time from qualifying imaging to groin puncture was 57 minutes, and the rate of substantial reperfusion at the end of the procedure was 88%. Thrombectomy with the stent retriever plus intravenous t-PA reduced disability at 90 days over the entire range of scores on the modified Rankin scale (P<0.001). The rate of functional independence (modified Rankin scale score, 0 to 2) was higher in the intervention group than in the control group (60% vs. 35%, P<0.001). There were no significant between-group differences in 90-day mortality (9% vs. 12%, P = 0.50) or symptomatic intracranial hemorrhage (0% vs. 3%, P = 0.12). CONCLUSIONS In patients receiving intravenous t-PA for acute ischemic stroke due to occlusions in the proximal anterior intracranial circulation, thrombectomy with a stent retriever within 6 hours after onset improved functional outcomes at 90 days. (Funded by Covidien; SWIFT PRIME ClinicalTrials.gov number, NCT01657461.)

4,101 citations


Journal ArticleDOI
30 Jan 2015-Science
TL;DR: An antisolvent vapor-assisted crystallization approach is reported that enables the creation of sizable crack-free MAPbX3 single crystals with volumes exceeding 100 cubic millimeters, which enabled a detailed characterization of their optical and charge transport characteristics.
Abstract: The fundamental properties and ultimate performance limits of organolead trihalide MAPbX3 (MA = CH3NH3+; X = Br− or I−) perovskites remain obscured by extensive disorder in polycrystalline MAPbX3 films. We report an antisolvent vapor-assisted crystallization approach that enables us to create sizable crack-free MAPbX3 single crystals with volumes exceeding 100 cubic millimeters. These large single crystals enabled a detailed characterization of their optical and charge transport characteristics. We observed exceptionally low trap-state densities on the order of 10^9 to 10^10 per cubic centimeter in MAPbX3 single crystals (comparable to the best photovoltaic-quality silicon) and charge carrier diffusion lengths exceeding 10 micrometers. These results were validated with density functional theory calculations.

3,939 citations


Journal ArticleDOI
TL;DR: The process of developing specific advice for the reporting of systematic reviews that incorporate network meta-analyses is described, and the guidance generated from this process is presented.
Abstract: The PRISMA statement is a reporting guideline designed to improve the completeness of reporting of systematic reviews and meta-analyses. Authors have used this guideline worldwide to prepare their reviews for publication. In the past, these reports typically compared 2 treatment alternatives. With the evolution of systematic reviews that compare multiple treatments, some of them only indirectly, authors face novel challenges for conducting and reporting their reviews. This extension of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement was developed specifically to improve the reporting of systematic reviews incorporating network meta-analyses.

Journal ArticleDOI
29 Jan 2015-Nature
TL;DR: It is shown that human-papillomavirus-associated tumours are dominated by helical domain mutations of the oncogene PIK3CA, novel alterations involving loss of TRAF3, and amplification of the cell cycle gene E2F1.
Abstract: The Cancer Genome Atlas profiled 279 head and neck squamous cell carcinomas (HNSCCs) to provide a comprehensive landscape of somatic genomic alterations. Here we show that human-papillomavirus-associated tumours are dominated by helical domain mutations of the oncogene PIK3CA, novel alterations involving loss of TRAF3, and amplification of the cell cycle gene E2F1. Smoking-related HNSCCs demonstrate near universal loss-of-function TP53 mutations and CDKN2A inactivation with frequent copy number alterations including amplification of 3q26/28 and 11q13/22. A subgroup of oral cavity tumours with favourable clinical outcomes displayed infrequent copy number alterations in conjunction with activating mutations of HRAS or PIK3CA, coupled with inactivating mutations of CASP8, NOTCH1 and TP53. Other distinct subgroups contained loss-of-function alterations of the chromatin modifier NSD1, WNT pathway genes AJUBA and FAT1, and activation of oxidative stress factor NFE2L2, mainly in laryngeal tumours. Therapeutic candidate alterations were identified in most HNSCCs.

Journal ArticleDOI
TL;DR: This document contains the checklist and explanatory and elaboration information to enhance the use of the RECORD checklist, and examples of good reporting for each RECORD checklist item are also included herein.
Abstract: Routinely collected health data, obtained for administrative and clinical purposes without specific a priori research goals, are increasingly used for research. The rapid evolution and availability of these data have revealed issues not addressed by existing reporting guidelines, such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE). The REporting of studies Conducted using Observational Routinely collected health Data (RECORD) statement was created to fill these gaps. RECORD was created as an extension to the STROBE statement to address reporting items specific to observational studies using routinely collected health data. RECORD consists of a checklist of 13 items related to the title, abstract, introduction, methods, results, and discussion section of articles, and other information required for inclusion in such research reports. This document contains the checklist and explanatory and elaboration information to enhance the use of the checklist. Examples of good reporting for each RECORD checklist item are also included herein. This document, as well as the accompanying website and message board (http://www.record-statement.org), will enhance the implementation and understanding of RECORD. Through implementation of RECORD, authors, journals editors, and peer reviewers can encourage transparency of research reporting.


01 Jan 2015
TL;DR: A method for learning siamese neural networks, which employ a unique structure to naturally rank similarity between inputs, is presented; using a convolutional architecture, it achieves strong results that exceed those of other deep learning models, with near state-of-the-art performance on one-shot classification tasks.
Abstract: The process of learning good features for machine learning applications can be very computationally expensive and may prove difficult in cases where little data is available. A prototypical example of this is the one-shot learning setting, in which we must correctly make predictions given only a single example of each new class. In this paper, we explore a method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs. Once a network has been tuned, we can then capitalize on powerful discriminative features to generalize the predictive power of the network not just to new data, but to entirely new classes from unknown distributions. Using a convolutional architecture, we are able to achieve strong results which exceed those of other deep learning models with near state-of-the-art performance on one-shot classification tasks.
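
The sketch below illustrates, in NumPy, the core siamese idea of scoring similarity with a shared embedding and classifying a one-shot query by its most similar support example; the embedding, similarity function, and data are hypothetical simplifications, not the convolutional architecture used in the paper.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 64)) * 0.05       # shared ("twin") embedding weights

def embed(x):
    # The same embedding function is applied to both inputs of a pair.
    return np.maximum(0, x @ W)

def similarity(x1, x2):
    # L1 distance between embeddings mapped to (0, 1) as a similarity score.
    d = np.abs(embed(x1) - embed(x2)).sum()
    return 1.0 / (1.0 + d)

# One-shot classification: pick the support class whose single example is most
# similar to the query.
support = {c: rng.random(784) for c in ["A", "B", "C"]}   # one example per class
query = rng.random(784)
pred = max(support, key=lambda c: similarity(query, support[c]))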

Journal ArticleDOI
TL;DR: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer.
Abstract: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer. All the data from SDSS-III are now made public. In particular, this paper describes Data Release 11 (DR11) including all data acquired through 2013 July, and Data Release 12 (DR12) adding data acquired through 2014 July (including all data included in previous data releases), marking the end of SDSS-III observing. Relative to our previous public release (DR10), DR12 adds one million new spectra of galaxies and quasars from the Baryon Oscillation Spectroscopic Survey (BOSS) over an additional 3000 sq. deg of sky, more than triples the number of H-band spectra of stars as part of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE), and includes repeated accurate radial velocity measurements of 5500 stars from the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS). The APOGEE outputs now include measured abundances of 15 different elements for each star. In total, SDSS-III added 2350 sq. deg of ugriz imaging; 155,520 spectra of 138,099 stars as part of the Sloan Exploration of Galactic Understanding and Evolution 2 (SEGUE-2) survey; 2,497,484 BOSS spectra of 1,372,737 galaxies, 294,512 quasars, and 247,216 stars over 9376 sq. deg; 618,080 APOGEE spectra of 156,593 stars; and 197,040 MARVELS spectra of 5,513 stars. Since its first light in 1998, SDSS has imaged over 1/3 of the Celestial sphere in five bands and obtained over five million astronomical spectra.

Journal ArticleDOI
Christina Fitzmaurice, Daniel Dicker, Amanda W Pain, Hannah Hamavid +many more — Institutions (84), including the Institute for Health Metrics and Evaluation, the University of Washington, and the University of Toronto
TL;DR: To estimate mortality, incidence, years lived with disability, years of life lost, and disability-adjusted life-years for 28 cancers in 188 countries by sex from 1990 to 2013, the general methodology of the Global Burden of Disease 2013 study was used.
Abstract: Importance Cancer is among the leading causes of death worldwide. Current estimates of cancer burden in individual countries and regions are necessary to inform local cancer control strategies. Objective To estimate mortality, incidence, years lived with disability (YLDs), years of life lost (YLLs), and disability-adjusted life-years (DALYs) for 28 cancers in 188 countries by sex from 1990 to 2013. Evidence Review The general methodology of the Global Burden of Disease (GBD) 2013 study was used. Cancer registries were the source for cancer incidence data as well as mortality incidence (MI) ratios. Sources for cause of death data include vital registration system data, verbal autopsy studies, and other sources. The MI ratios were used to transform incidence data to mortality estimates and cause of death estimates to incidence estimates. Cancer prevalence was estimated using MI ratios as surrogates for survival data; YLDs were calculated by multiplying prevalence estimates with disability weights, which were derived from population-based surveys; YLLs were computed by multiplying the number of estimated cancer deaths at each age with a reference life expectancy; and DALYs were calculated as the sum of YLDs and YLLs. Findings In 2013 there were 14.9 million incident cancer cases, 8.2 million deaths, and 196.3 million DALYs. Prostate cancer was the leading cause for cancer incidence (1.4 million) for men and breast cancer for women (1.8 million). Tracheal, bronchus, and lung (TBL) cancer was the leading cause for cancer death in men and women, with 1.6 million deaths. For men, TBL cancer was the leading cause of DALYs (24.9 million). For women, breast cancer was the leading cause of DALYs (13.1 million). Age-standardized incidence rates (ASIRs) per 100 000 and age-standardized death rates (ASDRs) per 100 000 for both sexes in 2013 were higher in developing vs developed countries for stomach cancer (ASIR, 17 vs 14; ASDR, 15 vs 11), liver cancer (ASIR, 15 vs 7; ASDR, 16 vs 7), esophageal cancer (ASIR, 9 vs 4; ASDR, 9 vs 4), cervical cancer (ASIR, 8 vs 5; ASDR, 4 vs 2), lip and oral cavity cancer (ASIR, 7 vs 6; ASDR, 2 vs 2), and nasopharyngeal cancer (ASIR, 1.5 vs 0.4; ASDR, 1.2 vs 0.3). Between 1990 and 2013, ASIRs for all cancers combined (except nonmelanoma skin cancer and Kaposi sarcoma) increased by more than 10% in 113 countries and decreased by more than 10% in 12 of 188 countries. Conclusions and Relevance Cancer poses a major threat to public health worldwide, and incidence rates have increased in most countries since 1990. The trend is a particular threat to developing nations with health systems that are ill-equipped to deal with complex and expensive cancer treatments. The annual update on the Global Burden of Cancer will provide all stakeholders with timely estimates to guide policy efforts in cancer prevention, screening, treatment, and palliation.
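
The arithmetic behind YLDs, YLLs, and DALYs described in the Evidence Review is summarized in the short Python example below; all numbers are made up for illustration and are not estimates from the study.

# YLD = prevalence x disability weight, YLL = deaths x reference life expectancy
# at the age of death, DALY = YLD + YLL (for one hypothetical age group and site).
prevalence = 10_000                 # hypothetical prevalent cases
disability_weight = 0.29            # hypothetical weight from population surveys
deaths = 1_200                      # hypothetical deaths in the age group
reference_life_expectancy = 35.0    # hypothetical remaining years at age of death

ylds = prevalence * disability_weight        # 2,900 years lived with disability
ylls = deaths * reference_life_expectancy    # 42,000 years of life lost
dalys = ylds + ylls                          # 44,900 disability-adjusted life-years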

Journal ArticleDOI
11 Dec 2015-Science
TL;DR: A computational model is described that learns in a similar fashion and does so better than current deep learning algorithms and can generate new letters of the alphabet that look “right” as judged by Turing-like tests of the model's output in comparison to what real humans produce.
Abstract: People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms-for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.
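
The snippet below sketches one-shot classification under a Bayesian criterion in the loosest sense: each class is scored by the log-probability of the query under a simple model built from its single example. It is our own toy stand-in; the paper's model induces stroke-level programs, which is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
examples = {c: rng.random(100) for c in ["alpha", "beta", "gamma"]}  # one image per class
query = examples["beta"] + 0.05 * rng.normal(size=100)               # noisy new instance

def log_likelihood(query, example, sigma=0.1):
    # Gaussian pixel model around the single example: a crude stand-in for
    # "how well does this class's generative model explain the query?"
    return -0.5 * np.sum((query - example) ** 2) / sigma ** 2

pred = max(examples, key=lambda c: log_likelihood(query, examples[c]))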

Journal ArticleDOI
TL;DR: This work shows that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery.
Abstract: The binding specificities of RNA- and DNA-binding proteins are determined from experimental data using a ‘deep learning’ approach.
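
As a rough illustration of how a convolutional model can read sequence specificity from data, the NumPy sketch below one-hot encodes a DNA string and scans it with a motif-like filter; the sequence, filter, and scoring are hypothetical and are not the DeepBind architecture.

import numpy as np

BASES = "ACGT"

def one_hot(seq):
    x = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        x[i, BASES.index(b)] = 1.0
    return x

seq = "TTACGTAGGCTACGTAA"
x = one_hot(seq)                      # shape (length, 4)
motif = one_hot("ACGTA")              # hypothetical 5-bp filter (learned in practice)

# Convolution: score every 5-bp window, then max-pool to a single binding score.
scores = np.array([np.sum(x[i:i + 5] * motif) for i in range(len(seq) - 4)])
binding_score = scores.max()          # 5.0 here, since the motif occurs exactly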

Proceedings Article
06 Jul 2015
TL;DR: In this paper, an encoder LSTM is used to map an input video sequence into a fixed length representation, which is then decoded using single or multiple decoder Long Short Term Memory (LSTM) networks to perform different tasks.
Abstract: We use Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations ("percepts") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.
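
A compact PyTorch sketch of the encoder-decoder setup described in the abstract is given below; the layer sizes, the zero-input (unconditioned) decoder, and the reconstruction loss are our own simplified choices, not the paper's exact configuration.

import torch
import torch.nn as nn

feat_dim, hidden = 512, 256
encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
decoder = nn.LSTM(feat_dim, hidden, batch_first=True)
readout = nn.Linear(hidden, feat_dim)        # maps decoder states back to frame features

frames = torch.randn(4, 16, feat_dim)        # 4 clips, 16 frames of "percept" features
_, (h, c) = encoder(frames)                  # fixed-length representation of each clip

# Unconditioned decoder: unroll from the encoded state alone to reconstruct the input
# (predicting future frames would use the same machinery with future targets).
dec_in = torch.zeros(4, 16, feat_dim)
dec_out, _ = decoder(dec_in, (h, c))
recon = readout(dec_out)                     # (4, 16, feat_dim)
loss = ((recon - frames) ** 2).mean()        # reconstruction objective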

Proceedings ArticleDOI
07 Dec 2015
TL;DR: The authors align books to their movie releases to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in the current datasets, and propose a context-aware CNN to combine information from multiple sources.
Abstract: Books are a rich source of both fine-grained information, such as what a character, an object or a scene looks like, as well as high-level semantics, such as what someone is thinking and feeling and how these states evolve through a story. This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in the current datasets. To align movies and books we propose a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for.
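
The toy NumPy example below illustrates the similarity-based alignment step the abstract describes, with random vectors standing in for the learned sentence and video-text embeddings; the dimensions and the per-clip argmax are our simplifications.

import numpy as np

rng = np.random.default_rng(0)
clip_emb = rng.normal(size=(20, 300))        # 20 movie clips in a shared space
sent_emb = rng.normal(size=(500, 300))       # 500 book sentences in the same space

clip_n = clip_emb / np.linalg.norm(clip_emb, axis=1, keepdims=True)
sent_n = sent_emb / np.linalg.norm(sent_emb, axis=1, keepdims=True)
sim = clip_n @ sent_n.T                      # (20, 500) cosine similarity matrix
alignment = sim.argmax(axis=1)               # best-scoring sentence for each clip
# The paper goes further and smooths such scores with a context-aware model
# rather than taking an independent per-clip argmax.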

Journal ArticleDOI
TL;DR: Among patients with type 2 diabetes and established cardiovascular disease, adding sitagliptin to usual care did not appear to increase the risk of major adverse cardiovascular events, hospitalization for heart failure, or other adverse events.
Abstract: Background Data are lacking on the long-term effect on cardiovascular events of adding sitagliptin, a dipeptidyl peptidase 4 inhibitor, to usual care in patients with type 2 diabetes and cardiovascular disease. Methods In this randomized, double-blind study, we assigned 14,671 patients to add either sitagliptin or placebo to their existing therapy. Open-label use of antihyperglycemic therapy was encouraged as required, aimed at reaching individually appropriate glycemic targets in all patients. To determine whether sitagliptin was noninferior to placebo, we used a relative risk of 1.3 as the marginal upper boundary. The primary cardiovascular outcome was a composite of cardiovascular death, nonfatal myocardial infarction, nonfatal stroke, or hospitalization for unstable angina. Results During a median follow-up of 3.0 years, there was a small difference in glycated hemoglobin levels (least-squares mean difference for sitagliptin vs. placebo, -0.29 percentage points; 95% confidence interval [CI], -0.32 to -0.27). Overall, the primary outcome occurred in 839 patients in the sitagliptin group (11.4%; 4.06 per 100 person-years) and 851 patients in the placebo group (11.6%; 4.17 per 100 person-years). Sitagliptin was noninferior to placebo for the primary composite cardiovascular outcome (hazard ratio, 0.98; 95% CI, 0.88 to 1.09; P<0.001 for noninferiority), and rates of hospitalization for heart failure did not differ between the two groups. Conclusions Among patients with type 2 diabetes and established cardiovascular disease, adding sitagliptin to usual care did not appear to increase the risk of major adverse cardiovascular events, hospitalization for heart failure, or other adverse events. (Funded by Merck Sharp & Dohme; TECOS ClinicalTrials.gov number, NCT00790205.)

Journal ArticleDOI
TL;DR: An approach combining the analysis of signature protein families and features of the architecture of cas loci that unambiguously partitions most CRISPR–cas loci into distinct classes, types and subtypes is presented.
Abstract: The evolution of CRISPR-cas loci, which encode adaptive immune systems in archaea and bacteria, involves rapid changes, in particular numerous rearrangements of the locus architecture and horizontal transfer of complete loci or individual modules. These dynamics complicate straightforward phylogenetic classification, but here we present an approach combining the analysis of signature protein families and features of the architecture of cas loci that unambiguously partitions most CRISPR-cas loci into distinct classes, types and subtypes. The new classification retains the overall structure of the previous version but is expanded to now encompass two classes, five types and 16 subtypes. The relative stability of the classification suggests that the most prevalent variants of CRISPR-Cas systems are already known. However, the existence of rare, currently unclassifiable variants implies that additional types and subtypes remain to be characterized.

Journal ArticleDOI
TL;DR: Current data on the clinical validity and utility of tumour-infiltrating lymphocytes (TILs) in breast cancer (BC) are reviewed in an effort to foster better knowledge and insight in this rapidly evolving field, and to develop a standardized methodology for visual assessment on H&E sections.

Journal ArticleDOI
TL;DR: The Community Earth System Model (CESM) community designed the CESM Large Ensemble with the explicit goal of enabling assessment of climate change in the presence of internal climate variability as discussed by the authors.
Abstract: While internal climate variability is known to affect climate projections, its influence is often underappreciated and confused with model error. Why? In general, modeling centers contribute a small number of realizations to international climate model assessments [e.g., phase 5 of the Coupled Model Intercomparison Project (CMIP5)]. As a result, model error and internal climate variability are difficult, and at times impossible, to disentangle. In response, the Community Earth System Model (CESM) community designed the CESM Large Ensemble (CESM-LE) with the explicit goal of enabling assessment of climate change in the presence of internal climate variability. All CESM-LE simulations use a single CMIP5 model (CESM with the Community Atmosphere Model, version 5). The core simulations replay the twentieth to twenty-first century (1920–2100) 30 times under historical and representative concentration pathway 8.5 external forcing with small initial condition differences. Two companion 1000+-yr-long preindustrial control simulations allow assessment of internal climate variability in the absence of climate change.
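
The synthetic NumPy example below illustrates why a large initial-condition ensemble separates forced change from internal variability: the ensemble mean converges to the shared forced trend while the across-member spread measures the variability. The numbers are invented for illustration and are not CESM output.

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1920, 2101)
forced = 0.01 * (years - 1920)                 # common forced warming trend (deg C)
# 30 members share the forcing but differ in their internal variability.
members = forced + 0.15 * rng.normal(size=(30, years.size))

forced_estimate = members.mean(axis=0)         # converges toward the forced trend
internal_spread = members.std(axis=0)          # ~0.15, the internal variability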

Journal ArticleDOI
TL;DR: These guidelines provide an update of the previous IFCN report on "Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: basic principles and procedures for routine clinical application" and include some recent extensions and developments.

01 Oct 2015
TL;DR: This is the eighteenth in a series of evaluated sets of rate constants, photochemical cross sections, heterogeneous parameters, and thermochemical parameters compiled by the NASA Panel for Data Evaluation as mentioned in this paper.
Abstract: This is the eighteenth in a series of evaluated sets of rate constants, photochemical cross sections, heterogeneous parameters, and thermochemical parameters compiled by the NASA Panel for Data Evaluation. The data are used primarily to model stratospheric and upper tropospheric processes, with particular emphasis on the ozone layer and its possible perturbation by anthropogenic and natural phenomena. The evaluation is available in electronic form from the following Internet URL: http://jpldataeval.jpl.nasa.gov/

Proceedings Article
07 Dec 2015
TL;DR: This article used the continuity of text from books to train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage, which can produce highly generic sentence representations that are robust and perform well in practice.
Abstract: We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage. Sentences that share semantic and syntactic properties are thus mapped to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking, question-type classification and 4 benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf encoder that can produce highly generic sentence representations that are robust and perform well in practice.
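
A minimal PyTorch sketch of the encoder-decoder training signal described above follows; the vocabulary size, dimensions, and random token batches are placeholders of ours, and real training would use consecutive sentences from a book corpus.

import torch
import torch.nn as nn

vocab, emb_dim, hidden = 20000, 256, 512
embed = nn.Embedding(vocab, emb_dim)
encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
dec_prev = nn.LSTM(emb_dim, hidden, batch_first=True)   # decodes the previous sentence
dec_next = nn.LSTM(emb_dim, hidden, batch_first=True)   # decodes the next sentence
to_vocab = nn.Linear(hidden, vocab)

s_i = torch.randint(0, vocab, (8, 12))       # batch of 8 centre sentences, 12 tokens each
s_prev = torch.randint(0, vocab, (8, 12))    # their preceding sentences
s_next = torch.randint(0, vocab, (8, 12))    # their following sentences

_, (h, c) = encoder(embed(s_i))              # the encoder state is the sentence vector

loss = 0.0
for dec, target in ((dec_prev, s_prev), (dec_next, s_next)):
    out, _ = dec(embed(target[:, :-1]), (h, c))          # teacher-forced decoding
    logits = to_vocab(out)                                # predict the next token
    loss = loss + nn.functional.cross_entropy(
        logits.reshape(-1, vocab), target[:, 1:].reshape(-1))
# After training, the encoder state h serves as the generic sentence representation.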

Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2013 (GBD 2013), as mentioned in this paper, provides a timely opportunity to update the comparative risk assessment with new data for exposure, relative risks, and evidence on the appropriate counterfactual risk distribution.

Journal ArticleDOI
03 Feb 2015-JAMA
TL;DR: In this article, the effectiveness and safety of transfusing patients with severe trauma and major bleeding using plasma, platelets, and red blood cells in a 1:1:1 ratio compared with a 1:1:2 ratio were evaluated.
Abstract: Importance Severely injured patients experiencing hemorrhagic shock often require massive transfusion. Earlier transfusion with higher blood product ratios (plasma, platelets, and red blood cells), defined as damage control resuscitation, has been associated with improved outcomes; however, there have been no large multicenter clinical trials. Objective To determine the effectiveness and safety of transfusing patients with severe trauma and major bleeding using plasma, platelets, and red blood cells in a 1:1:1 ratio compared with a 1:1:2 ratio. Design, Setting, and Participants Pragmatic, phase 3, multisite, randomized clinical trial of 680 severely injured patients who arrived at 1 of 12 level I trauma centers in North America directly from the scene and were predicted to require massive transfusion between August 2012 and December 2013. Interventions Blood product ratios of 1:1:1 (338 patients) vs 1:1:2 (342 patients) during active resuscitation in addition to all local standard-of-care interventions (uncontrolled). Main Outcomes and Measures Primary outcomes were 24-hour and 30-day all-cause mortality. Prespecified ancillary outcomes included time to hemostasis, blood product volumes transfused, complications, incidence of surgical procedures, and functional status. Results No significant differences were detected in mortality at 24 hours (12.7% in 1:1:1 group vs 17.0% in 1:1:2 group; difference, −4.2% [95% CI, −9.6% to 1.1%]; P = .12) or at 30 days (22.4% vs 26.1%, respectively; difference, −3.7% [95% CI, −10.2% to 2.7%]; P = .26). Exsanguination, which was the predominant cause of death within the first 24 hours, was significantly decreased in the 1:1:1 group (9.2% vs 14.6% in 1:1:2 group; difference, −5.4% [95% CI, −10.4% to −0.5%]; P = .03). More patients in the 1:1:1 group achieved hemostasis than in the 1:1:2 group (86% vs 78%, respectively; P = .006). Despite the 1:1:1 group receiving more plasma (median of 7 U vs 5 U) and platelets, complication rates did not differ significantly between the groups. Conclusions and Relevance Among patients with severe trauma and major bleeding, early administration of plasma, platelets, and red blood cells in a 1:1:1 ratio compared with a 1:1:2 ratio did not result in significant differences in mortality at 24 hours or at 30 days. However, more patients in the 1:1:1 group achieved hemostasis and fewer experienced death due to exsanguination by 24 hours. Even though there was an increased use of plasma and platelets transfused in the 1:1:1 group, no other safety differences were identified between the 2 groups. Trial Registration clinicaltrials.gov Identifier: NCT01545232