Institution
University of Toronto
Education • Toronto, Ontario, Canada
About: The University of Toronto is an education organization based in Toronto, Ontario, Canada. It is known for its research contributions in the topics Population & Health care. The organization has 126,067 authors who have published 294,940 publications receiving 13,536,856 citations. The organization is also known as UToronto and U of T.
Papers published on a yearly basis
Papers
TL;DR: Reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching, and motion tracking.
2,222 citations
15 Apr 2009
TL;DR: A new learning algorithm for Boltzmann machines that contain many layers of hidden variables, made more efficient by a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottom-up pass.
Abstract: We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and data-independent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottom-up pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.
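The variational inference described in the abstract can be illustrated with a toy mean-field loop for a two-hidden-layer Boltzmann machine: a single bottom-up pass initializes the hidden-unit probabilities, then damped fixed-point updates refine them using both bottom-up and top-down input. This is only a structural sketch (pure Python, tiny random weights, biases omitted); the layer sizes and the weight-doubling in the initialization pass are illustrative assumptions, not the paper's exact recipe.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # Multiply matrix W (rows x cols) by vector v.
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

def matvec_T(W, h):
    # Multiply the transpose of W by vector h.
    cols = len(W[0])
    return [sum(W[i][j] * h[i] for i in range(len(W))) for j in range(cols)]

def mean_field(v, W1, W2, steps=10):
    # Single bottom-up pass to initialize the variational parameters
    # (inputs doubled to roughly compensate for the missing top-down
    # signal -- an assumption in this sketch).
    mu1 = [sigmoid(2 * a) for a in matvec(W1, v)]
    mu2 = [sigmoid(a) for a in matvec(W2, mu1)]
    # Mean-field fixed-point updates: each hidden layer's probabilities
    # depend on the layer below (bottom-up) and above (top-down).
    for _ in range(steps):
        bottom_up = matvec(W1, v)
        top_down = matvec_T(W2, mu2)
        mu1 = [sigmoid(b + t) for b, t in zip(bottom_up, top_down)]
        mu2 = [sigmoid(a) for a in matvec(W2, mu1)]
    return mu1, mu2

# Toy sizes: 4 visible units, hidden layers of 3 and 2 units.
W1 = [[random.uniform(-0.1, 0.1) for _ in range(4)] for _ in range(3)]
W2 = [[random.uniform(-0.1, 0.1) for _ in range(3)] for _ in range(2)]
mu1, mu2 = mean_field([1.0, 0.0, 1.0, 0.0], W1, W2)
```

The two techniques the abstract contrasts split cleanly: this mean-field loop handles the data-dependent expectations, while the data-independent expectations would come from separate persistent Markov chains.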
2,221 citations
Monash University, University of Ottawa, University of Amsterdam, University of Paris, Bond University, University of Texas Health Science Center at San Antonio, American University of Beirut, Oregon Health & Science University, University of York, Ottawa Hospital Research Institute, University of Southern Denmark, Johns Hopkins University, Brigham and Women's Hospital, Indiana University, University of Bristol, University College London, University of Toronto
TL;DR: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) statement was developed to facilitate transparent and complete reporting of systematic reviews, and has been updated to reflect recent advances in systematic review methodology and terminology.
Abstract: The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.
2,217 citations
06 Jul 2015
TL;DR: An encoder LSTM maps an input video sequence into a fixed-length representation, which is then decoded using single or multiple decoder Long Short Term Memory (LSTM) networks to perform different tasks.
Abstract: We use Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed-length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations ("percepts") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.
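The encoder-decoder wiring the abstract describes can be sketched in a few lines: the encoder folds a sequence of frames into one fixed-length state, and the decoder unrolls that state to emit outputs, optionally conditioning each step on its own previous output. For brevity this sketch substitutes a plain tanh recurrent cell for an LSTM and uses random weights; the dimensions and weight names are assumptions of this illustration, not the paper's model.

```python
import math
import random

random.seed(0)

def rnn_step(state, x, W_s, W_x):
    # One step of a toy tanh recurrent cell (stand-in for an LSTM cell).
    return [math.tanh(sum(ws * s for ws, s in zip(row_s, state)) +
                      sum(wx * xi for wx, xi in zip(row_x, x)))
            for row_s, row_x in zip(W_s, W_x)]

def encode(sequence, W_s, W_x, dim):
    # Fold the whole input sequence into a single fixed-length state.
    state = [0.0] * dim
    for frame in sequence:
        state = rnn_step(state, frame, W_s, W_x)
    return state

def decode(state, W_s, W_x, W_out, steps, conditioned=True):
    # Unroll the decoder from the encoder's final state.  When
    # `conditioned`, each step feeds back the previous output -- one of
    # the design choices the abstract explores.
    outputs, inp = [], [0.0] * len(W_x[0])
    for _ in range(steps):
        state = rnn_step(state, inp, W_s, W_x)
        out = [sum(w * s for w, s in zip(row, state)) for row in W_out]
        outputs.append(out)
        if conditioned:
            inp = out
    return outputs

# Toy dimensions: 2-dim "frames", 3-dim hidden state.
dim_h, dim_x = 3, 2
W_s = [[random.uniform(-0.5, 0.5) for _ in range(dim_h)] for _ in range(dim_h)]
W_x = [[random.uniform(-0.5, 0.5) for _ in range(dim_x)] for _ in range(dim_h)]
W_out = [[random.uniform(-0.5, 0.5) for _ in range(dim_h)] for _ in range(dim_x)]

seq = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
code = encode(seq, W_s, W_x, dim_h)          # fixed-length representation
recon = decode(code, W_s, W_x, W_out, steps=len(seq))  # reconstruction task
```

A second decoder sharing the same `code` but unrolled past `len(seq)` steps would correspond to the future-prediction task the abstract pairs with reconstruction.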
2,217 citations
TL;DR: Analyzes the capital structure choices of firms in 10 developing countries and provides evidence that these decisions are affected by the same variables as in developed countries, while persistent cross-country differences indicate that country-specific factors are also at work.
Abstract: This study uses a new data set to assess whether capital structure theory is portable across countries with different institutional structures. We analyze capital structure choices of firms in 10 developing countries, and provide evidence that these decisions are affected by the same variables as in developed countries. However, there are persistent differences across countries, indicating that specific country factors are at work. Our findings suggest that although some of the insights from modern finance theory are portable across countries, much remains to be done to understand the impact of different institutional features on capital structure choices. OUR KNOWLEDGE OF CAPITAL STRUCTURES has mostly been derived from data from developed economies that have many institutional similarities. The purpose of this paper is to analyze the capital structure choices made by companies from developing countries that have different institutional structures. The prevailing view, for example Mayer (1990), seems to be that financial decisions in developing countries are somehow different. Mayer is the most recent researcher to use aggregate flow of funds data to differentiate between financial systems based on the “Anglo-Saxon” capital markets model and those based on a “Continental-German-Japanese” banking model. However, because Mayer’s data comes from aggregate flow of funds data and not from individual firms, there is a problem with this approach. The differences between private, public, and foreign ownership structures have a profound influence on such data, but the differences may tell us little about how profit-oriented firms make their individual financial decisions. This paper uses a new firm-level database to examine the financial structures of firms in a sample of 10 developing countries.
Thus, this study helps determine whether the stylized facts we have learned from studies of developed countries apply only to these markets, or whether they have more general applicability. Our focus is on answering three questions:
2,215 citations
Authors
Showing all 127,245 results
Name | H-index | Papers | Citations |
---|---|---|---|
Gordon H. Guyatt | 231 | 1620 | 228631 |
David J. Hunter | 213 | 1836 | 207050 |
Rakesh K. Jain | 200 | 1467 | 177727 |
Thomas C. Südhof | 191 | 653 | 118007 |
Gordon B. Mills | 187 | 1273 | 186451 |
George Efstathiou | 187 | 637 | 156228 |
John P. A. Ioannidis | 185 | 1311 | 193612 |
Paul M. Thompson | 183 | 2271 | 146736 |
Yusuke Nakamura | 179 | 2076 | 160313 |
Chris Sander | 178 | 713 | 233287 |
David R. Williams | 178 | 2034 | 138789 |
David L. Kaplan | 177 | 1944 | 146082 |
Jasvinder A. Singh | 176 | 2382 | 223370 |
Hyun-Chul Kim | 176 | 4076 | 183227 |
Deborah J. Cook | 173 | 907 | 148928 |