Education • Chapel Hill, North Carolina, United States
About: The University of North Carolina at Chapel Hill is an education organization based in Chapel Hill, North Carolina, United States. It is known for its research contributions in the topics Population and Poison Control. The organization has 81,393 authors who have published 185,327 publications receiving 9,948,508 citations. The organization is also known as: University of North Carolina & North Carolina.
Papers published on a yearly basis
01 Dec 1969
TL;DR: The concepts of power analysis are discussed, covering the t-test for means, chi-square tests for goodness of fit and contingency tables, and the sign test.
Abstract: Contents: Prefaces. The Concepts of Power Analysis. The t-Test for Means. The Significance of a Product Moment r_s. Differences Between Correlation Coefficients. The Test That a Proportion is .50 and the Sign Test. Differences Between Proportions. Chi-Square Tests for Goodness of Fit and Contingency Tables. The Analysis of Variance and Covariance. Multiple Regression and Correlation Analysis. Set Correlation and Multivariate Methods. Some Issues in Power Analysis. Computational Procedures.
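The core calculation behind this book's power tables can be sketched numerically. The snippet below is an illustrative stand-in, not the book's own procedure: it approximates the power of a two-sided, two-sample t-test with a normal approximation, using a hypothetical helper `two_sample_power` and Cohen's standardized effect size d.

```python
from statistics import NormalDist

def two_sample_power(d, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test.

    Normal approximation: with n subjects per group and effect size d,
    the noncentrality parameter is delta = d * sqrt(n / 2).
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    delta = d * (n / 2) ** 0.5
    # Probability the test statistic lands beyond the critical value
    # in either tail.
    return (1 - z.cdf(z_crit - delta)) + z.cdf(-z_crit - delta)

# A "medium" effect (d = 0.5) with 64 subjects per group gives power
# near the conventional 0.80 target.
print(round(two_sample_power(0.5, 64), 3))
```

The exact t-distribution calculation in the book's tables differs slightly for small n, where the normal approximation is optimistic.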
TL;DR: Numerical calculations on a number of atoms, positive ions, and molecules, of both open- and closed-shell type, show that density-functional formulas for the correlation energy and correlation potential give correlation energies within a few percent.
Abstract: A correlation-energy formula due to Colle and Salvetti [Theor. Chim. Acta 37, 329 (1975)], in which the correlation energy density is expressed in terms of the electron density and a Laplacian of the second-order Hartree-Fock density matrix, is restated as a formula involving the density and local kinetic-energy density. On insertion of gradient expansions for the local kinetic-energy density, density-functional formulas for the correlation energy and correlation potential are then obtained. Through numerical calculations on a number of atoms, positive ions, and molecules, of both open- and closed-shell type, it is demonstrated that these formulas, like the original Colle-Salvetti formulas, give correlation energies within a few percent.
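The quantities entering such gradient expansions can be illustrated concretely. The sketch below is not the paper's correlation formula; it only evaluates two standard local kinetic-energy densities that appear in expansions of this kind, the Thomas-Fermi term C_F ρ^(5/3) and the Weizsäcker term |∇ρ|²/(8ρ), for the analytic hydrogen 1s density in atomic units (where ∇ρ = -2ρ along r).

```python
import math

C_F = 0.3 * (3 * math.pi ** 2) ** (2 / 3)  # Thomas-Fermi constant, ~2.871

def hydrogen_1s_density(r):
    """Ground-state hydrogen density in atomic units: rho = exp(-2r)/pi."""
    return math.exp(-2 * r) / math.pi

def t_thomas_fermi(rho):
    """Local Thomas-Fermi kinetic-energy density, C_F * rho^(5/3)."""
    return C_F * rho ** (5 / 3)

def t_weizsacker(rho, grad_rho):
    """Weizsacker kinetic-energy density, |grad rho|^2 / (8 rho)."""
    return grad_rho ** 2 / (8 * rho)

for r in (0.5, 1.0, 2.0):
    rho = hydrogen_1s_density(r)
    grad = -2 * rho  # d/dr of exp(-2r)/pi
    print(r, t_thomas_fermi(rho), t_weizsacker(rho, grad))
```

For this density the Weizsäcker term reduces to ρ/2, a convenient analytic check.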
07 Jun 2015
TL;DR: Inception is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
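The "constant computational budget" claim rests on 1x1 convolutions that reduce channel depth before the expensive 3x3 and 5x5 filters. The arithmetic below is a hedged illustration with made-up layer sizes (not taken from the GoogLeNet specification), counting multiply-accumulate operations for a naive 5x5 convolution versus an Inception-style reduced one.

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count of a k x k convolution over an h x w map."""
    return h * w * c_in * c_out * k * k

# Illustrative sizes: a 28x28 map with 192 input channels feeding
# 32 output channels through a 5x5 convolution.
naive = conv_macs(28, 28, 192, 32, 5)

# Inception-style: a 1x1 convolution first reduces 192 channels to 16,
# then the 5x5 convolution runs on the reduced representation.
reduced = conv_macs(28, 28, 192, 16, 1) + conv_macs(28, 28, 16, 32, 5)

print(naive, reduced, round(naive / reduced, 1))
```

The roughly order-of-magnitude saving per module is what lets the network grow deeper and wider without the cost growing in proportion.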
TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection on hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.
Abstract: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
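The headline classification numbers in this benchmark are top-5 error rates: a prediction counts as correct if the true label appears among the five highest-scoring classes. A minimal sketch of that metric, on hypothetical toy scores rather than real challenge data:

```python
def top5_error(predictions, labels):
    """Fraction of examples whose true label is absent from the five
    highest-scoring predicted classes (the ILSVRC classification metric)."""
    misses = sum(
        1 for scores, y in zip(predictions, labels)
        if y not in sorted(range(len(scores)),
                           key=scores.__getitem__, reverse=True)[:5]
    )
    return misses / len(labels)

# Toy example with 8 classes; scores are hypothetical softmax outputs.
preds = [
    [0.01, 0.02, 0.40, 0.10, 0.05, 0.30, 0.07, 0.05],  # true class 2: hit
    [0.50, 0.20, 0.10, 0.08, 0.06, 0.03, 0.02, 0.01],  # true class 7: miss
]
print(top5_error(preds, [2, 7]))  # one miss out of two examples -> 0.5
```

Top-5 rather than top-1 is used because many ImageNet images legitimately contain several labelable objects.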
08 Oct 2016
TL;DR: The approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location, which makes SSD easy to train and straightforward to integrate into systems that require a detection component.
Abstract: We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For \(300 \times 300\) input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on an Nvidia Titan X, and for \(512 \times 512\) input, SSD achieves 76.9% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Compared to other single-stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.
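The default-box grid the abstract describes is simple to generate. The sketch below follows the scheme in the SSD paper for a single square feature map (one box per aspect ratio per location, with w = s·sqrt(ar) and h = s/sqrt(ar)); the function name and the concrete sizes are illustrative, and a full SSD tiles several such maps at different scales.

```python
from math import sqrt

def default_boxes(fmap_size, scale, aspect_ratios):
    """Generate SSD-style default boxes (cx, cy, w, h) in [0, 1]
    relative coordinates for one square feature map.

    Each of the fmap_size x fmap_size locations gets one box per
    aspect ratio ar, with w = scale * sqrt(ar) and h = scale / sqrt(ar),
    so every box at a given scale has the same area.
    """
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx = (j + 0.5) / fmap_size  # box center, middle of the cell
            cy = (i + 0.5) / fmap_size
            for ar in aspect_ratios:
                boxes.append((cx, cy, scale * sqrt(ar), scale / sqrt(ar)))
    return boxes

# A coarse 3x3 feature map with three aspect ratios yields 27 boxes.
boxes = default_boxes(3, scale=0.5, aspect_ratios=(1.0, 2.0, 0.5))
print(len(boxes), boxes[0])
```

At prediction time the network regresses offsets from each of these fixed boxes instead of proposing regions, which is where SSD's speed advantage comes from.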
Showing all 82,249 results
Walter C. Willett
David J. Hunter
Irving L. Weissman
Eric J. Topol
Dennis W. Dickson
Scott M. Grundy
Patrick O. Brown
Alan C. Evans
Anil K. Jain
Terrie E. Moffitt
Aaron R. Folsom
Related Institutions (5)
University of Washington: 305.5K papers, 17.7M citations
(institution name missing from source): 220.6K papers, 12.8M citations
University of Pennsylvania: 257.6K papers, 14.1M citations
(institution name missing from source): 224K papers, 12.8M citations
(institution name missing from source): 530.3K papers, 38.1M citations