scispace - formally typeset
Institution

University of Texas at Austin

Education · Austin, Texas, United States
About: University of Texas at Austin is an education organization based in Austin, Texas, United States. It is known for research contributions in the topics of Population & Poison control. The organization has 94352 authors who have published 206297 publications, receiving 9070052 citations. The organization is also known as UT-Austin and UT Austin.


Papers
Journal ArticleDOI
TL;DR: This paper studies the application of sensor networks to the intrusion detection problem and the related problems of classifying and tracking targets using a dense, distributed, wireless network of multi-modal resource-poor sensors combined into loosely coherent sensor arrays that perform in situ detection, estimation, compression, and exfiltration.

985 citations

Journal ArticleDOI
TL;DR: This work investigates two approaches based on the concept of random forests of classifiers implemented within a binary hierarchical multiclassifier system, with the goal of achieving improved generalization of the classifier in analysis of hyperspectral data, particularly when the quantity of training data is limited.
Abstract: Statistical classification of hyperspectral data is challenging because the inputs are high in dimension and represent multiple classes that are sometimes quite mixed, while the amount and quality of ground truth in the form of labeled data are typically limited. The resulting classifiers are often unstable and have poor generalization. This work investigates two approaches based on the concept of random forests of classifiers implemented within a binary hierarchical multiclassifier system, with the goal of achieving improved generalization of the classifier in analysis of hyperspectral data, particularly when the quantity of training data is limited. A new classifier is proposed that incorporates bagging of training samples and adaptive random subspace feature selection within a binary hierarchical classifier (BHC), such that the number of features selected at each node of the tree depends on the quantity of associated training data. Results are compared to a random forest implementation based on the framework of classification and regression trees. For both methods, classification results obtained from experiments on data acquired by the National Aeronautics and Space Administration (NASA) Airborne Visible/Infrared Imaging Spectrometer instrument over the Kennedy Space Center, Florida, and by Hyperion on the NASA Earth Observing 1 satellite over the Okavango Delta of Botswana are superior to those from the original best basis BHC algorithm and a random subspace extension of the BHC.

984 citations
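The abstract above combines two ensemble ingredients: bagging (bootstrap resampling of training rows) and random subspace feature selection (each ensemble member sees only a random subset of feature columns). The sketch below illustrates just those two ingredients on toy data with a deliberately simple nearest-centroid base learner; the paper's actual binary hierarchical classifier and the function names here are not from the source.

```python
import random
from collections import Counter

def train_subspace_ensemble(X, y, n_members=25, n_features=2, seed=0):
    """Bagging + random subspace selection: each member is trained on a
    bootstrap sample of the rows and a random subset of the columns."""
    rng = random.Random(seed)
    d = len(X[0])
    members = []
    for _ in range(n_members):
        feats = rng.sample(range(d), min(n_features, d))  # random subspace
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap rows
        # Toy base learner: per-class centroid in the chosen subspace.
        sums, counts = {}, {}
        for i in idx:
            v = [X[i][f] for f in feats]
            s = sums.setdefault(y[i], [0.0] * len(feats))
            for j, val in enumerate(v):
                s[j] += val
            counts[y[i]] = counts.get(y[i], 0) + 1
        centroids = {c: [sj / counts[c] for sj in s] for c, s in sums.items()}
        members.append((feats, centroids))
    return members

def predict(members, x):
    """Majority vote over the ensemble members."""
    votes = Counter()
    for feats, centroids in members:
        v = [x[f] for f in feats]
        label = min(centroids,
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(v, centroids[c])))
        votes[label] += 1
    return votes.most_common(1)[0][0]

# Toy demo: two well-separated 3-D clusters (illustrative data only).
X = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.2], [0.2, 0.1, 0.0],
     [5.0, 5.0, 5.0], [5.1, 4.9, 5.0], [4.8, 5.2, 5.1]]
y = [0, 0, 0, 1, 1, 1]
members = train_subspace_ensemble(X, y, n_members=25, n_features=2, seed=0)
```

The paper's adaptive twist is that the subspace size (`n_features` here) varies with the amount of training data at each tree node, rather than being fixed.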

Proceedings Article
01 Nov 2017
TL;DR: A sensitivity analysis of one-layer CNNs is conducted to explore the effect of architecture components on model performance; the aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification.
Abstract: Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (Kim, 2014; Kalchbrenner et al., 2014; Johnson and Zhang, 2014; Zhang et al., 2016). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which make them a modern standard baseline method akin to Support Vector Machines (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real-world settings.

984 citations
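The one-layer CNN the abstract analyzes convolves filters of one or more region sizes over the sequence of word embeddings and max-pools each filter's activations over positions ("max-over-time" pooling); the pooled vector then feeds a softmax classifier. Below is a minimal forward-pass sketch of just the convolution and pooling step, with made-up embeddings and filter weights (the function name and data are illustrative, not from the paper):

```python
def one_layer_cnn_score(tokens, emb, filters):
    """Convolve each filter over every window of `region` consecutive
    word embeddings and max-pool over positions, returning the pooled
    feature vector that would feed the final softmax layer."""
    feats = []
    for region, weights in filters:  # weights: flat region * emb_dim list
        best = float("-inf")
        for start in range(len(tokens) - region + 1):
            window = [v for t in tokens[start:start + region]
                        for v in emb[t]]
            activation = sum(w * x for w, x in zip(weights, window))
            best = max(best, activation)  # max-over-time pooling
        feats.append(best)
    return feats

# Tiny 2-D embeddings and two filters (region sizes 1 and 2) -- the
# filter region size is one of the hyperparameters the paper probes.
emb = {"good": [1.0, 0.0], "bad": [-1.0, 0.0], "movie": [0.0, 1.0]}
filters = [(1, [1.0, 0.0]), (2, [1.0, 0.0, 0.0, 0.0])]
```

Note that a filter whose region size exceeds the sentence length produces no windows at all, which is one concrete reason the region-size hyperparameter matters in practice.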

Journal ArticleDOI
TL;DR: In this article, a σ-model calculation in the mixed Dirichlet-Neumann theory was performed and it was shown that the action of the Dirac-Born-Infeld type is restricted to (26−k)-dimensions.
Abstract: In a recent paper we discussed, among other things, the effect of duality transformations on the open bosonic string theory. Namely, we conjectured that a macroscopic object, the D-brane, effectively interacts with closed and open string modes in the low energy limit. This conjecture was supported by a string scattering amplitude calculation. In this letter we report on a σ-model calculation in the mixed Dirichlet-Neumann theory. We find an action of the Dirac-Born-Infeld type restricted to (26−k)-dimensions.

983 citations
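The letter's result is an effective action "of the Dirac-Born-Infeld type" on the (26−k)-dimensional worldvolume. For orientation, the standard DBI form referred to can be written as follows (this is the textbook expression with a generic normalization, not the letter's specific result; G is the induced metric, B the pulled-back antisymmetric tensor field, F the worldvolume gauge field strength, and T the brane tension):

```latex
S_{\mathrm{DBI}} = -T \int d^{\,26-k}\xi \,
  \sqrt{-\det\left(G_{ab} + B_{ab} + 2\pi\alpha' F_{ab}\right)}
```

Expanding the square root to quadratic order in F recovers the ordinary Maxwell action on the brane, which is why the DBI action is read as the low-energy effective description of the open-string modes ending on the D-brane.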

Journal ArticleDOI
TL;DR: Once the authors have chosen to compare vision as a ratio using a reference visual angle (20/20), a geometric progression results and a geometric mean must be calculated for a meaningful result.
Abstract: Calculating the average visual acuity and standard deviation on a series of patients is not difficult, but has been done incorrectly in most studies.1 The basic problem relates to the difference between the arithmetic and geometric mean for a set of numbers. For the correct average visual acuity, the geometric mean must be used, which gives significantly different values than the arithmetic mean. Modern visual acuity charts are designed so that the letter sizes on each line follow a geometric progression (ie, change in a uniform step on a logarithmic scale).2-4 The accepted step size has been chosen to be 0.1 log unit steps, which is equivalent to letter sizes changing by a factor of 1.2589 between lines. This standard gave rise to the LogMAR (log of the minimum angle of resolution) notation, as shown in Table 1. A geometric progression of lines on the visual acuity chart was chosen because it parallels the way our visual system functions. If patient #1 has a visual acuity of 20/20 and patient #2 has a visual acuity of 20/40, we conclude that patient #1 has two times better visual acuity than patient #2 because he or she can recognize a letter twice as small. Once we have chosen to compare vision as a ratio using a reference visual angle (20/20), a geometric progression results and a geometric mean must be calculated for a meaningful result. Notice in Table 1 that the only values that increase linearly are the line numbers and the LogMAR notation. The Snellen acuity, decimal acuity, and visual angle all increase by the geometric factor of 1.2589. Once we decide that equal steps in visual acuity measurement are geometric and not arithmetic, we must use the appropriate geometric mean to compute the correct average (Figure). In Table 1 and the Figure, we see that line 0 is the 20/20 Snellen acuity that corresponds to the LogMAR value zero, since 20/20 is the standard.
We also see that line 10 is the 20/200 Snellen visual acuity that corresponds to a LogMAR value of +1.00 (ten times or 1 log unit worse than 20/20). Intuitively, it would appear that halfway between line 0 and line 10 would be line 5, or 20/63. This is the correct average, because geometrically it is halfway between 20/200 and 20/20. The two incorrect methods would be to take the arithmetic …

982 citations
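The procedure the article prescribes is mechanical: convert each 20/X acuity to LogMAR = log10(X/20), take the ordinary (arithmetic) mean on that log scale, which is exactly the geometric mean of the ratios, and convert back. A short sketch (the function name is illustrative):

```python
import math

def average_snellen(denominators):
    """Geometric-mean average of Snellen acuities 20/X, computed by
    averaging on the LogMAR scale and converting back to a 20/X
    denominator, as the article prescribes."""
    logmars = [math.log10(d / 20) for d in denominators]
    mean_logmar = sum(logmars) / len(logmars)
    return 20 * 10 ** mean_logmar

# The article's example: halfway between 20/20 (LogMAR 0) and 20/200
# (LogMAR +1.00) is LogMAR 0.5, i.e. about 20/63 -- not the arithmetic
# midpoint 20/110.
```

Running `average_snellen([20, 200])` gives a denominator of about 63.2, matching the article's line-5 (20/63) answer, whereas the arithmetic mean of the denominators would wrongly give 110.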


Authors

Showing all 95138 results

Name                     H-index   Papers   Citations
George M. Whitesides         240     1739      269833
Eugene Braunwald             230     1711      264576
Yi Chen                      217     4342      293080
Robert J. Lefkowitz          214      860      147995
Joseph L. Goldstein          207      556      149527
Eric N. Olson                206      814      144586
Hagop M. Kantarjian          204     3708      210208
Rakesh K. Jain               200     1467      177727
Francis S. Collins           196      743      250787
Gordon B. Mills              187     1273      186451
Scott M. Grundy              187      841      231821
Michael S. Brown             185      422      123723
Eric Boerwinkle              183     1321      170971
Aaron R. Folsom              181     1118      134044
Jiaguo Yu                    178      730      113300
Network Information
Related Institutions (5)
Stanford University
320.3K papers, 21.8M citations

97% related

Columbia University
224K papers, 12.8M citations

96% related

University of California, San Diego
204.5K papers, 12.3M citations

96% related

University of Michigan
342.3K papers, 17.6M citations

96% related

University of Washington
305.5K papers, 17.7M citations

95% related

Performance
Metrics
No. of papers from the Institution in previous years
Year     Papers
2023        304
2022      1,209
2021     10,137
2020     10,331
2019      9,727
2018      8,973