Institution
University of Texas at Austin
Education · Austin, Texas, United States
About: University of Texas at Austin is an education organization based in Austin, Texas, United States. It is known for research contributions in the topics of Population and Poison control. The organization has 94,352 authors who have published 206,297 publications receiving 9,070,052 citations. The organization is also known as UT-Austin and UT Austin.
Topics: Population, Poison control, Galaxy, Stars, Finite element method
Papers published on a yearly basis
Papers
TL;DR: A structural similarity (SSIM) index is proposed for image quality assessment based on the degradation of structural information; its promise is demonstrated through comparison against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.
40,609 citations
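The SSIM index described above combines luminance, contrast, and structure comparisons. As a rough illustration, a simplified global variant of the formula can be sketched as follows; the paper itself computes the index over local windows and pools the results, and the function name and parameter defaults here are illustrative, not the authors' reference MATLAB implementation:

```python
# Simplified global SSIM — a sketch of the formula, computed once over
# whole images rather than over local windows as in the paper.
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * data_range) ** 2      # stabilizes the luminance term
    c2 = (k2 * data_range) ** 2      # stabilizes the contrast/structure term
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den
```

For identical images the index is 1.0; distortion (e.g. additive noise) drives it below 1, reflecting the loss of structural similarity.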
TL;DR: A nonlinear (nonconvex) programming model provides a new definition of efficiency for evaluating the activities of not-for-profit entities participating in public programs, together with methods for objectively determining weights from the observational data on the multiple outputs and multiple inputs that characterize such programs.
25,433 citations
TL;DR: The CCR ratio form introduced by Charnes, Cooper and Rhodes as part of their Data Envelopment Analysis approach captures both technical and scale inefficiencies via the optimal value of the ratio form, obtained directly from the data without requiring a priori specification of weights or explicit delineation of assumed functional forms relating inputs and outputs.
Abstract: In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale in multiple input and multiple output situations. The results are discussed and related not only to classical single output economics but also to more modern versions of economics which are identified with "contestable market theories."
14,941 citations
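The CCR ratio form described above is conventionally solved through its linear-programming envelopment equivalent: minimize a radial contraction factor θ for the evaluated unit's inputs, subject to a composite of peer units producing at least its outputs. As a hedged sketch (the function name, data layout, and use of scipy.optimize.linprog are this example's assumptions, not the paper's own formulation), an input-oriented CCR efficiency score can be computed as:

```python
# Input-oriented CCR (DEA) efficiency via the LP envelopment form —
# an illustrative sketch, not the original ratio-form derivation.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (m inputs, n DMUs); Y: (s outputs, n DMUs); o: DMU under evaluation.
    Returns theta in (0, 1], with 1 meaning technically efficient."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                        # decision vars: [theta, lambda_1..lambda_n]
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]            # sum_j lambda_j x_j <= theta * x_o
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                 # sum_j lambda_j y_j >= y_o
    b_ub[m:] = -Y[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]
```

With one input and one output, a unit producing the same output from twice the input scores 0.5 — it could radially shrink its input by half and still match the efficient peer.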
TL;DR: The research strategy of theory building from cases, particularly multiple cases, involves using one or more cases to create theoretical constructs, propositions, and/or midrange theory from case-based, empirical evidence.
Abstract: This article discusses the research strategy of theory building from cases, particularly multiple cases. Such a strategy involves using one or more cases to create theoretical constructs, propositions, and/or midrange theory from case-based, empirical evidence. Replication logic means that each case serves as a distinct experiment that stands on its own merits as an analytic unit. The frequent use of case studies as a research strategy has given rise to some challenges that can be mitigated by the use of very precise wording and thoughtful research design.
13,581 citations
Affiliations: University of Jyväskylä; University of California, Los Angeles; California Polytechnic State University; Los Alamos National Laboratory; National Research University – Higher School of Economics; University of California, Berkeley; University of Birmingham; Australian Nuclear Science and Technology Organisation; University of Washington; University of Massachusetts Amherst; University of West Bohemia; Brigham Young University; University of Texas at Austin; Universidade Federal de Minas Gerais; Google
TL;DR: SciPy is an open-source scientific computing library for the Python programming language, with functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics.
Abstract: SciPy is an open source scientific computing library for the Python programming language. SciPy 1.0 was released in late 2017, about 16 years after the original version 0.1 release. SciPy has become a de facto standard for leveraging scientific algorithms in the Python programming language, with more than 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories, and millions of downloads per year. This includes usage of SciPy in almost half of all machine learning projects on GitHub, and usage by high profile projects including LIGO gravitational wave analysis and creation of the first-ever image of a black hole (M87). The library includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics. In this work, we provide an overview of the capabilities and development practices of the SciPy library and highlight some recent technical developments.
12,774 citations
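A brief taste of two of the capabilities listed in the SciPy abstract (numerical integration and linear algebra), using SciPy's documented API; the particular integrand and linear system are arbitrary examples chosen for illustration:

```python
# Two of SciPy's core capabilities: adaptive quadrature and
# dense linear-system solving.
import numpy as np
from scipy import integrate, linalg

# Numerically integrate sin(x) over [0, pi]; the exact value is 2.
value, abs_err = integrate.quad(np.sin, 0.0, np.pi)

# Solve the linear system A @ x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = linalg.solve(A, b)
```

`integrate.quad` returns both the estimate and an absolute-error bound, and `linalg.solve` wraps LAPACK routines — small examples of why the library became a de facto standard for scientific Python.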
Authors
Name | H-index | Papers | Citations |
---|---|---|---|
George M. Whitesides | 240 | 1739 | 269833 |
Eugene Braunwald | 230 | 1711 | 264576 |
Yi Chen | 217 | 4342 | 293080 |
Robert J. Lefkowitz | 214 | 860 | 147995 |
Joseph L. Goldstein | 207 | 556 | 149527 |
Eric N. Olson | 206 | 814 | 144586 |
Hagop M. Kantarjian | 204 | 3708 | 210208 |
Rakesh K. Jain | 200 | 1467 | 177727 |
Francis S. Collins | 196 | 743 | 250787 |
Gordon B. Mills | 187 | 1273 | 186451 |
Scott M. Grundy | 187 | 841 | 231821 |
Michael S. Brown | 185 | 422 | 123723 |
Eric Boerwinkle | 183 | 1321 | 170971 |
Aaron R. Folsom | 181 | 1118 | 134044 |
Jiaguo Yu | 178 | 730 | 113300 |