Institution

Michigan State University

Education · East Lansing, Michigan, United States
About: Michigan State University is an education organization based in East Lansing, Michigan, United States. It is known for its research contributions in the topics of Population and Poison control. The organization has 60,109 authors who have published 137,074 publications, receiving 5,633,022 citations. The organization is also known as MSU and Michigan State.


Papers
Journal ArticleDOI
TL;DR: Coefficient alpha is not a measure of homogeneity or unidimensionality, so using it as evidence that a measure is unidimensional can lead to significant misinterpretation; for multidimensional measures, using alpha as the basis for corrections for attenuation causes overestimates of the true correlation.
Abstract: The article addresses some concerns about how coefficient alpha is reported and used. It also shows that alpha is not a measure of homogeneity or unidimensionality. This fact and the finding that test length is related to reliability may cause significant misinterpretations of measures when alpha is used as evidence that a measure is unidimensional. For multidimensional measures, use of alpha as the basis for corrections for attenuation causes overestimates of true correlation. Satisfactory levels of alpha depend on test use and interpretation. Even relatively low (e.g., .50) levels of criterion reliability do not seriously attenuate validity coefficients. When reporting intercorrelations among measures that should be discriminable, it is important to present observed correlations, appropriate measures of reliability, and correlations corrected for unreliability.

Presentation of coefficient alpha (hereinafter alpha; Cronbach, 1951) as an index of the internal consistency or reliability of psychological measures has become routine practice in virtually all psychological and social science research in which multiple-item measures of a construct are used. In this article I describe four ways in which researchers' use of alpha to convey information about the operationalization of a construct or constructs can represent a lack of understanding or can convey less information than is actually required to evaluate the degree to which measurement problems are or are not a concern in the interpretation of the research results. In each instance, I will also indicate which additional or supplementary information is necessary to evaluate the measurements used in the research.

2,283 citations
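To make the quantities discussed in the abstract above concrete, here is a minimal Python sketch (not code from the article) of how coefficient alpha and the classical correction for attenuation are computed; the item-score matrix, reliability values, and function names are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def correct_for_attenuation(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Classical disattenuation: observed correlation divided by the square
    root of the product of the two reliabilities. Plugging alpha in here for
    a multidimensional measure is exactly the misuse the article warns about."""
    return r_xy / np.sqrt(rel_x * rel_y)

# Hypothetical data: 200 respondents answering a 5-item scale.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(size=(200, 5))

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.3f}")
print(f"corrected r = {correct_for_attenuation(0.30, alpha, 0.70):.3f}")
```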

Journal ArticleDOI
TL;DR: The FIGARCH (Fractionally Integrated Generalized AutoRegressive Conditionally Heteroskedastic) process is introduced; its conditional variance implies a slow hyperbolic rate of decay for the influence of lagged squared innovations.

2,274 citations
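For readers unfamiliar with the model named in the TL;DR, the FIGARCH(p, d, q) conditional variance is commonly written as in the sketch below; this is the usual textbook form attributed to Baillie, Bollerslev, and Mikkelsen, and the article itself should be consulted for the exact specification and notation.

```latex
% FIGARCH(p,d,q) as commonly written: fractional differencing (1-L)^d
% is applied to the squared innovations of the process.
\[
  \phi(L)\,(1-L)^{d}\,\varepsilon_t^{2}
    = \omega + \bigl[1-\beta(L)\bigr]\nu_t,
  \qquad
  \nu_t \equiv \varepsilon_t^{2}-\sigma_t^{2},
\]
% which yields an ARCH(infinity) representation of the conditional variance:
\[
  \sigma_t^{2}
    = \frac{\omega}{1-\beta(1)} + \lambda(L)\,\varepsilon_t^{2},
  \qquad
  \lambda(L) = 1-\bigl[1-\beta(L)\bigr]^{-1}\phi(L)\,(1-L)^{d}.
\]
% For 0 < d < 1 the lag weights in \lambda(L) die out at a slow hyperbolic
% (power-law) rate rather than the geometric rate of a standard GARCH model.
```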

Journal ArticleDOI
TL;DR: In this article, the authors derived the distributions of the least-squares residuals under a variety of specification errors, including omitted variables, incorrect functional form, simultaneous equation problems and heteroskedasticity.
Abstract: The effects on the distribution of least-squares residuals of a series of model mis-specifications are considered. It is shown that for a variety of specification errors the distributions of the least-squares residuals are normal, but with non-zero means. An alternative predictor of the disturbance vector is used in developing four procedures for testing for the presence of specification error. The specification errors considered are omitted variables, incorrect functional form, simultaneous equation problems and heteroskedasticity.

The objectives of this paper are two. The first is to derive the distributions of the classical linear least-squares residuals under a variety of specification errors. The errors considered are omitted variables, incorrect functional form, simultaneous equation problems and heteroskedasticity. It is assumed that the disturbance terms are independently and normally distributed. It will be shown that the effect of the specification errors considered above is, with the exception of the error of heteroskedasticity, to yield residuals which, though normally distributed, do not have zero means, so that the distribution of the squared residuals is non-central χ². The second objective is to derive procedures to test for the presence of the specification errors considered in the first part of the paper. The tests are developed by comparing the distribution of residuals under the hypothesis that the specification of the model is correct to the distribution of the residuals yielded under the alternative hypothesis that there is a specification error of one of the types considered in the first part of the paper. As a preliminary step to deriving the test procedures, the classical least-squares residual vector is transformed to a sub-vector which has more desirable properties for testing the null hypothesis that the specification of the model is correct. Also, under certain assumptions with respect to the alternative hypothesis, it is shown that the mean vector of the residuals can be approximated by a linear sum of vectors q_j.

2,269 citations
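The best known of these test procedures is the RESET idea of augmenting the fitted regression with powers of the predicted values and testing the added terms jointly. The sketch below is an illustrative Python reconstruction of that textbook version on simulated data, not the paper's exact derivation; the function names and the data-generating process are mine.

```python
import numpy as np
from scipy import stats

def _ols(y: np.ndarray, X: np.ndarray):
    """Least-squares fit; returns coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid @ resid

def reset_test(y: np.ndarray, X: np.ndarray, powers=(2, 3)):
    """RESET-style specification test: refit with powers of the fitted
    values added as regressors and F-test whether they enter jointly."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])                        # restricted design
    beta, rss_r = _ols(y, X1)
    fitted = X1 @ beta
    X2 = np.column_stack([X1] + [fitted ** p for p in powers])   # augmented design
    _, rss_u = _ols(y, X2)
    q = len(powers)                                              # restrictions tested
    df = n - X2.shape[1]                                         # residual degrees of freedom
    F = (rss_r - rss_u) / q / (rss_u / df)
    return F, stats.f.sf(F, q, df)

# Hypothetical data: the true model is quadratic, the fitted model is linear,
# i.e. an incorrect functional form of the kind the paper analyzes.
rng = np.random.default_rng(1)
x = rng.uniform(0, 3, size=200)
y = 1.0 + 2.0 * x + 0.8 * x**2 + rng.normal(size=200)
F, p = reset_test(y, x[:, None])
print(f"F = {F:.2f}, p-value = {p:.4f}")   # a small p-value flags misspecification
```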

Journal ArticleDOI
TL;DR: This work studies the problem of choosing an optimal feature set for land use classification based on SAR satellite images using four different texture models, and shows that pooling features derived from the different texture models, followed by feature selection, results in a substantial improvement in classification accuracy.
Abstract: A large number of algorithms have been proposed for feature subset selection. Our experimental results show that the sequential forward floating selection algorithm, proposed by Pudil et al. (1994), dominates the other algorithms tested. We study the problem of choosing an optimal feature set for land use classification based on SAR satellite images using four different texture models. Pooling features derived from different texture models, followed by feature selection, results in a substantial improvement in the classification accuracy. We also illustrate the dangers of using feature selection in small sample size situations.

2,238 citations
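As a concrete illustration of the kind of search strategy the abstract evaluates, here is a minimal Python sketch of plain sequential forward selection; the floating (SFFS) variant that the experiments favor additionally attempts conditional removals of previously chosen features after each addition, which is omitted here for brevity. The scoring function and data are placeholders, not the paper's SAR texture features.

```python
import numpy as np

def sequential_forward_selection(X, y, score_fn, n_features):
    """Greedy forward search: repeatedly add the single feature that most
    improves score_fn (higher is better)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_features and remaining:
        best_j, best_score = None, -np.inf
        for j in remaining:
            s = score_fn(X[:, selected + [j]], y)
            if s > best_score:
                best_j, best_score = j, s
        selected.append(best_j)
        remaining.remove(best_j)
        print(f"added feature {best_j}, score = {best_score:.3f}")
    return selected

def nearest_mean_accuracy(X, y):
    """Placeholder scorer: in-sample accuracy of a nearest-class-mean classifier."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    pred = np.argmin(((X[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
    return (classes[pred] == y).mean()

# Hypothetical data: 100 samples, 8 features, only feature 0 is informative.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=100)
X = rng.normal(size=(100, 8))
X[:, 0] += y
sequential_forward_selection(X, y, nearest_mean_accuracy, n_features=3)
```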

Journal ArticleDOI
TL;DR: A revised definition and classification of cerebral palsy is presented to meet the needs of clinicians, investigators, and health officials, and provide a common language for improved communication.
Abstract: Because of the availability of new knowledge about the neurobiology of developmental brain injury, the information that epidemiology and modern brain imaging are providing, the availability of more precise instruments for measuring patient performance, and the increase in studies evaluating the efficacy of therapy for the consequences of injury, the need to reconsider the definition and classification of cerebral palsy (CP) has become evident. Pertinent material was reviewed at an international symposium attended by selected leaders in the preclinical and clinical sciences. Suggestions were made about the content of a revised definition and classification of CP that would meet the needs of clinicians, investigators, and health officials, and provide a common language for improved communication. With leadership and direction from an Executive Committee, panels utilized this information and have generated a revised Definition and Classification of Cerebral Palsy. The Executive Committee presents this revision and welcomes substantive comments about it.

2,214 citations


Authors


Name | H-index | Papers | Citations
David Miller | 203 | 2,573 | 204,840
Anil K. Jain | 183 | 1,016 | 192,151
D. M. Strom | 176 | 3,167 | 194,314
Feng Zhang | 172 | 1,278 | 181,865
Derek R. Lovley | 168 | 582 | 95,315
Donald G. Truhlar | 165 | 1,518 | 157,965
Donald E. Ingber | 164 | 610 | 100,682
J. E. Brau | 162 | 1,949 | 157,675
Murray F. Brennan | 161 | 925 | 97,087
Peter B. Reich | 159 | 790 | 110,377
Wei Li | 158 | 1,855 | 124,748
Timothy C. Beers | 156 | 934 | 102,581
Claude Bouchard | 153 | 1,076 | 115,307
Mercouri G. Kanatzidis | 152 | 1,854 | 113,022
James J. Collins | 151 | 669 | 89,476
Network Information
Related Institutions (5)
University of California, Davis · 180K papers, 8M citations · 97% related
University of Illinois at Urbana–Champaign · 225.1K papers, 10.1M citations · 97% related
University of Minnesota · 257.9K papers, 11.9M citations · 97% related
University of Wisconsin-Madison · 237.5K papers, 11.8M citations · 97% related
Cornell University · 235.5K papers, 12.2M citations · 97% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 250
2022 | 752
2021 | 7,041
2020 | 6,870
2019 | 6,548
2018 | 5,779