scispace - formally typeset

Showing papers by "Ohio State University" published in 2004


Journal ArticleDOI
TL;DR: This article argues for the importance of directly testing the significance of indirect effects and provides SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, in order to enhance the frequency of formal mediation tests in the psychology literature.
Abstract: Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
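As a hedged illustration of the bootstrap approach the abstract describes (not the authors' SPSS/SAS macros; the simulated data and the helper function are invented for illustration), the following sketch estimates an indirect effect a·b and a percentile bootstrap confidence interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for illustration: X -> M -> Y with a true indirect effect
# of a*b = 0.5 * 0.4 = 0.2.
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # outcome

def indirect_effect(x, m, y):
    """Estimate a*b: a from regressing M on X, b from regressing Y on M and X."""
    a = np.polyfit(x, m, 1)[0]
    # b is the coefficient on M in a multiple regression of Y on [1, M, X]
    Z = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Z, y, rcond=None)[0][1]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect: {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The percentile bootstrap makes no normality assumption about the sampling distribution of a·b, which is the usual argument for preferring it over the normal-theory (Sobel) test.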

15,041 citations


Journal ArticleDOI
Rattan Lal1
11 Jun 2004-Science
TL;DR: In this article, the carbon sink capacity of the world’s agricultural and degraded soils is 50 to 66% of the historic carbon loss of 42 to 78 gigatons of carbon.
Abstract: The carbon sink capacity of the world’s agricultural and degraded soils is 50 to 66% of the historic carbon loss of 42 to 78 gigatons of carbon. The rate of soil organic carbon sequestration with adoption of recommended technologies depends on soil texture and structure, rainfall, temperature, farming system, and soil management. Strategies to increase the soil carbon pool include soil restoration and woodland regeneration, no-till farming, cover crops, nutrient management, manuring and sludge application, improved grazing, water conservation and harvesting, efficient irrigation, agroforestry practices, and growing energy crops on spare lands. An increase of 1 ton of soil carbon pool of degraded cropland soils may increase crop yield by 20 to 40 kilograms per hectare (kg/ha) for wheat, 10 to 20 kg/ha for maize, and 0.5 to 1 kg/ha for cowpeas. As well as enhancing food security, carbon sequestration has the potential to offset fossil-fuel emissions by 0.4 to 1.2 gigatons of carbon per year, or 5 to 15% of the global fossil-fuel emissions.
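The quoted offset percentages can be sanity-checked with simple arithmetic, assuming a round denominator of roughly 8 GtC/yr for global fossil-fuel emissions (an assumed figure not given in the abstract, chosen because it reproduces the stated 5 to 15% range):

```python
# Sanity-check the offset percentages quoted above, assuming global
# fossil-fuel emissions of roughly 8 GtC/yr (an assumed round figure;
# the 5-15% range in the abstract implies a similar denominator).
sequestration_low, sequestration_high = 0.4, 1.2   # GtC/yr
emissions = 8.0                                    # GtC/yr, assumed
pct_low = 100 * sequestration_low / emissions      # -> 5.0
pct_high = 100 * sequestration_high / emissions    # -> 15.0
print(f"offset: {pct_low:.0f}% to {pct_high:.0f}%")
```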

5,835 citations


Journal ArticleDOI
TL;DR: In this paper, the authors measured cosmological parameters using the three-dimensional power spectrum P(k) from over 200,000 galaxies in the Sloan Digital Sky Survey (SDSS) in combination with WMAP and other data.
Abstract: We measure cosmological parameters using the three-dimensional power spectrum P(k) from over 200,000 galaxies in the Sloan Digital Sky Survey (SDSS) in combination with WMAP and other data. Our results are consistent with a "vanilla" flat adiabatic ΛCDM model without tilt (ns = 1), running tilt, tensor modes or massive neutrinos. Adding SDSS information more than halves the WMAP-only error bars on some parameters, tightening 1σ constraints on the Hubble parameter from h ≈ 0.74 +0.18 −0.07 to h ≈ 0.70 +0.04 −0.03, on the matter density from Ωm ≈ 0.25 ± 0.10 to Ωm ≈ 0.30 ± 0.04 (1σ) and on neutrino masses from < 11 eV to < 0.6 eV (95%). SDSS helps even more when dropping prior assumptions about curvature, neutrinos, tensor modes and the equation of state. Our results are in substantial agreement with the joint analysis of WMAP and the 2dF Galaxy Redshift Survey, which is an impressive consistency check with independent redshift survey data and analysis techniques. In this paper, we place particular emphasis on clarifying the physical origin of the constraints, i.e., what we do and do not know when using different data sets and prior assumptions. For instance, dropping the assumption that space is perfectly flat, the WMAP-only constraint on the measured age of the Universe tightens from t0 ≈ 16.3 +2.3

3,938 citations


Journal ArticleDOI
Rattan Lal1
01 Nov 2004-Geoderma
TL;DR: In this article, the authors proposed a sustainable management of soil organic carbon (SOC) pool through conservation tillage with cover crops and crop residue mulch, nutrient cycling including the use of compost and manure, and other management practices.

2,931 citations


Journal ArticleDOI
TL;DR: This commentary summarizes the Workshop presentations on HNPCC and MSI testing; presents the issues relating to the performance, sensitivity, and specificity of the Bethesda Guidelines; outlines the revised Bethesda Guidelines for identifying individuals at risk for HNPCC; and recommends criteria for MSI testing.
Abstract: Hereditary nonpolyposis colorectal cancer (HNPCC), also known as Lynch syndrome, is a common autosomal dominant syndrome characterized by early age at onset, neoplastic lesions, and microsatellite instability (MSI). Because cancers with MSI account for approximately 15% of all colorectal cancers and because of the need for a better understanding of the clinical and histologic manifestations of HNPCC, the National Cancer Institute hosted an international workshop on HNPCC in 1996, which led to the development of the Bethesda Guidelines for the identification of individuals with HNPCC who should be tested for MSI. To consider revision and improvement of the Bethesda Guidelines, another HNPCC workshop was held at the National Cancer Institute in Bethesda, MD, in 2002. In this commentary, we summarize the Workshop presentations on HNPCC and MSI testing; present the issues relating to the performance, sensitivity, and specificity of the Bethesda Guidelines; outline the revised Bethesda Guidelines for identifying individuals at risk for HNPCC; and recommend criteria for MSI testing.

2,899 citations


Journal ArticleDOI
TL;DR: Evidence is provided that psychological stress--both perceived stress and chronicity of stress--is significantly associated with higher oxidative stress, lower telomerase activity, and shorter telomere length, in peripheral blood mononuclear cells from healthy premenopausal women.
Abstract: Numerous studies demonstrate links between chronic stress and indices of poor health, including risk factors for cardiovascular disease and poorer immune function. Nevertheless, the exact mechanisms of how stress gets “under the skin” remain elusive. We investigated the hypothesis that stress impacts health by modulating the rate of cellular aging. Here we provide evidence that psychological stress— both perceived stress and chronicity of stress—is significantly associated with higher oxidative stress, lower telomerase activity, and shorter telomere length, which are known determinants of cell senescence and longevity, in peripheral blood mononuclear cells from healthy premenopausal women. Women with the highest levels of perceived stress have telomeres shorter on average by the equivalent of at least one decade of additional aging compared to low stress women. These findings have implications for understanding how, at the cellular level, stress may promote earlier onset of age-related diseases.

2,706 citations


Journal ArticleDOI
TL;DR: The main advantages of the current revised classification is that it provides a clear and unequivocal description of the various lesions and classes of lupus nephritis, allowing a better standardization and lending a basis for further clinicopathologic studies.
Abstract: The currently used classification reflects our understanding of the pathogenesis of the various forms of lupus nephritis, but clinicopathologic studies have revealed the need for improved categorization and terminology. Based on the 1982 classification published under the auspices of the World Health Organization (WHO) and subsequent clinicopathologic data, we propose that class I and II be used for purely mesangial involvement (I, mesangial immune deposits without mesangial hypercellularity; II, mesangial immune deposits with mesangial hypercellularity); class III for focal glomerulonephritis (involving <50% of the total number of glomeruli); class IV for diffuse glomerulonephritis (involving ≥50% of glomeruli), either with segmental (class IV-S) or global (class IV-G) involvement, and also with subdivisions for active and sclerotic lesions; class V for membranous lupus nephritis; and class VI for advanced sclerosing lesions. Combinations of membranous and proliferative glomerulonephritis (i.e., class III and V or class IV and V) should be reported individually in the diagnostic line. The diagnosis should also include entries for any concomitant vascular or tubulointerstitial lesions. One of the main advantages of the current revised classification is that it provides a clear and unequivocal description of the various lesions and classes of lupus nephritis, allowing a better standardization and lending a basis for further clinicopathologic studies. We hope that this revision, which evolved under the auspices of the International Society of Nephrology and the Renal Pathology Society, will contribute to further advancement of the WHO classification.

2,004 citations


Journal ArticleDOI
TL;DR: Improved black hole masses are presented for 35 active galactic nuclei (AGNs) based on a complete and consistent reanalysis of broad emission-line reverberation-mapping data, showing that the highest-precision measure of the virial product cτΔV²/G is obtained by using the cross-correlation function centroid for the time delay and the line dispersion for the line width.
Abstract: We present improved black hole masses for 35 active galactic nuclei (AGNs) based on a complete and consistent reanalysis of broad emission-line reverberation-mapping data. From objects with multiple line measurements, we find that the highest precision measure of the virial product cτΔV²/G, where τ is the emission-line lag relative to continuum variations and ΔV is the emission-line width, is obtained by using the cross-correlation function centroid (as opposed to the cross-correlation function peak) for the time delay and the line dispersion (as opposed to FWHM) for the line width, and by measuring the line width in the variable part of the spectrum. Accurate line-width measurement depends critically on avoiding contaminating features, in particular the narrow components of the emission lines. We find that the precision (or random component of the error) of reverberation-based black hole mass measurements is typically around 30%, comparable to the precision attained in measurement of black hole masses in quiescent galaxies by gas or stellar dynamical methods. Based on results presented in a companion paper by Onken et al., we provide a zero-point calibration for the reverberation-based black hole mass scale by using the relationship between black hole mass and host-galaxy bulge velocity dispersion. The scatter around this relationship implies that the typical systematic uncertainties in reverberation-based black hole masses are smaller than a factor of 3. We present a preliminary version of a mass-luminosity relationship that is much better defined than any previous attempt. Scatter about the mass-luminosity relationship for these AGNs appears to be real and could be correlated with either Eddington ratio or object inclination.
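The virial product the abstract refers to becomes a mass estimate as M = f·cτ(ΔV)²/G. A small illustrative calculation follows; the input lag and line width are invented, not values from the paper, while f ≈ 5.5 follows the Onken et al. calibration mentioned above:

```python
# Virial black-hole mass from reverberation mapping: M = f * c*tau*(dV)^2 / G.
# tau is the emission-line lag, dV the line width (line dispersion), and f a
# dimensionless scale factor (f ~ 5.5 per the Onken et al. calibration).
G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C    = 2.998e8        # speed of light, m/s
MSUN = 1.989e30       # solar mass, kg

def virial_mass(tau_days, dv_km_s, f=5.5):
    r  = C * tau_days * 86400.0        # BLR radius in m (light-travel distance)
    dv = dv_km_s * 1e3                 # line width in m/s
    return f * r * dv**2 / G / MSUN    # mass in solar masses

# Illustrative inputs: a 20-day lag and a 2000 km/s line dispersion.
print(f"{virial_mass(20.0, 2000.0):.2e} M_sun")   # ≈ 8.6e7 M_sun
```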

1,893 citations


Journal ArticleDOI
TL;DR: In this paper, it is argued that, in some circumstances, adopting the effectiveness of business processes as a dependent variable may be more appropriate than adopting overall firm performance as a dependent variable.
Abstract: A growing body of empirical literature supports key assertions of the resource-based view. However, most of this work examines the impact of firm-specific resources on the overall performance of a firm. In this paper it is argued that, in some circumstances, adopting the effectiveness of business processes as a dependent variable may be more appropriate than adopting overall firm performance as a dependent variable. This idea is tested by examining the determinants of the effectiveness of the customer service business process in a sample of North American insurance companies. Results are consistent with resource-based expectations, and they show that distinctive advantages observable at the process level are not necessarily reflected in firm level performance. The implications of these findings for research and practice are discussed along with a discussion of the relationship between resources and capabilities, on the one hand, and business processes, activities, and routines, on the other. Copyright © 2003 John Wiley & Sons, Ltd.

1,787 citations



Journal ArticleDOI
TL;DR: In this paper, the authors employed a matrix-based method using pseudo-Karhunen-Loeve eigenmodes, producing uncorrelated minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions.
Abstract: We measure the large-scale real-space power spectrum P(k) by using a sample of 205,443 galaxies from the Sloan Digital Sky Survey, covering 2417 effective square degrees with mean redshift z ≈ 0.1. We employ a matrix-based method using pseudo-Karhunen-Loeve eigenmodes, producing uncorrelated minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well-behaved window functions in the range 0.02 h Mpc-1 < k < 0.3 h Mpc-1. We pay particular attention to modeling, quantifying, and correcting for potential systematic errors, nonlinear redshift distortions, and the artificial red-tilt caused by luminosity-dependent bias. Our results are robust to omitting angular and radial density fluctuations and are consistent between different parts of the sky. Our final result is a measurement of the real-space matter power spectrum P(k) up to an unknown overall multiplicative bias factor. Our calculations suggest that this bias factor is independent of scale to better than a few percent for k < 0.1 h Mpc-1, thereby making our results useful for precision measurements of cosmological parameters in conjunction with data from other experiments such as the Wilkinson Microwave Anisotropy Probe satellite. The power spectrum is not well-characterized by a single power law but unambiguously shows curvature. As a simple characterization of the data, our measurements are well fitted by a flat scale-invariant adiabatic cosmological model with h Ωm = 0.213 ± 0.023 and σ8 = 0.89 ± 0.02 for L* galaxies, when fixing the baryon fraction Ωb/Ωm = 0.17 and the Hubble parameter h = 0.72; cosmological interpretation is given in a companion paper.


Book
29 Mar 2004
TL;DR: This book provides an accessible guide to event history analysis for social scientists, covering modeling strategies such as the Cox proportional hazards model, unobserved heterogeneity, repeated events, and competing risks.
Abstract: Event History Modeling, first published in 2004, provides an accessible guide to event history analysis for researchers and advanced students in the social sciences. The substantive focus of many social science research problems leads directly to the consideration of duration models, and many problems would be better analyzed by using these longitudinal methods to take into account not only whether the event happened, but when. The foundational principles of event history analysis are discussed and ample examples are estimated and interpreted using standard statistical packages, such as STATA and S-Plus. Critical innovations in diagnostics are discussed, including testing the proportional hazards assumption, identifying outliers, and assessing model fit. The treatment of complicated events includes coverage of unobserved heterogeneity, repeated events, and competing risks models. The authors point out common problems in the analysis of time-to-event data in the social sciences and make recommendations regarding the implementation of duration modeling methods.

Journal ArticleDOI
TL;DR: In this paper, the maximum limits of the Eurasian ice sheets during four glaciations have been reconstructed: (1) the Late Saalian (>140 ka), (2) the Early Weichselian (100-80 ka), (3) the Middle Weichselian (60-50 ka), and (4) the Late Weichselian (25-15 ka), based on satellite data and aerial photographs combined with geological field investigations in Russia and Siberia, and with marine seismic and sediment core data.


Journal ArticleDOI
TL;DR: The RBB + C method resulted in a 1.5- to 6-fold increase in DNA yield when compared to three other widely used methods and resulted in improved denaturing gradient gel electrophoresis (DGGE) profiles, which is indicative of a more complete lysis and representation of microbial diversity present in such samples.
Abstract: Several DNA extraction methods have been reported for use with digesta or fecal samples, but problems are often encountered in terms of relatively low DNA yields and/or recovering DNA free of inhibitory substances. Here we report a modified method to extract PCR-quality microbial community DNA from these types of samples, which employs bead beating in the presence of high concentrations of sodium dodecyl sulfate (SDS), salt, and EDTA, and with subsequent DNA purification by QIA® columns [referred to as repeated bead beating plus column (RBB+C) method]. The RBB+C method resulted in a 1.5- to 6-fold increase in DNA yield when compared to three other widely used methods. The community DNA prepared with the RBB+C method was also free of inhibitory substances and resulted in improved denaturing gradient gel electrophoresis (DGGE) profiles, which is indicative of a more complete lysis and representation of microbial diversity present in such samples.

Journal ArticleDOI
16 Apr 2004-Science
TL;DR: It is demonstrated that Aβ-binding alcohol dehydrogenase (ABAD) is a direct molecular link from Aβ to mitochondrial toxicity and the ABAD-Aβ interaction may be a therapeutic target in Alzheimer's disease.
Abstract: Mitochondrial dysfunction is a hallmark of beta-amyloid (Abeta)-induced neuronal toxicity in Alzheimer's disease (AD). Here, we demonstrate that Abeta-binding alcohol dehydrogenase (ABAD) is a direct molecular link from Abeta to mitochondrial toxicity. Abeta interacts with ABAD in the mitochondria of AD patients and transgenic mice. The crystal structure of Abeta-bound ABAD shows substantial deformation of the active site that prevents nicotinamide adenine dinucleotide (NAD) binding. An ABAD peptide specifically inhibits ABAD-Abeta interaction and suppresses Abeta-induced apoptosis and free-radical generation in neurons. Transgenic mice overexpressing ABAD in an Abeta-rich environment manifest exaggerated neuronal oxidative stress and impaired memory. These data suggest that the ABAD-Abeta interaction may be a therapeutic target in AD.

Journal ArticleDOI
Rattan Lal1
TL;DR: This article synthesizes the available information on energy use in farm operations and its conversion into carbon equivalent (CE), and shows that the output/input ratio, expressed either as gross or net output of C, must be >1 and show an increasing trend over time.

Journal ArticleDOI
TL;DR: Although there was substantial model mimicry, empirical conditions were identified under which the models make discriminably different predictions and the best accounts of the data were provided by the Wiener diffusion model, the OU model with small-to-moderate decay, and the accumulator model with long-tailed distributions of criteria.
Abstract: The authors evaluated 4 sequential sampling models for 2-choice decisions--the Wiener diffusion, Ornstein-Uhlenbeck (OU) diffusion, accumulator, and Poisson counter models--by fitting them to the response time (RT) distributions and accuracy data from 3 experiments. Each of the models was augmented with assumptions of variability across trials in the rate of accumulation of evidence from stimuli, the values of response criteria, and the value of base RT across trials. Although there was substantial model mimicry, empirical conditions were identified under which the models make discriminably different predictions. The best accounts of the data were provided by the Wiener diffusion model, the OU model with small-to-moderate decay, and the accumulator model with long-tailed (exponential) distributions of criteria, although the last was unable to produce error RTs shorter than correct RTs. The relationship between these models and 3 recent, neurally inspired models was also examined.
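A minimal simulation of the Wiener diffusion model evaluated in the abstract can make the setup concrete. Parameter values here are illustrative, not fitted values from the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

def wiener_trial(v=1.5, a=1.0, z=0.5, s=1.0, t0=0.3, dt=0.001):
    """Simulate one 2-choice Wiener diffusion trial (Euler discretization).

    v: drift rate, a: boundary separation, z: relative start point,
    s: within-trial noise, t0: nondecision time. Illustrative values only.
    """
    x, t = z * a, 0.0
    while 0.0 < x < a:                      # accumulate evidence to a boundary
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return t + t0, x >= a                   # (RT in seconds, upper-boundary choice?)

rts, choices = zip(*(wiener_trial() for _ in range(500)))
print(f"accuracy: {np.mean(choices):.2f}, mean RT: {np.mean(rts):.3f} s")
```

Extending this with across-trial variability in drift, criteria, and base RT, as the abstract describes, is what lets the model produce error RTs both slower and faster than correct RTs.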

Journal ArticleDOI
TL;DR: The nature and dynamics of the singlet excited electronic states created in nucleic acids and their constituents by UV light are reviewed, finding that these states are highly stable to photochemical decay, perhaps as a result of selection pressure during a long period of molecular evolution.
Abstract: The scope of this review is the nature and dynamics of the singlet excited electronic states created in nucleic acids and their constituents by UV light. Interest in the UV photochemistry of nucleic acids has long been the motivation for photophysical studies of the excited states, because these states are at the beginning of the complex chain of events that culminates in photodamage. UV-induced damage to DNA has profound biological consequences, including photocarcinogenesis, a growing human health problem.1-3 Sunlight, which is essential for life on earth, contains significant amounts of harmful UV (λ < 400 nm) radiation. These solar UV photons constitute one of the most ubiquitous and potent environmental carcinogens. This extraterrestrial threat is impressive for its long history; photodamage is as old as life itself. The genomic information encoded by these biopolymers has been under photochemical attack for billions of years. It is not surprising then that the excited states of the nucleic acid bases (see Chart 1), the most important UV chromophores of nucleic acids, are highly stable to photochemical decay, perhaps as a result of selection pressure during a long period of molecular evolution. This photostability is due to remarkably rapid decay pathways for electronic energy, which are only now coming into focus through femtosecond laser spectroscopy. The recently completed map of the human genome and the ever-expanding crystallographic database of nucleic acid structures are two examples that illustrate the richly detailed information currently available about the static properties of nucleic acids. In contrast, much less is known about the dynamics of these macromolecules. This is particularly true of the dynamics of the excited states that play a critical role in DNA photodamage. Efforts to study nucleic acids by time-resolved spectroscopy have been stymied by the apparent lack of suitable fluorophores. 
In contrast, dynamical spectroscopy of proteins has flourished thanks to intrinsically fluorescent amino acids such as tryptophan, tyrosine, and phenylalanine.4 The primary UV-absorbing constituents of nucleic acids, the nucleic acid bases, have vanishingly small fluorescence quantum yields under physiological conditions of temperature and pH.5 In fact, the bases were frequently described as “nonfluorescent” in the early literature.

Journal ArticleDOI
TL;DR: The authors synthesize existing research to discuss how teachers' practice and student learning are affected by perceptions of collective efficacy, and develop a conceptual model to explain the formation and influence of perceived collective efficacy in schools.
Abstract: This analysis synthesizes existing research to discuss how teachers’ practice and student learning are affected by perceptions of collective efficacy. Social cognitive theory is employed to explain that the choices teachers make—the ways in which they exercise personal agency—are strongly influenced by collective efficacy beliefs. Although empirically related, teacher and collective efficacy perceptions are theoretically distinct constructs, each having unique effects on educational decisions and student achievement. Our purpose is to advance awareness about perceived collective efficacy and develop a conceptual model to explain the formation and influence of perceived collective efficacy in schools. We also examine the relevance of efficacy beliefs to teachers’ professional work and outline future research possibilities.

Journal ArticleDOI
TL;DR: The Second Data Release of the Sloan Digital Sky Survey (SDSS) consists of 3324 deg² of five-band imaging data with photometry for over 88 million unique objects, and 367,360 spectra of galaxies, quasars, stars, and calibrating blank sky patches selected over 2627 deg² of this area.
Abstract: The Sloan Digital Sky Survey (SDSS) has validated and made publicly available its Second Data Release. This data release consists of 3324 deg2 of five-band (ugriz) imaging data with photometry for over 88 million unique objects, 367,360 spectra of galaxies, quasars, stars, and calibrating blank sky patches selected over 2627 deg2 of this area, and tables of measured parameters from these data. The imaging data reach a depth of r ≈ 22.2 (95% completeness limit for point sources) and are photometrically and astrometrically calibrated to 2% rms and 100 mas rms per coordinate, respectively. The imaging data have all been processed through a new version of the SDSS imaging pipeline, in which the most important improvement since the last data release is fixing an error in the model fits to each object. The result is that model magnitudes are now a good proxy for point-spread function magnitudes for point sources, and Petrosian magnitudes for extended sources. The spectroscopy extends from 3800 to 9200 A at a resolution of 2000. The spectroscopic software now repairs a systematic error in the radial velocities of certain types of stars and has substantially improved spectrophotometry. All data included in the SDSS Early Data Release and First Data Release are reprocessed with the improved pipelines and included in the Second Data Release. Further characteristics of the data are described, as are the data products themselves and the tools for accessing them.

Journal ArticleDOI
TL;DR: Patterns of neural firing linked to eye movement decisions show that behavioral decisions are predicted by the differential firing rates of cells coding selected and nonselected stimulus alternatives, which provides a quantitative link between the time-course of behavioral decisions and the growth of stimulus information in neural firing data.

Journal ArticleDOI
TL;DR: Transportation into a narrative world is an experience of cognitive, emotional, and imagery involvement in a narrative as discussed by the authors, and enjoyment can benefit both from the experience of being immersed in a narrative world and from the consequences of that immersion.
Abstract: “Transportation into a narrative world” is an experience of cognitive, emotional, and imagery involvement in a narrative. Transportation theory (Green & Brock, 2000, 2002) provides a lens for understanding the concept of media enjoyment. The theory suggests that enjoyment can benefit from the experience of being immersed in a narrative world, as well as from the consequences of that immersion. Consequences implied by transportation theory include connections with characters and self-transformations.

Journal ArticleDOI
TL;DR: The proportion of women who attempt vaginal delivery after prior cesarean delivery has decreased largely because of concern about safety, and the absolute and relative risks associated with a trial of labor in women with a history of cesAREan delivery are uncertain.
Abstract: Background: The proportion of women who attempt vaginal delivery after prior cesarean delivery has decreased, largely because of concern about safety. The absolute and relative risks associated with a trial of labor in women with a history of cesarean delivery, as compared with elective repeated cesarean delivery without labor, are uncertain.

Journal ArticleDOI
TL;DR: This paper studies the application of sensor networks to the intrusion detection problem and the related problems of classifying and tracking targets using a dense, distributed, wireless network of multi-modal resource-poor sensors combined into loosely coherent sensor arrays that perform in situ detection, estimation, compression, and exfiltration.

Journal ArticleDOI
TL;DR: Although the presence of an unmutated IgV(H) gene is strongly associated with the expression of ZAP-70, ZAP-70 is a stronger predictor of the need for treatment in B-cell CLL.
Abstract: Background The course of chronic lymphocytic leukemia (CLL) is variable. In aggressive disease, the CLL cells usually express an unmutated immunoglobulin heavy-chain variable-region gene (IgVH ) and the 70-kD zeta-associated protein (ZAP-70), whereas in indolent disease, the CLL cells usually express mutated IgVH but lack expression of ZAP-70. Methods We evaluated the CLL B cells from 307 patients with CLL for ZAP-70 and mutations in the rearranged IgVH gene. We then investigated the association between the results and the time from diagnosis to initial therapy. Results We found that ZAP-70 was expressed above a defined threshold level in 117 of the 164 patients with an unmutated IgVH gene (71 percent), but in only 24 of the 143 patients with a mutated IgVH gene (17 percent, P<0.001). Among the patients with ZAP-70–positive CLL cells, the median time from diagnosis to initial therapy in those who had an unmutated IgVH gene (2.8 years) was not significantly different from the median time in those who had a...

Journal ArticleDOI
TL;DR: Lumpectomy plus adjuvant therapy with tamoxifen alone is a realistic choice for the treatment of women 70 years of age or older who have early, estrogen-receptor-positive breast cancer.
Abstract: BACKGROUND In women 70 years of age or older who have early breast cancer, it is unclear whether lumpectomy plus tamoxifen is as effective as lumpectomy followed by tamoxifen plus radiation therapy. METHODS Between July 1994 and February 1999, we randomly assigned 636 women who were 70 years of age or older and who had clinical stage I (T1N0M0 according to the tumor-node-metastasis classification), estrogen-receptor-positive breast carcinoma treated by lumpectomy to receive tamoxifen plus radiation therapy (317 women) or tamoxifen alone (319 women). Primary end points were the time to local or regional recurrence, the frequency of mastectomy for recurrence, breast-cancer-specific survival, the time to distant metastasis, and overall survival. RESULTS The only significant difference between the two groups was in the rate of local or regional recurrence at five years (1 percent in the group given tamoxifen plus irradiation and 4 percent in the group given tamoxifen alone, P<0.001). There were no significant differences between the two groups with regard to the rates of mastectomy for local recurrence, distant metastases, or five-year rates of overall survival (87 percent in the group given tamoxifen plus irradiation and 86 percent in the tamoxifen group, P=0.94). Assessment by physicians and patients of cosmetic results and adverse events uniformly rated tamoxifen plus irradiation inferior to tamoxifen alone. CONCLUSIONS Lumpectomy plus adjuvant therapy with tamoxifen alone is a realistic choice for the treatment of women 70 years of age or older who have early, estrogen-receptor-positive breast cancer.

Journal ArticleDOI
TL;DR: In this paper, asymptotic properties of the maximum likelihood estimator and the quasi-maximum likelihood estimator for the spatial autoregressive model are investigated, and it is shown that the rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model.
Abstract: This paper investigates asymptotic properties of the maximum likelihood estimator and the quasi-maximum likelihood estimator for the spatial autoregressive model. The rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model. It is important to make the distinction with different spatial scenarios. Under the scenario that each unit will be influenced by only a few neighboring units, the estimators may have √n-rate of convergence and be asymptotically normal. When each unit can be influenced by many neighbors, irregularity of the information matrix may occur and various components of the estimators may have different rates of convergence.
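A minimal sketch of the sparse-neighbor scenario: simulate a spatial autoregressive model y = λWy + Xβ + ε on a ring (each unit influenced by only two neighbors) and recover λ by maximizing a concentrated quasi-log-likelihood over a grid. The weights matrix, sample size, and grid search are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# SAR model y = lam*W y + X b + eps with a row-normalized ring weights matrix:
# the "few neighbors" scenario under which the abstract's sqrt(n) rate applies.
n, lam_true, beta = 400, 0.4, 1.5
W = np.zeros((n, n))
for i in range(n):                      # each unit has two ring neighbors
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = rng.normal(size=(n, 1))
eps = rng.normal(size=n)
A = np.eye(n) - lam_true * W
y = np.linalg.solve(A, X[:, 0] * beta + eps)   # reduced form y = A^{-1}(Xb + eps)

w_eig = np.linalg.eigvalsh(W)           # W symmetric: log|I - lam*W| via eigenvalues

def concentrated_loglik(lam):
    """Log-likelihood with beta and sigma^2 concentrated out, up to constants."""
    Ay = y - lam * (W @ y)
    b_hat = np.linalg.lstsq(X, Ay, rcond=None)[0]
    resid = Ay - X @ b_hat
    sig2 = resid @ resid / n
    return np.sum(np.log(1.0 - lam * w_eig)) - 0.5 * n * np.log(sig2)

grid = np.linspace(-0.9, 0.9, 181)
lam_hat = grid[np.argmax([concentrated_loglik(l) for l in grid])]
print(f"true lambda = {lam_true}, QML estimate = {lam_hat:.2f}")
```

Computing log|I − λW| from the precomputed eigenvalues of W (rather than a determinant per grid point) is the standard trick that keeps each likelihood evaluation cheap.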

Journal ArticleDOI
TL;DR: This paper proves Wagner's conjecture, that for every infinite set of finite graphs, one of its members is isomorphic to a minor of another.