
Showing papers in "Statistics in Medicine in 1995"


Journal ArticleDOI
TL;DR: The proposed test can detect clusters of any size, located anywhere in the study region, and is not restricted to clusters that conform to predefined administrative or political borders.
Abstract: We present a new method of detection and inference for spatial clusters of a disease. To avoid ad hoc procedures to test for clustering, we have a clearly defined alternative hypothesis and our test statistic is based on the likelihood ratio. The proposed test can detect clusters of any size, located anywhere in the study region. It is not restricted to clusters that conform to predefined administrative or political borders. The test can be used for spatially aggregated data as well as when exact geographic co-ordinates are known for each individual. We illustrate the method on a data set describing the occurrence of leukaemia in Upstate New York.
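
The abstract describes a likelihood-ratio scan over spatial windows. Below is a minimal, illustrative sketch of that idea for the Bernoulli (case/control) setting, assuming individual point locations and case labels; the window family, model details, and Monte Carlo scheme are simplified relative to the paper, and all data are simulated.

```python
# Simplified circular scan statistic, Bernoulli model: the test statistic is the
# maximum log-likelihood ratio over circles centred at each point, and its
# significance is assessed by Monte Carlo permutation of the case labels.
import numpy as np

def loglik(c, n, C, N):
    """Bernoulli log-likelihood ratio for a window with c cases among n people."""
    if n == 0 or n == N or c / n <= (C - c) / (N - n):
        return 0.0
    def xlogx(a, b):                       # a * log(a / b), with 0 * log(0) = 0
        return a * np.log(a / b) if a > 0 else 0.0
    inside  = xlogx(c, n) + xlogx(n - c, n)
    outside = xlogx(C - c, N - n) + xlogx((N - n) - (C - c), N - n)
    null    = xlogx(C, N) + xlogx(N - C, N)
    return inside + outside - null

def scan_statistic(xy, case, max_frac=0.5):
    N, C = len(case), int(case.sum())
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    best = 0.0
    for i in range(N):                      # circles centred at each point, expanding
        order = np.argsort(d[i])
        cum_c = np.cumsum(case[order])
        for n, c in zip(range(1, N + 1), cum_c):
            if n > max_frac * N:
                break
            best = max(best, loglik(int(c), int(n), C, N))
    return best

rng = np.random.default_rng(0)
xy = rng.uniform(size=(150, 2))
excess = np.linalg.norm(xy - 0.25, axis=1) < 0.15          # a true cluster near (0.25, 0.25)
case = (rng.uniform(size=150) < 0.1 + 0.4 * excess).astype(int)

obs = scan_statistic(xy, case)
null = [scan_statistic(xy, rng.permutation(case)) for _ in range(49)]
p = (1 + sum(t >= obs for t in null)) / 50
print(f"max log-likelihood ratio = {obs:.2f}, Monte Carlo p = {p:.2f}")
```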

1,452 citations


Journal ArticleDOI
TL;DR: Probabilistic linkage technology makes it feasible and efficient to link large public health databases in a statistically justifiable manner by linking highway crashes to Emergency Medical Service reports and to hospital admission records for the National Highway Traffic Safety Administration (NHTSA).
Abstract: Probabilistic linkage technology makes it feasible and efficient to link large public health databases in a statistically justifiable manner. The problem addressed by the methodology is that of matching two files of individual data under conditions of uncertainty. Each field is subject to error which is measured by the probability that the field agrees given a record pair matches (called the m probability) and probabilities of chance agreement of its value states (called the u probability). Fellegi and Sunter pioneered record linkage theory. Advances in methodology include use of an EM algorithm for parameter estimation, optimization of matches by means of a linear sum assignment program, and more recently, a probability model that addresses both m and u probabilities for all value states of a field. This provides a means for obtaining greater precision from non-uniformly distributed fields, without the theoretical complications arising from frequency-based matching alone. The model includes an iterative parameter estimation procedure that is more robust than pre-match estimation techniques. The methodology was originally developed and tested by the author at the U.S. Census Bureau for census undercount estimation. The more recent advances and a new generalized software system were tested and validated by linking highway crashes to Emergency Medical Service (EMS) reports and to hospital admission records for the National Highway Traffic Safety Administration (NHTSA).
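
A minimal sketch of the Fellegi-Sunter weight calculation that underlies the comparison step: each field contributes an agreement or disagreement weight built from its m and u probabilities, and the composite weight is compared to link/non-link thresholds. The m/u values, fields, and records below are illustrative assumptions; the EM estimation and linear sum assignment steps described in the abstract are omitted.

```python
# Fellegi-Sunter composite match weight for one candidate record pair.
import numpy as np

FIELDS = ["surname", "birth_year", "sex", "zip"]
m = {"surname": 0.95, "birth_year": 0.90, "sex": 0.98, "zip": 0.85}  # P(agree | match)
u = {"surname": 0.01, "birth_year": 0.05, "sex": 0.50, "zip": 0.10}  # P(agree | non-match)

def match_weight(rec_a, rec_b):
    """Sum of log2 agreement/disagreement weights over the comparison fields."""
    w = 0.0
    for f in FIELDS:
        if rec_a[f] == rec_b[f]:
            w += np.log2(m[f] / u[f])
        else:
            w += np.log2((1 - m[f]) / (1 - u[f]))
    return w

a = {"surname": "smith", "birth_year": 1954, "sex": "F", "zip": "13210"}
b = {"surname": "smith", "birth_year": 1954, "sex": "F", "zip": "13224"}
print(f"composite weight = {match_weight(a, b):.2f}")
# Pairs above an upper threshold are declared links, below a lower threshold
# non-links; the region in between is held for clerical review.
```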

854 citations


Journal ArticleDOI
TL;DR: The random-effects regression method performs well in the context of a meta-analysis of the efficacy of a vaccine for the prevention of tuberculosis, where certain factors are thought to modify vaccine efficacy.
Abstract: Many meta-analyses use a random-effects model to account for heterogeneity among study results, beyond the variation associated with fixed effects. A random-effects regression approach for the synthesis of 2 x 2 tables allows the inclusion of covariates that may explain heterogeneity. A simulation study found that the random-effects regression method performs well in the context of a meta-analysis of the efficacy of a vaccine for the prevention of tuberculosis, where certain factors are thought to modify vaccine efficacy. A smoothed estimator of the within-study variances produced less bias in the estimated regression coefficients. The method provided very good power for detecting a non-zero intercept term (representing overall treatment efficacy) but low power for detecting a weak covariate in a meta-analysis of 10 studies. We illustrate the model by exploring the relationship between vaccine efficacy and one factor thought to modify efficacy. The model also applies to the meta-analysis of continuous outcomes when covariates are present.
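
A minimal sketch of a random-effects meta-regression of 2 x 2 tables, assuming per-study log odds ratios: the between-study variance is estimated by a DerSimonian-Laird-type method of moments and the regression is then refit by weighted least squares. The data, covariate, and variance estimator are illustrative assumptions, not the paper's smoothed-variance procedure.

```python
import numpy as np

# events / totals in treated and control arms, plus one study-level covariate
et = np.array([4, 11, 7, 10, 15, 8])
nt = np.array([120, 130, 60, 100, 140, 90])
ec = np.array([11, 29, 21, 19, 38, 17])
nc = np.array([120, 130, 60, 100, 140, 90])
x  = np.array([0.2, 0.5, 1.1, 0.8, 1.4, 0.3])      # hypothetical effect modifier

y = np.log((et * (nc - ec)) / (ec * (nt - et)))     # log odds ratio per study
v = 1/et + 1/(nt - et) + 1/ec + 1/(nc - ec)         # its approximate variance

X = np.column_stack([np.ones_like(x), x])
W = np.diag(1 / v)
beta_fe = np.linalg.solve(X.T @ W @ X, X.T @ W @ y) # fixed-effect fit
r = y - X @ beta_fe
Q = float(r @ W @ r)                                # residual heterogeneity statistic
P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
tau2 = max(0.0, (Q - (len(y) - X.shape[1])) / np.trace(P))

Wr = np.diag(1 / (v + tau2))                        # random-effects weights
beta = np.linalg.solve(X.T @ Wr @ X, X.T @ Wr @ y)
se = np.sqrt(np.diag(np.linalg.inv(X.T @ Wr @ X)))
print(f"tau^2 = {tau2:.3f}")
print(f"intercept (overall log OR) = {beta[0]:.3f} (SE {se[0]:.3f})")
print(f"covariate slope            = {beta[1]:.3f} (SE {se[1]:.3f})")
```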

743 citations


Journal ArticleDOI
TL;DR: To compute the sample size needed to achieve the planned power for a t-test, one needs an estimate of the population standard deviation sigma, and Monte Carlo simulations indicate that using a 100(1-gamma) per cent upper one-sided confidence limit on sigma will provide a sample size sufficient to achieve that power.
Abstract: To compute the sample size needed to achieve the planned power for a t-test, one needs an estimate of the population standard deviation sigma. If one uses the sample standard deviation from a small pilot study as an estimate of sigma, it is quite likely that the actual power for the planned study will be less than the planned power. Monte Carlo simulations indicate that using a 100(1-gamma) per cent upper one-sided confidence limit on sigma will provide a sample size sufficient to achieve the planned power in at least 100(1-gamma) per cent of such trials.
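
A minimal sketch of the procedure the abstract recommends: replace the pilot standard deviation with a 100(1-gamma) per cent upper one-sided confidence limit on sigma before computing the sample size. The sample-size formula below is the usual normal-approximation formula for a two-sided, two-sample t-test; the pilot numbers are made up.

```python
import numpy as np
from scipy import stats

def upper_cl_sigma(s_pilot, n_pilot, gamma=0.20):
    """Upper 100(1-gamma)% one-sided confidence limit for sigma from a pilot SD."""
    df = n_pilot - 1
    return s_pilot * np.sqrt(df / stats.chi2.ppf(gamma, df))

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided two-sample t-test."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return int(np.ceil(2 * ((za + zb) * sigma / delta) ** 2))

s_pilot, n_pilot, delta = 10.0, 15, 5.0
print("n using pilot SD directly :", n_per_group(s_pilot, delta))
print("n using 80% upper CL on SD:", n_per_group(upper_cl_sigma(s_pilot, n_pilot), delta))
```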

634 citations


Journal ArticleDOI
TL;DR: It is described how a full Bayesian analysis can deal with unresolved issues, such as the choice between fixed- and random-effects models, the choice of population distribution in a random- effects analysis, the treatment of small studies and extreme results, and incorporation of study-specific covariates.
Abstract: Current methods for meta-analysis still leave a number of unresolved issues, such as the choice between fixed- and random-effects models, the choice of population distribution in a random-effects analysis, the treatment of small studies and extreme results, and incorporation of study-specific covariates. We describe how a full Bayesian analysis can deal with these and other issues in a natural way, illustrated by a recent published example that displays a number of problems. Such analyses are now generally available using the BUGS implementation of Markov chain Monte Carlo numerical integration techniques. Appropriate proper prior distributions are derived, and sensitivity analysis to a variety of prior assumptions carried out. Current methods are briefly summarized and compared to the full Bayes analysis.
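
The paper's analyses use BUGS; as a minimal, self-contained illustration of the same normal-normal hierarchical model, the sketch below runs a hand-rolled Gibbs sampler with proper priors. The study estimates, prior parameters, and run lengths are illustrative assumptions.

```python
# Random-effects meta-analysis: y_i ~ N(theta_i, s_i^2), theta_i ~ N(mu, tau^2),
# with a vague normal prior on mu and a proper inverse-gamma prior on tau^2.
import numpy as np

rng = np.random.default_rng(1)
y = np.array([-0.33, -0.58, -0.15, -0.41, -0.05, -0.62])  # study log odds ratios
s2 = np.array([0.08, 0.12, 0.05, 0.09, 0.15, 0.11])       # their (known) variances
k = len(y)

m0, v0 = 0.0, 10.0 ** 2        # prior for mu
a0, b0 = 0.001, 0.001          # inverse-gamma prior for tau^2

mu, tau2 = 0.0, 0.1
theta = y.copy()
mu_draws, tau_draws = [], []
for it in range(6000):
    prec = 1 / s2 + 1 / tau2                       # update study-specific effects
    theta = rng.normal((y / s2 + mu / tau2) / prec, np.sqrt(1 / prec))
    prec_mu = k / tau2 + 1 / v0                    # update overall mean
    mu = rng.normal((theta.sum() / tau2 + m0 / v0) / prec_mu, np.sqrt(1 / prec_mu))
    tau2 = 1 / rng.gamma(a0 + k / 2,               # update between-study variance
                         1 / (b0 + 0.5 * ((theta - mu) ** 2).sum()))
    if it >= 1000:                                 # discard burn-in
        mu_draws.append(mu)
        tau_draws.append(np.sqrt(tau2))

mu_s = np.array(mu_draws)
print(f"posterior mean log OR {mu_s.mean():.3f} "
      f"(95% CrI {np.percentile(mu_s, 2.5):.3f} to {np.percentile(mu_s, 97.5):.3f})")
print(f"posterior median tau  {np.median(tau_draws):.3f}")
```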

535 citations


Journal ArticleDOI
TL;DR: This work provides an alternative to the maximum likelihood method for making inferences about the parameters of the logistic regression model based on appropriate permutational distributions of sufficient statistics.
Abstract: We provide an alternative to the maximum likelihood method for making inferences about the parameters of the logistic regression model. The method is based on appropriate permutational distributions of sufficient statistics. It is useful for analysing small or unbalanced binary data with covariates. It also applies to small-sample clustered binary data. We illustrate the method by analysing several biomedical data sets.

469 citations


Journal ArticleDOI
TL;DR: A Bayesian model in which both area-specific intercept and trend are modelled as random effects and correlation between them is allowed for is proposed, an extension of that originally proposed for disease mapping.
Abstract: The analysis of variation of risk for a given disease in space and time is a key issue in descriptive epidemiology. When the data are scarce, maximum likelihood estimates of the area-specific risk and of its linear time-trend can be seriously affected by random variation. In this paper, we propose a Bayesian model in which both area-specific intercept and trend are modelled as random effects and correlation between them is allowed for. This model is an extension of that originally proposed for disease mapping. It is illustrated by the analysis of the cumulative prevalence of insulin dependent diabetes mellitus as observed at the military examination of 18-year-old conscripts born in Sardinia during the period 1936-1971. Data concerning the genetic differentiation of the Sardinian population are used to interpret the results.

446 citations


Journal ArticleDOI
TL;DR: Modifications to the Continual Reassessment Method (CRM) are presented, in which one assigns more than one subject at a time to each dose level, and each dose increase is limited to one level, which makes the CRM acceptable to clinical investigators.
Abstract: The Continual Reassessment Method (CRM) is a Bayesian phase I design whose purpose is to estimate the maximum tolerated dose of a drug that will be used in subsequent phase II and III studies. Its acceptance has been hindered by the greater duration of CRM designs compared to standard methods, as well as by concerns with excessive experimentation at high dosage levels, and with more frequent and severe toxicity. This paper presents the results of a simulation study in which one assigns more than one subject at a time to each dose level, and each dose increase is limited to one level. We show that these modifications address all of the most serious criticisms of the CRM, reducing the duration of the trial by 50-67 per cent, reducing toxicity incidence by 20-35 per cent, and lowering toxicity severity. These are achieved with minimal effects on accuracy. Most important, based on our experience at our institution, such modifications make the CRM acceptable to clinical investigators.
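
A minimal sketch of a CRM run with the two modifications described above: cohorts of more than one patient per dose and escalation limited to one level at a time. It uses a one-parameter power model with a normal prior, the posterior computed on a grid; the skeleton, "true" toxicity curve, target rate, and sample size are illustrative assumptions, not the authors' design.

```python
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45, 0.60])   # prior guesses of P(toxicity)
true_p   = np.array([0.02, 0.05, 0.10, 0.22, 0.35, 0.55])   # "truth" for the simulation
target, cohort, n_max = 0.25, 3, 24

grid = np.linspace(-4, 4, 801)                               # grid over model parameter a
prior = np.exp(-0.5 * (grid / 1.34) ** 2)                    # N(0, 1.34^2) prior, unnormalised

def recommend(tox, n_at_dose):
    """Posterior-mean toxicity at each dose, then the dose closest to the target."""
    loglik = np.zeros_like(grid)
    for j, (t, n) in enumerate(zip(tox, n_at_dose)):
        p = skeleton[j] ** np.exp(grid)                      # p_j(a) = skeleton_j^exp(a)
        loglik += t * np.log(p) + (n - t) * np.log(1 - p)
    post = prior * np.exp(loglik - loglik.max())
    post /= post.sum()
    p_hat = np.array([(skeleton[j] ** np.exp(grid) * post).sum()
                      for j in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target)))

rng = np.random.default_rng(2)
tox = np.zeros(len(skeleton))
n_at = np.zeros(len(skeleton))
dose = 0                                                     # start at the lowest level
while n_at.sum() < n_max:
    t = rng.binomial(cohort, true_p[dose])                   # treat a cohort, observe toxicities
    tox[dose] += t
    n_at[dose] += cohort
    dose = min(recommend(tox, n_at), dose + 1)               # never skip a level going up
print("patients per dose  :", n_at.astype(int))
print("toxicities per dose:", tox.astype(int))
print("recommended MTD    : level", recommend(tox, n_at) + 1)
```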

442 citations


Journal ArticleDOI
TL;DR: It is recommended that log transformed analyses should frequently be preferred to untransformed analyses and that careful consideration should be given to use of a log transformation at the protocol design stage.
Abstract: The logarithmic (log) transformation is a simple yet controversial step in the analysis of positive continuous data measured on an interval scale. Situations where a log transformation is indicated will be reviewed. This paper contends that the log transformation should not be classed with other transformations as it has particular advantages. Problems with using the data themselves to decide whether or not to transform will be discussed. It is recommended that log transformed analyses should frequently be preferred to untransformed analyses and that careful consideration should be given to use of a log transformation at the protocol design stage.
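
A minimal sketch of the kind of analysis the abstract argues for: compare two groups on the log scale, then back-transform so the result is a ratio of geometric means with a confidence interval. The data are simulated, positive, and right-skewed; the degrees of freedom are the simple pooled choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.lognormal(mean=1.0, sigma=0.6, size=40)    # e.g. a biomarker, treatment arm
b = rng.lognormal(mean=1.3, sigma=0.6, size=40)    # control arm

la, lb = np.log(a), np.log(b)
diff = la.mean() - lb.mean()                       # difference of means on the log scale
se = np.sqrt(la.var(ddof=1) / len(la) + lb.var(ddof=1) / len(lb))
tcrit = stats.t.ppf(0.975, len(la) + len(lb) - 2)

ratio = np.exp(diff)                               # back-transform: ratio of geometric means
ci = np.exp([diff - tcrit * se, diff + tcrit * se])
print(f"geometric mean ratio = {ratio:.2f}  (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```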

442 citations


Journal ArticleDOI
TL;DR: Meta-analyses using updated individual patient data may provide the most reliable means of combining data from similar randomized controlled trials, and practical advice on initiating and maintaining collaboration and methods of data checking and validation are given.
Abstract: Meta-analyses using updated individual patient data may provide the most reliable means of combining data from similar randomized controlled trials. The benefits of this approach to systematic reviews are described. Guidance, based on the experience of several groups who have undertaken such projects, is given. This includes practical advice on initiating and maintaining collaboration, the time and resources required to undertake these usually international projects and methods of data checking and validation. Example proformas are included.

426 citations


Journal ArticleDOI
TL;DR: Eight graphical methods for detecting violations of the proportional hazards assumption are described and each is demonstrated on three published datasets with a single binary covariate.
Abstract: A major assumption of the Cox proportional hazards model is that the effect of a given covariate does not change over time. If this assumption is violated, the simple Cox model is invalid, and more sophisticated analyses are required. This paper describes eight graphical methods for detecting violations of the proportional hazards assumption and demonstrates each on three published datasets with a single binary covariate. I discuss the relative merits of these methods. Smoothed plots of the scaled Schoenfeld residuals are recommended for assessing PH violations because they provide precise usable information about the time dependence of the covariate effects.
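
A minimal sketch of one of the checks described above, assuming a single binary covariate and no tied event times: Schoenfeld residuals (unscaled, for simplicity) are computed at each event time from a Cox model whose coefficient is estimated by maximising the partial likelihood over a grid. The data are simulated; in practice one would smooth and plot the residuals against time rather than just correlate them.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.integers(0, 2, n)                       # binary covariate (e.g. treatment)
t = rng.exponential(1 / np.exp(0.7 * x))        # true log hazard ratio 0.7
cens = rng.exponential(2.0, n)                  # censoring times
time, event = np.minimum(t, cens), (t <= cens).astype(int)

def partial_loglik(beta):
    ll = 0.0
    for ti in time[event == 1]:
        risk = time >= ti                        # risk set at this event time
        xi = x[(time == ti) & (event == 1)][0]
        ll += beta * xi - np.log(np.sum(np.exp(beta * x[risk])))
    return ll

grid = np.linspace(-2, 2, 401)
beta_hat = grid[np.argmax([partial_loglik(b) for b in grid])]

# Schoenfeld residual at each event time: observed covariate value minus the
# risk-set weighted average of the covariate under the fitted model.
ev_times = np.sort(time[event == 1])
resid = []
for ti in ev_times:
    risk = time >= ti
    w = np.exp(beta_hat * x[risk])
    resid.append(x[(time == ti) & (event == 1)][0] - np.sum(w * x[risk]) / np.sum(w))

print(f"estimated log hazard ratio = {beta_hat:.2f}")
print(f"corr(residual, event time) = {np.corrcoef(ev_times, resid)[0, 1]:.3f}")
# A smoothed plot of (scaled) residuals that drifts away from zero over time
# suggests the covariate effect is time-dependent, i.e. PH is violated.
```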

Journal ArticleDOI
TL;DR: This paper presents an approach to modelling censored survival data using the input-output relationship associated with a simple feed-forward neural network as the basis for a non-linear proportional hazards model.
Abstract: Neural networks have received considerable attention recently, mostly by non-statisticians. They are considered by many to be very promising tools for classification and prediction. In this paper we present an approach to modelling censored survival data using the input-output relationship associated with a simple feed-forward neural network as the basis for a non-linear proportional hazards model. This approach can be extended to other models used with censored survival data. The proportional hazards neural network parameters are estimated using the method of maximum likelihood. These maximum likelihood based models can be compared, using readily available techniques such as the likelihood ratio test and the Akaike criterion. The neural network models are illustrated using data on the survival of men with prostatic carcinoma. A method of interpreting the neural network predictions based on the factorial contrasts is presented.

Journal ArticleDOI
TL;DR: This work proposes an approach based on multiple imputation of the missing responses, using the approximate Bayesian bootstrap to draw ignorable repeated imputations from the posterior predictive distribution of the missing data, stratifying by a balancing score for the observed responses prior to withdrawal.
Abstract: Clinical trials of drug treatments for psychiatric disorders commonly employ the parallel groups, placebo-controlled, repeated measure randomized comparison. When patients stop adhering to their originally assigned treatment, investigators often abandon data collection. Thus, non-adherence produces a monotone pattern of unit-level missing data, disabling the analysis by intent-to-treat. We propose an approach based on multiple imputation of the missing responses, using the approximate Bayesian bootstrap to draw ignorable repeated imputations from the posterior predictive distribution of the missing data, stratifying by a balancing score for the observed responses prior to withdrawal. We apply the method and some variations to data from a large randomized trial of treatments for panic disorder, and compare the results to those obtained by the original analysis that used the standard (endpoint) method.
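
A minimal sketch of multiple imputation by the approximate Bayesian bootstrap within strata, followed by Rubin's combining rules for a simple mean. The stratification variable here is a made-up coarse "balancing score" group and the outcome is a single continuous measurement; the trial analysis in the abstract is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(5)

def abb_impute(observed, n_missing):
    """One ABB draw: bootstrap the observed values, then sample imputations from it."""
    boot = rng.choice(observed, size=len(observed), replace=True)
    return rng.choice(boot, size=n_missing, replace=True)

# toy data: outcome y, stratum (e.g. quantile of a balancing score), missingness
strata = rng.integers(0, 3, 300)
y = rng.normal(loc=strata * 0.5, scale=1.0, size=300)
miss = rng.uniform(size=300) < 0.25
y_obs = np.where(miss, np.nan, y)

M = 20
estimates, variances = [], []
for m in range(M):
    y_imp = y_obs.copy()
    for s in np.unique(strata):                  # impute separately within each stratum
        idx_mis = np.where(miss & (strata == s))[0]
        obs_vals = y_obs[~miss & (strata == s)]
        y_imp[idx_mis] = abb_impute(obs_vals, len(idx_mis))
    estimates.append(y_imp.mean())
    variances.append(y_imp.var(ddof=1) / len(y_imp))

qbar = np.mean(estimates)                        # Rubin's rules for combining
ubar = np.mean(variances)
b = np.var(estimates, ddof=1)
total_var = ubar + (1 + 1 / M) * b
print(f"MI estimate of the mean = {qbar:.3f} (SE {np.sqrt(total_var):.3f})")
```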

Journal ArticleDOI
TL;DR: In this article, the authors investigated the sensitivity of the rate ratio estimates to the choice of the hyperprior distribution of the dispersion parameter via a simulation study and compared the performance of the FB approach to mapping disease risk to the conventional approach of mapping maximum likelihood (ML) estimates and p-values.
Abstract: In the fully Bayesian (FB) approach to disease mapping the choice of the hyperprior distribution of the dispersion parameter is a key issue. In this context we investigated the sensitivity of the rate ratio estimates to the choice of the hyperprior via a simulation study. We also compared the performance of the FB approach to mapping disease risk to the conventional approach of mapping maximum likelihood (ML) estimates and p-values. The study was modelled on the incidence data of insulin dependent diabetes mellitus (IDDM) as observed in the communes of Sardinia.

Journal ArticleDOI
TL;DR: A Bayesian approach for monitoring multiple outcomes in single-arm cancer trials, including bio-chemotherapy acute leukaemia trials, bone marrow transplantation trials, and an anti-infection trial is presented.
Abstract: We present a Bayesian approach for monitoring multiple outcomes in single-arm clinical trials. Each patient's response may include both adverse events and efficacy outcomes, possibly occurring at different study times. We use a Dirichlet-multinomial model to accommodate general discrete multivariate responses. We present Bayesian decision criteria and monitoring boundaries for early termination of studies with unacceptably high rates of adverse outcomes or with low rates of desirable outcomes. Each stopping rule is constructed either to maintain equivalence or to achieve a specified level of improvement of a particular event rate for the experimental treatment, compared with that of standard therapy. We avoid explicit specification of costs and a loss function. We evaluate the joint behaviour of the multiple decision rules using frequentist criteria. One chooses a design by considering several parameterizations under relevant fixed values of the multiple outcome probability vector. Applications include trials where response is the cross-product of multiple simultaneous binary outcomes, and hierarchical structures that reflect successive stages of treatment response, disease progression and survival. We illustrate the approach with a variety of single-arm cancer trials, including bio-chemotherapy acute leukaemia trials, bone marrow transplantation trials, and an anti-infection trial. The number of elementary patient outcomes in each of these trials varies from three to seven, with as many as four monitoring boundaries running simultaneously. We provide general guidelines for eliciting and parameterizing Dirichlet priors and for specifying design parameters.
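
A minimal sketch of the monitoring idea: with a Dirichlet prior on the vector of outcome probabilities, the posterior after observing multinomial counts is again Dirichlet, the marginal for any outcome (or group of outcomes) is Beta, and accrual stops if the posterior probability of an unacceptable rate crosses a cutoff. The prior counts, thresholds, and outcomes below are illustrative assumptions.

```python
from scipy import stats

# elementary outcomes: (complete response, partial response, stable disease, toxic death)
prior = [4.0, 3.0, 2.0, 1.0]          # Dirichlet prior counts (prior mean 0.4/0.3/0.2/0.1)
counts = [5, 4, 3, 4]                 # outcomes observed so far in the single arm

post = [a + n for a, n in zip(prior, counts)]     # Dirichlet posterior parameters
alpha_tox = post[3]                                # Beta marginal for the toxicity component
alpha_rest = sum(post) - alpha_tox

p_tox_high = stats.beta.sf(0.20, alpha_tox, alpha_rest)     # P(toxic-death rate > 0.20 | data)
p_resp_low = stats.beta.cdf(0.40, post[0] + post[1],        # P(response rate < 0.40 | data)
                            post[2] + post[3])

print(f"P(toxicity rate > 0.20 | data) = {p_tox_high:.2f}")
print(f"P(response rate < 0.40 | data) = {p_resp_low:.2f}")
if p_tox_high > 0.90 or p_resp_low > 0.90:
    print("-> monitoring boundary crossed: consider stopping the trial")
else:
    print("-> continue accrual")
```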

Journal ArticleDOI
TL;DR: In order to assess the significance of any local peaks or troughs in the estimated risk surface, pointwise tolerance contours are introduced which can enhance a greyscale image plot of the estimate.
Abstract: We consider the problem of estimating the spatial variation in relative risks of two diseases, say, over a geographical region. Using an underlying Poisson point process model, we approach the problem as one of density ratio estimation implemented with a non-parametric kernel smoothing method. In order to assess the significance of any local peaks or troughs in the estimated risk surface, we introduce pointwise tolerance contours which can enhance a greyscale image plot of the estimate. We also propose a Monte Carlo test of the null hypothesis of constant risk over the whole region, to avoid possible over-interpretation of the estimated risk surface. We illustrate the capabilities of the methodology with two epidemiological examples.
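
A minimal sketch of the density-ratio idea, assuming case and control point locations: the log relative risk surface is estimated as the log ratio of two kernel density estimates on a grid, and the null hypothesis of constant risk is tested by Monte Carlo relabelling. The bandwidths use scipy's defaults, the summary statistic for the test is an assumption, and tolerance contours are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
controls = rng.uniform(0, 1, size=(2, 400))                 # columns are (x, y) points
cases = np.hstack([rng.uniform(0, 1, size=(2, 80)),
                   rng.normal([[0.3], [0.7]], 0.05, size=(2, 40))])   # local excess risk

gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.vstack([gx.ravel(), gy.ravel()])

def log_risk_surface(case_pts, ctrl_pts):
    return np.log(gaussian_kde(case_pts)(grid)) - np.log(gaussian_kde(ctrl_pts)(grid))

def test_stat(case_pts, ctrl_pts):
    r = log_risk_surface(case_pts, ctrl_pts)
    return np.sum((r - r.mean()) ** 2)                       # departure from constant risk

obs = test_stat(cases, controls)
pooled = np.hstack([cases, controls])
n_case = cases.shape[1]
null = []
for _ in range(49):                                          # Monte Carlo relabelling
    idx = rng.permutation(pooled.shape[1])
    null.append(test_stat(pooled[:, idx[:n_case]], pooled[:, idx[n_case:]]))
p = (1 + sum(t >= obs for t in null)) / 50
print(f"Monte Carlo p-value for constant risk: {p:.2f}")
print(f"max log relative risk on the grid    : {log_risk_surface(cases, controls).max():.2f}")
```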

Journal ArticleDOI
TL;DR: Methods for estimating unconditional and conditional reference intervals for foetal size and growth based on longitudinal observations are presented based on simple random-effects regression models and involve transforming both the response and the covariate.
Abstract: Methods for estimating unconditional and conditional reference intervals for foetal size and growth based on longitudinal observations are presented. The methods are based on simple random-effects regression models and involve transforming both the response and the covariate (timepoint). A dataset from a designed longitudinal study of foetal size is analysed in detail as a motivating example.

Journal ArticleDOI
TL;DR: A two-parameter Markov chain model is proposed and developed to explicitly estimate the preclinical incidence rate and the rate of transition from preclinical to clinical state without using control data, and a new estimate of sensitivity is proposed, based on the estimated parameters of the Markov process.
Abstract: The sojourn time, time spent in the preclinical detectable phase (PCDP) for chronic diseases, for example, breast cancer, plays an important role in the design and assessment of screening programmes. Traditional methods to estimate it usually assume a uniform incidence rate of preclinical disease from a randomized control group or historical data. In this paper, a two-parameter Markov chain model is proposed and developed to explicitly estimate the preclinical incidence rate (λ1) and the rate of transition from preclinical to clinical state (λ2, equivalent to the inverse of mean sojourn time) without using control data. A new estimate of sensitivity is proposed, based on the estimated parameters of the Markov process. When this method is applied to the data from the Swedish two-county study of breast cancer screening in the age group 70–74, the estimate of MST is 2·3 with 95 per cent CI ranging from 2·1 to 2·5, which is close to the result based on the traditional method but the 95 per cent CI is narrower using the Markov model. The reason for the greater precision of the latter is the fuller use of all temporal data, since the continuous exact times to events are used in our method instead of grouping them as in the traditional method. Ongoing and future research should extend this model to include, for example, the tumour size, nodal status and malignancy grade, along with methods of simultaneously estimating sensitivity and the transition rates in the Markov process.
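
A minimal sketch of the two-parameter, three-state Markov structure described above (no disease, preclinical detectable, clinical), with illustrative transition rates rather than estimates: λ1 is the preclinical incidence rate and 1/λ2 is the mean sojourn time, and transition probabilities over any interval follow from the matrix exponential of the generator.

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 0.002, 1 / 2.3        # per-year rates; 1/lam2 = mean sojourn time of 2.3 years
Q = np.array([[-lam1,  lam1,  0.0],
              [  0.0, -lam2, lam2],
              [  0.0,   0.0,  0.0]])   # generator of the continuous-time chain

for t in (1.0, 2.0, 5.0):
    P = expm(Q * t)                 # transition probability matrix over t years
    print(f"t = {t:>3} y: P(free -> preclinical) = {P[0, 1]:.4f}, "
          f"P(free -> clinical) = {P[0, 2]:.4f}")

print(f"mean sojourn time = {1 / lam2:.1f} years")
# In the paper the two rates are estimated by maximum likelihood from the exact
# screen-detection and interval-cancer times rather than fixed as here.
```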

Journal ArticleDOI
TL;DR: A simulation study showed that the proposed 'general' test outperformed the average distance method of Whittemore et al. in most of the cluster models considered.
Abstract: This paper proposes a class of tests applicable to the detection of two types of disease clustering: 'focused' and 'general' clustering. The former assesses the clustering of observed cases around a fixed point and the latter does not have any prior information on the centre of clustering. The proposed test for 'general' clustering is a generalization of the index for temporal clustering proposed by Tango in that it adjusts for differences in population densities and also in population distributions among categories of the confounders such as age and sex. A simulation study showed that the proposed 'general' test outperformed the average distance method of Whittemore et al. in most of the cluster models considered.

Journal ArticleDOI
TL;DR: This work compares balanced randomization with four adaptive treatment allocation procedures in a clinical trial involving two treatments and concludes that Randomization is a satisfactory solution to the decision problem when the disease in question is at least moderately common.
Abstract: We compare balanced randomization with four adaptive treatment allocation procedures in a clinical trial involving two treatments. The objective is to treat as many patients in and out of the trial as effectively as possible. Randomization is a satisfactory solution to the decision problem when the disease in question is at least moderately common. Adaptive procedures are more difficult to use, but might play a role in clinical research when a substantial proportion of all patients with the disease are included in the trial.

Journal ArticleDOI
TL;DR: This paper considers an index of hospital quality performance defined as the ratio of the observed number of deaths to the number predicted by a fitted logistic regression model and proposes parametric as well as bootstrap-based confidence intervals.
Abstract: This paper considers an index of hospital quality performance defined as the ratio of the observed number of deaths to the number predicted by a fitted logistic regression model. We study tests and confidence intervals under two different scenarios depending on the availability of an estimate of the covariance matrix of the coefficients from the fitted logistic regression model. We propose parametric as well as bootstrap-based confidence intervals. We apply the methods to an analysis of the performance of 27 intensive care units.
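
A minimal sketch of the observed/expected index: a logistic model for death risk is fitted to pooled reference data, a unit's index is its observed deaths divided by the sum of its predicted probabilities, and a simple nonparametric bootstrap over that unit's patients gives an interval. The data and model are made up, and unlike the paper's parametric intervals this sketch does not propagate uncertainty in the regression coefficients.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson logistic regression (intercept column included in X)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

# pooled reference data: a severity score and the outcome, plus one unit to evaluate
n = 2000
sev = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.2 * sev))))
X = np.column_stack([np.ones(n), sev])
beta = fit_logistic(X, y)

unit = rng.choice(n, 120, replace=False)            # pretend these patients form one ICU
pred = 1 / (1 + np.exp(-X[unit] @ beta))
oe = y[unit].sum() / pred.sum()                     # observed / expected deaths

boots = []
for _ in range(1000):                               # bootstrap the unit's patients
    b = rng.choice(unit, len(unit), replace=True)
    pb = 1 / (1 + np.exp(-X[b] @ beta))
    boots.append(y[b].sum() / pb.sum())
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"O/E index = {oe:.2f} (bootstrap 95% CI {lo:.2f} to {hi:.2f})")
```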

Journal ArticleDOI
TL;DR: This paper investigates the important contribution of multiple public health surveillance systems to policy in chronic disease control and prevention and defines the concept of burden for chronic conditions based on data from multiple sources.
Abstract: In this paper we investigate the important contribution of multiple public health surveillance systems to policy in chronic disease control and prevention. We show that, typically, surveillance for chronic diseases relies on multiple data sources, often created for another purpose. We also define the concept of burden for chronic conditions based on data from multiple sources. An example from a state illustrates a model for combining data for use in policy development. These applications illustrate the central role of statistical methods in ensuring the appropriate use of data from multiple surveillance systems.

Journal ArticleDOI
TL;DR: Two new statistics are derived that adjust Moran's I to study clustering of disease cases in areas (for example, counties) with different, known population densities and consider both spatial pattern and non-binomial variance in rates as evidence supporting disease clusters.
Abstract: I derive two new statistics, Ipop and Ipop*, that adjust Moran's I to study clustering of disease cases in areas (for example, counties) with different, known population densities. A simulation of Lyme disease in Georgia suggests that these new statistics can be more powerful than those currently in use. This is because they consider both spatial pattern and non-binomial variance in rates as evidence supporting disease clusters.
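
For context, a minimal sketch of the unadjusted Moran's I that Ipop and Ipop* build on, computed for area rates with a binary rook-adjacency weight matrix and a permutation reference distribution. The lattice, rates, and weights are illustrative assumptions; the paper's population-density adjustment itself is not reproduced here.

```python
import numpy as np

def morans_i(rates, W):
    """Moran's I for a vector of area rates and a spatial weight matrix W."""
    z = rates - rates.mean()
    return (len(rates) / W.sum()) * (z @ W @ z) / (z @ z)

# toy example: a 4x4 lattice of counties, neighbours share an edge (rook adjacency)
n_side = 4
n = n_side * n_side
W = np.zeros((n, n))
for i in range(n):
    r, c = divmod(i, n_side)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < n_side and 0 <= cc < n_side:
            W[i, rr * n_side + cc] = 1

rng = np.random.default_rng(8)
gradient = np.repeat(np.arange(n_side), n_side) * 0.01      # north-south trend in rates
rates = gradient + rng.normal(0, 0.005, n)

obs = morans_i(rates, W)
null = [morans_i(rng.permutation(rates), W) for _ in range(999)]
p = (1 + sum(v >= obs for v in null)) / 1000
print(f"Moran's I = {obs:.3f}, one-sided permutation p-value = {p:.3f}")
```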

Journal ArticleDOI
TL;DR: The Bayesian decision procedure is described, an application to dose determination in early phase clinical trials is illustrated, and a comparison with the continual reassessment method is made.
Abstract: This paper describes the Bayesian decision procedure and illustrates the methodology through an application to dose determination in early phase clinical trials. The situation considered is quite specific: a fixed number of patients are available, to be treated one at a time, with the choice of dose for any patient requiring knowledge of the responses of all previous patients. A continuous range of possible doses is available. The prior beliefs about the dose-response relationship are of a particular form and the gain from investigation is measured in terms of statistical information gathered. How all of these specifications may be varied is discussed. A comparison with the continual reassessment method is made.

Journal ArticleDOI
TL;DR: By using a model with covariables, the effects of factors that influence the onset, progression, and regression of diabetic retinopathy among subjects with insulin-dependent diabetes mellitus are explored.
Abstract: This paper discusses the application of a multi-state model to diabetic retinopathy under the assumption that a continuous time Markov process determines the transition times between disease stages. The multi-state model consists of three transient states that represent the early stages of retinopathy, and one final absorbing state that represents the irreversible stage of retinopathy. By using a model with covariables, we explore the effects of factors that influence the onset, progression, and regression of diabetic retinopathy among subjects with insulin-dependent diabetes mellitus. We can also introduce time-dependent covariables in the model by assuming that the covariables remain constant between two observations. We can also obtain survival-type curves from each stage of the disease and for any combination of patient risk factors.

Journal ArticleDOI
TL;DR: Simulations showed the extended CRM to be superior by making it possible to investigate a greater range of doses using fewer patients, and to provide more accurate estimates.
Abstract: In a phase I clinical trial in cancer patients, the drug involved had one known main adverse effect, which also occurs spontaneously in cancer patients with a fairly high frequency. Experiments in rats have shown marked effects of the drug on tumour growth in high doses, but also dose-dependent toxicity. Consequently, the aim of the study was to determine a dose with a prespecified, acceptable rate of toxicity. As a traditional design could result in inaccurate conclusions, use of the continual reassessment method (CRM) was considered. Twelve dose levels were chosen, allocating to the first patient the lowest, but safe, dose. It is likely that the target dose is far above that, and that the CRM would then escalate too fast, skipping certain levels. To ensure that all dose levels inferior to the target dose were tried, some combined methods were proposed: (1) an extension of the design, combining the CRM with a preliminary up-and-down design in order to reach the neighbourhood of the target dose during a successive escalation, and (2) a restriction on the CRM of never escalating more than a single dose level. Simulations showed the extended CRM to be superior by making it possible to investigate a greater range of doses using fewer patients, and to provide more accurate estimates.

Journal ArticleDOI
TL;DR: Recommendations are proposed for the construction and the presentation of CMS, to help authors and investigators to report and choose, respectively, measurement instruments for a complex phenomenon.
Abstract: Composite measurement scales (CMS) are increasingly used in medicine to measure complex phenomena or concepts such as disease risk and severity, physical and psychological functioning and quality of life. To investigate the methodology currently used in the construction of CMS, we examined 46 studies recently published in six major medical and epidemiological journals. Important measurement properties such as measurement level, content and construct validity and reliability are often neglected. Statistical methods, particularly multivariate methods are frequently misused; verifications of model relevance and assumptions, and cross-validations to avoid overfitting are seldom performed. We propose recommendations for the construction and the presentation of CMS, to help authors and investigators to report and choose, respectively, measurement instruments for a complex phenomenon.

Journal ArticleDOI
TL;DR: Comparisons between hospital data and medical charts for acute myocardial infarction and chronic airways obstruction patients showed excellent diagnostic agreement, and contextual information related to the hospitalizations was clinically and epidemiologically realistic.
Abstract: The internal validity of the recording of information about ischaemic heart disease (IHD) and chronic obstructive pulmonary disease (COPD) in the administrative health care datafiles of the Canadian province of Saskatchewan is investigated. Comparisons between hospital data and medical charts for acute myocardial infarction and chronic airways obstruction patients showed excellent diagnostic agreement: 97 per cent and 94 per cent, respectively. Appropriate physician service claims were identified for 89 per cent of hospitalizations for IHD and COPD and exact concordance between diagnoses in the two datafiles varied between 15 per cent for acute/sub-acute IHD and 80 per cent for asthma; including any physician diagnosis within the same broad category (IHD or COPD) increased concordance to 79-94 per cent for IHD and 64-88 per cent for COPD. Contextual information related to the hospitalizations was clinically and epidemiologically realistic.

Journal ArticleDOI
TL;DR: Computer simulation is used to examine the performance of an approach in which the authors matched communities but performed an unmatched analysis, and a variant of this procedure is discussed, in which an unmatched analysis is performed only if the matching 'did not work'.
Abstract: There is considerable interest in community interventions for health promotion, where the community is the experimental unit. Because such interventions are expensive, the number of experimental units (communities) is usually small. Because of the small number of communities involved, investigators often match treatment and control communities on demographic variables before randomization to minimize the possibility of a bad split. Unfortunately, matching has been shown to decrease the power of the design when the number of pairs is small, unless the matching variable is very highly correlated with the outcome variable (in this case, with change in the health behaviour). We used computer simulation to examine the performance of an approach in which we matched communities but performed an unmatched analysis. If the appropriate matching variables are unknown, and there are fewer than ten pairs, an unmatched design and analysis has the most power. If, however, one prefers a matched design, then for N < 10, power can be increased by performing an unmatched analysis of the matched data. We also discuss a variant of this procedure, in which an unmatched analysis is performed only if the matching 'did not work'.
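
A minimal sketch of the simulation question in the abstract: with few community pairs, compare the power of a paired (matched) analysis with an unmatched two-sample analysis on the same matched data, as the correlation induced by the matching variable varies. All parameters, effect sizes, and simulation settings are made up.

```python
import numpy as np
from scipy import stats

def power(n_pairs, effect, rho, n_sim=2000, alpha=0.05, rng=None):
    """Estimated power of matched vs unmatched analyses of matched-pair data."""
    if rng is None:
        rng = np.random.default_rng(9)
    hits_matched = hits_unmatched = 0
    for _ in range(n_sim):
        pair_effect = rng.normal(0, np.sqrt(rho), n_pairs)         # shared by both communities
        ctrl = pair_effect + rng.normal(0, np.sqrt(1 - rho), n_pairs)
        trt = pair_effect + rng.normal(effect, np.sqrt(1 - rho), n_pairs)
        hits_matched += stats.ttest_rel(trt, ctrl).pvalue < alpha   # paired analysis
        hits_unmatched += stats.ttest_ind(trt, ctrl).pvalue < alpha # unmatched analysis
    return hits_matched / n_sim, hits_unmatched / n_sim

for rho in (0.0, 0.3, 0.7):
    pm, pu = power(n_pairs=6, effect=0.8, rho=rho)
    print(f"rho = {rho:.1f}: power matched = {pm:.2f}, unmatched = {pu:.2f}")
# With weakly predictive matching (small rho) and few pairs, the unmatched
# analysis tends to have higher power, which is the pattern the abstract describes.
```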

Journal ArticleDOI
TL;DR: This paper reviews some of the main approaches to the analysis of multivariate censored survival data and finds that the mixture methods are surprisingly robust to misspecification of the frailty distribution.
Abstract: This paper reviews some of the main approaches to the analysis of multivariate censored survival data. Such data typically have correlated failure times. The correlation can be a consequence of the observational design, for example with clustered sampling and matching, or it can be a focus of interest as in genetic studies, longitudinal studies of recurrent events and other studies involving multiple measurements. We assume that the correlation between the failure or survival times can be accounted for by fixed or random frailty effects. We then compare the performance of conditional and mixture likelihood approaches to estimating models with these frailty effects in censored bivariate survival data. We find that the mixture methods are surprisingly robust to misspecification of the frailty distribution. The paper also contains an illustrative example on the times to onset of chest pain brought on by three endurance exercise tests during a drug treatment trial of heart patients.