
Showing papers on "Data quality published in 1987"


Journal ArticleDOI
TL;DR: This work demonstrates the great value of retrospective analysis of encountered data, and argues for renewed attention to archival of data sets with documented data quality, intercalibration and documentation of methodologies, and long-term storage of samples for future analysis.
Abstract: Long-term records of biological data are extremely valuable for documenting ecosystem changes, for differentiating natural changes from those caused by humans, and for generating and analyzing testable hypotheses. Long-term sampling, however, is generally discouraged by a variety of institutional disincentives, so that today such records are uncommon. We discuss approaches for overcoming these disincentives through improved research planning and design, including clearer a priori definition of management and regulatory actions and information needs, more rigorous adherence to hypothesis formulation and testing, and proper spatial and temporal scaling in sampling. We distinguish between prospective study design, in which the foregoing elements are essential for cost-effectiveness, and retrospective analysis, which relies on reconstruction of long-term records from existing data sets. We demonstrate the great value of retrospective analysis of encountered data, and argue for renewed attention to archival of data sets with documented data quality, intercalibration and documentation of methodologies, and long-term storage of samples for future analysis. Such practices are essential to ensure the quality of long-term records that are reconstructed for retrospective examination of new hypotheses in the future.

114 citations


Journal ArticleDOI
TL;DR: In conducting data analysis, the investigator must be aware of the potential problems related to the size of the data base, the unit of analysis, and the sampling strategy--particularly if sampling involved stratification or clustering.
Abstract: The growing number of large health data bases available represents a valuable resource for health care research. Many available data bases, however, have subtle and/or complex defects in their design as well as in the quality of the data themselves. The apparent ease and economy of using pre-collected data cannot eliminate the need for careful selection, examination, and analysis of these data. Existing documentation should be critically reviewed to assess the appropriateness of the data base for its intended use. Once in hand, the completeness and coding of the data should be examined in detail before attempting to test hypotheses. In conducting data analysis, the investigator must be aware of the potential problems related to the size of the data base, the unit of analysis, and the sampling strategy--particularly if sampling involved stratification or clustering. Awareness of the potential pitfalls inherent in the use of large health data bases can help prevent many problems and disappointments, as well as improve the validity and efficiency of statistical analysis.

103 citations


Journal ArticleDOI
TL;DR: The paper develops three measurements for data reliability: internal reliability—reflects the “commonly accepted” characteristics of various data items; relative reliability—indicates compliance of data to user requirements; and absolute reliability—determines the level of resemblance of data items to reality.
Abstract: Information is valuable if it derives from reliable data. However, measurements for data reliability have not been widely established in the area of information systems (IS). This paper attempts to draw some concepts of reliability from the field of quality control and to apply them to IS. The paper develops three measurements for data reliability: internal reliability—reflects the “commonly accepted” characteristics of various data items; relative reliability—indicates compliance of data to user requirements; and absolute reliability—determines the level of resemblance of data items to reality. The relationships between the three measurements are discussed, and the results of a field study are displayed and analyzed. The results provide some insightful information on the “shape” of the database that was inspected, as well as on the degree of rationality of some user requirements. General conclusions and avenues for future research are suggested.
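The three reliability measures suggest a straightforward operational reading. Below is a minimal sketch in Python, assuming a small tabular data set, hypothetical validity rules, user requirements, and an audited reference sample; none of these specifics come from the paper.

```python
# Illustrative sketch of the three reliability measures described above.
# The validity rule, required fields, and audited "ground truth" values are
# hypothetical; the paper does not prescribe these particular formulas.

records = [
    {"id": 1, "age": 34, "salary": 52000},
    {"id": 2, "age": -5, "salary": 48000},   # violates a commonly accepted range check
    {"id": 3, "age": 41, "salary": None},    # missing a user-required field
]

def internal_reliability(recs):
    """Share of records satisfying 'commonly accepted' characteristics (range checks)."""
    ok = sum(1 for r in recs if r["age"] is not None and 0 <= r["age"] <= 120)
    return ok / len(recs)

def relative_reliability(recs, required_fields):
    """Share of records complying with user requirements (here: completeness)."""
    ok = sum(1 for r in recs if all(r.get(f) is not None for f in required_fields))
    return ok / len(recs)

def absolute_reliability(recs, audited_ages):
    """Share of audited records whose stored value matches reality (field verification)."""
    hits = sum(1 for r in recs if r["id"] in audited_ages and audited_ages[r["id"]] == r["age"])
    return hits / len(audited_ages)

audited_ages = {1: 34, 2: 25}  # hypothetical field-verified values
print(round(internal_reliability(records), 2))                     # 0.67
print(round(relative_reliability(records, ["age", "salary"]), 2))  # 0.67
print(round(absolute_reliability(records, audited_ages), 2))       # 0.5
```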

87 citations


Journal ArticleDOI
TL;DR: This paper used information from a study of 195 children aged 8-9 and 207 adolescents aged 15-16 to assess and compare the quality of data provided by respondents in the two age groups.
Abstract: Primary school children have sometimes been considered too immature, inarticulate, or shy to serve as useful sources of data on their families. We use information from a study of 195 children aged 8-9 and 207 adolescents aged 15-16 to assess and compare the quality of data provided by respondents in the two age groups. Three sources are used: interviewers' ratings of children's behavior during the interview, the quantity of missing data across interview items, and the percentage of agreements between parents and children on 10 relatively objective family characteristics. These three sources all lead to the conclusion that the quality of data is significantly higher for adolescents than for primary school children. However, the quality of data for primary school children is high in absolute terms. We believe these findings are encouraging for other researchers who wish to interview children about their families.

62 citations


Journal ArticleDOI
TL;DR: This paper examines the impact of deficiencies in data quality on the results generated for spreadsheet applications and describes a framework which can be systematically used to determine the relative importance of potential errors in operational and judgmental data.
Abstract: This paper examines the impact of deficiencies in data quality on the results generated for spreadsheet applications. The purpose is to describe a framework which can be systematically used to determine the relative importance of potential errors in operational and judgmental data. Special emphasis is placed on analyzing the implications of deficiencies in data quality on projected spreadsheet results.
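The framework can be read as a sensitivity analysis: perturb each input by a plausible error and rank inputs by how much the projected result moves. The sketch below uses a hypothetical profit spreadsheet and illustrative error bands; neither comes from the paper.

```python
# Hypothetical spreadsheet model: projected profit from operational and
# judgmental inputs. The +/- error bands are illustrative, not from the paper.

def projected_profit(inputs):
    revenue = inputs["units_sold"] * inputs["unit_price"]
    cost = inputs["units_sold"] * inputs["unit_cost"] + inputs["fixed_cost"]
    return revenue - cost

baseline = {"units_sold": 10_000, "unit_price": 12.0,
            "unit_cost": 7.5, "fixed_cost": 20_000.0}
error_bands = {"units_sold": 0.10,   # judgmental forecast, +/-10%
               "unit_price": 0.02,   # operational data, +/-2%
               "unit_cost": 0.05,
               "fixed_cost": 0.03}

base = projected_profit(baseline)
impact = {}
for name, band in error_bands.items():
    perturbed = dict(baseline)
    perturbed[name] = baseline[name] * (1 + band)
    impact[name] = abs(projected_profit(perturbed) - base)

# Rank inputs by the distortion a plausible data error would cause in the result.
for name, delta in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(f"{name:>12}: {delta:10.2f}")
```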

49 citations


Journal ArticleDOI
TL;DR: A method of reporting water quality and assessing compliance with targets, based on the Biological Monitoring Working Party score system, is proposed that enables biologists to present operations managers, and other professionals, with quality data from any freshwater habitat in the form of a simple index.

48 citations


Journal ArticleDOI
TL;DR: In this paper, a synthesis of current literature and a statistical analysis of existing ground-water quality data were conducted to determine whether groundwater quality variables: (1) are normally distributed, (2) exhibit seasonal patterns and (3) are correlated in time.
Abstract: The emerging problem of contamination of ground-water resources has created a need for information which can be supplied by properly designed ground-water quality monitoring programs. The effective design of monitoring programs and the subsequent utilization of data obtained depends upon an understanding of the general statistical characteristics of ground-water quality variables. In order to provide some background information on these characteristics, a synthesis of current literature and a statistical analysis of existing ground-water quality data were conducted. Specifically, the purpose of the study was to determine whether ground-water quality variables: (1) are normally distributed, (2) exhibit seasonal patterns, and (3) are correlated in time. The results of the investigation suggest that many ground-water quality variables: are not normally distributed, but have skewed right distributions; can exhibit seasonal fluctuations of various shapes and magnitudes, especially in shallow or highly permeable aquifers; and can exhibit significant serial correlation when samples are collected quarterly.
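The three questions correspond to standard statistical checks. A minimal sketch, using a synthetic quarterly concentration series (the variable and the data are hypothetical) and routine SciPy/NumPy calls:

```python
# Checks mirroring the three questions in the study: normality, seasonality,
# and serial correlation. The nitrate series below is synthetic.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
quarters = np.arange(40)                                   # 10 years of quarterly samples
nitrate = np.exp(0.3 * np.sin(2 * np.pi * quarters / 4)    # seasonal component
                 + rng.normal(0, 0.4, size=40))            # skewed-right noise

# (1) Normality: Shapiro-Wilk on raw and log-transformed values.
print("raw  p =", stats.shapiro(nitrate).pvalue)
print("log  p =", stats.shapiro(np.log(nitrate)).pvalue)

# (2) Seasonality: compare levels across the four quarters of the year.
by_season = [nitrate[quarters % 4 == q] for q in range(4)]
print("Kruskal-Wallis p =", stats.kruskal(*by_season).pvalue)

# (3) Serial correlation: lag-1 autocorrelation of consecutive samples.
lag1 = np.corrcoef(nitrate[:-1], nitrate[1:])[0, 1]
print("lag-1 autocorrelation =", round(lag1, 2))
```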

47 citations


01 Apr 1987
TL;DR: In the second phase of an FHWA study, entitled "Pavement Condition Monitoring Methods and Equipment," selected distress survey methods and devices, representing a range in automation, were tested from July to September 1986.
Abstract: This report documents the second phase of an FHWA study, entitled "Pavement Condition Monitoring Methods and Equipment". In this phase selected distress survey methods and devices, representing a range in automation, were tested from July to September 1986. The following methods and devices were included in the testing: manual mapping, detailed visual surveys using manual recording and automatic data logging, PASCO ROADRECON survey vehicle, the GERPHO survey vehicle, the ARAN survey vehicle, and the Laser RST survey vehicle. The field tests were conducted on flexible, rigid, and composite pavements exhibiting a range of pavement distresses. The survey devices were evaluated using performance, capability, efficiency, and cost-effectiveness criteria. The study concluded that, at present, the GERPHO and PASCO ROADRECON are best suited for pavement performance research studies due to factors such as the permanent film record, cost-effectiveness, and data quality. The GERPHO and PASCO ROADRECON are also judged to be suitable for network and project level surveys. The ARAN and Laser RST were recommended for consideration for use in network level surveys for pavement management. It is also recommended that data loggers be used to record field data for traditional manual survey techniques.

26 citations


Journal ArticleDOI
TL;DR: Advances in microcomputers and software have led ARAMIS to begin a migration from mainframe computing to distributed systems using IBM PC/XT/AT type computers and the Medlog software system.
Abstract: The American Rheumatism Association Medical Information System (ARAMIS) is a consortium of North American rheumatic disease data banks. Founded in 1974, it has grown to include more than 16 centers, 22,000 patients, 140,000 patient encounters, and 80,000,000 observations. Traditionally, data storage and computer programs have resided on the IBM 370 system at Stanford University. Distant peripheral centers have entered and retrieved data and performed analyses using proprietary long distance telephone networks. With growth, ARAMIS has placed strong emphasis on data quality and epidemiological soundness. "Core" groups at Stanford specifically address issues of quality control, biostatistics, health care economics, outcome assessment, study design, and administration. Advances in microcomputers and software have led ARAMIS to begin a migration from mainframe computing to distributed systems using IBM PC/XT/AT type computers and the Medlog software system. Substantial cost savings have been noted with distributed processing. The ability to easily transfer data and software forms a groundwork for international data banks and data exchange, but common vocabulary and common quality control procedures are essential for effective international cooperation and exchange.

10 citations


Journal ArticleDOI
01 Dec 1987-Poetics
TL;DR: The usefulness of a number of descriptive and statistical methods for analysis is illustrated on the basis of a sample of investigations current at the department of the sociology of literature at Tilburg University.

Journal ArticleDOI
TL;DR: The WATSTORE (Water Data Storage and Retrieval System) data files of the U.S. Geological Survey, containing raw ground-water quality data from 4,388 wells, and the MSIS (Model State Information System) data files of the U.S. EPA, containing finished (blended and/or treated) ground-water quality data from 2,137 public water-supply sources, are the only two large computer-coded data files on Iowa ground-water quality.
Abstract: Many state and federal agencies routinely collect ground-water quality data to meet a variety of objectives. The WATSTORE (Water Data Storage and Retrieval System) data files of the U.S. Geological Survey containing raw ground-water quality data from 4,388 wells and the MSIS (Model State Information System) data files of the U.S. EPA containing finished (blended and/or treated) ground-water quality data from 2,137 public water-supply sources are the only two large computer-coded data files on Iowa ground-water quality. These two data bases, by themselves, provide significant insights regarding the distribution of chemicals in different hydrogeological environments and the compliance status of PWS sources with reference to drinking-water quality standards in Iowa. With minor modifications, these two data bases can be integrated to: (a) provide statistical, spatial, temporal, and geological summaries of observed ground-water quality; (b) determine the extent of populations exposed to various contaminants in their different concentrations; (c) provide necessary information to reduce the existing list of contaminants and include some of those that are not being monitored now; (d) identify gaps in current data-gathering efforts and prioritize the various components of future monitoring programs; (e) identify potential linkages of the combined use of ground-water quantity and quality, socioeconomic, land-use, and public health data; and (f) alert policy makers to potential ground-water quality problems facing the state. Some of the limitations of the two data bases and the need for integrating them are demonstrated by a comparative analysis and evaluation.

01 Sep 1987
TL;DR: In this paper, the authors present a plan for monitoring the performance of the SSMI, for validating the derived sea ice parameters, and for providing quality data products before distribution to the research community.
Abstract: This document addresses the task of developing and executing a plan for validating the algorithm used for initial processing of sea ice data from the Special Sensor Microwave/Imager (SSMI). The document outlines a plan for monitoring the performance of the SSMI, for validating the derived sea ice parameters, and for providing quality data products before distribution to the research community. Because of recent advances in the application of passive microwave remote sensing to snow cover on land, the validation of snow algorithms is also addressed.

Journal ArticleDOI
TL;DR: Results are shown of the effectiveness of several compression techniques on data from the plasma wave experiments flown on three scientific satellites, showing that an average telemetry bandwidth saving of a factor of two or three can be achieved without serious degradation.
Abstract: Data compression techniques are frequently discussed for data which are in the form of images. Some forms of geophysical data can be considered as matrices, similar to images. Results are shown of the effectiveness of several compression techniques on data from the plasma wave experiments flown on three scientific satellites. The use of actual data shows that an average telemetry bandwidth saving of a factor of two or three can be achieved without serious degradation. This saving can be used to increase the data quality, usually by way of improved resolution (either spectral or spatial), when regions of scientific interest are encountered.
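As a rough illustration of how such a bandwidth saving is quantified, the sketch below losslessly compresses a synthetic, image-like data matrix and reports the compression ratio. The synthetic spectrogram and the use of zlib are assumptions for illustration; they are not the paper's data or techniques.

```python
# Compression-ratio illustration on a synthetic "image-like" data matrix,
# loosely analogous to a plasma wave spectrogram (frequency x time).
# zlib is only a stand-in for the techniques evaluated in the paper.

import numpy as np
import zlib

rng = np.random.default_rng(1)
freq_channels, time_steps = 64, 512
background = rng.poisson(3, size=(freq_channels, time_steps))
background[20:24, 100:300] += 40          # a band of enhanced wave activity

raw = background.astype(np.uint8).tobytes()
packed = zlib.compress(raw, level=9)

ratio = len(raw) / len(packed)
print(f"raw: {len(raw)} bytes, compressed: {len(packed)} bytes, ratio: {ratio:.1f}x")
```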

Journal Article
TL;DR: The development of a computerized system of primary care clinic returns in the Harare City Health Department Zimbabwe is described and it is envisaged that in time the system will be transferred to a microcomputer enabling greater control by the Health Department and more flexibility of operation.
Abstract: The development of a computerized system of primary care clinic returns in the Harare City Health Department, Zimbabwe, is described. The computerization involved establishing the data to be collected, clearly defining the recording and reporting systems, considering the means and method of the data validation system, and developing the necessary organization for the effective implementation of the system. Some concerns often expressed about computerization in developing countries are commented on; it is suggested that the experience described in this article could serve as a base for applications in similar settings. The primary care clinic returns were chosen to be computerized first, since these returns were the most problematic, with a high degree of underreporting on diseases/conditions, consequent poor data quality, and underutilization of the available information. At the same time, the type of reports required and the recording system were relatively simple, and they were representative of the other systems, with the result that the computer programs and organizational changes could be replicated with only minor modifications. It is envisaged that in time the system will be transferred to a microcomputer, enabling greater control by the Health Department and more flexibility of operation.

Patent
17 Apr 1987
TL;DR: A method is proposed to realize highly precise quality evaluation by comparing a model expression generated from the analyzed result of the data with the model expression derived from previously analyzed results, and selecting and generating the optimum model expression.
Abstract: PURPOSE: To realize highly precise quality evaluation by comparing a model expression generated from the analyzed result of the data with the model expression derived from previously analyzed results, and selecting and generating the optimum model expression. CONSTITUTION: The data analysis executing part 103 of a central processing unit 102 analyzes quality data stored in the sample storing part 109 of a memory 106. The data of the analyzed result are extracted by an analyzed result extraction/storage executing part 104, and the extracted data are compared with a reference value stored in an optimum model reference value storing part 108 by an optimum model generating part 105. Data reaching the reference value are compared with the optimum model stored in the same storing part 108, and if the new data are judged to be more effective, the optimum model generating part 105 generates a new optimum model from the new data and stores it in the storing part 108. Because the optimum model can thus always be kept current, high-precision quality evaluation can be performed without a time delay. COPYRIGHT: (C)1988,JPO&Japio

Journal ArticleDOI
TL;DR: Computer automation has expanded the traditional concept of ensuring data quality to include a complex array of interrelated tasks that must be properly managed to achieve the desired results.

01 Jan 1987
TL;DR: A computational structure that captures general aspects of this type of reasoning is hypothesis assembly, in which interacting hypothesis parts are combined into an overall explanatory hypothesis that best accounts for the observed data.
Abstract: The diagnostic task of relating observed product quality data to operating parameters involves mapping from magnitude and directional changes in product quality attributes to explanatory changes in operating parameters. Working knowledge is typically in the form of individual parameter-product quality relationships. Thus, the predominant diagnostic task is one of assembling an overall hypothesis about changes in operating parameters from relationships which offer pieces of explanatory information. This kind of inferencing is referred to generically as Abductive Inference, where, given observed data, an explanation is found which best accounts for the data. A computational structure which captures general aspects of this type of reasoning is hypothesis assembly in which interacting hypothesis parts are combined to form an overall explanatory hypothesis.
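A toy rendering of hypothesis assembly under stated assumptions: the parameter-to-quality relationships below are invented, and the greedy covering strategy is only one simple way to combine partial explanations; the paper describes the general computational structure, not this particular algorithm.

```python
# Toy hypothesis assembly: greedily combine single-parameter explanations
# until the observed product-quality deviations are accounted for.
# The relationships and observations are hypothetical.

# Each operating-parameter change explains a set of quality-attribute deviations.
explains = {
    "oven_temperature_high": {"brittleness_up", "gloss_down"},
    "line_speed_high":       {"thickness_down"},
    "resin_moisture_high":   {"gloss_down", "bubbles_up"},
}

def assemble(observed, relations):
    """Build a composite hypothesis that accounts for as many observations as possible."""
    remaining, hypothesis = set(observed), []
    while remaining:
        # Pick the parameter change that explains the most unexplained findings.
        best = max(relations, key=lambda p: len(relations[p] & remaining))
        gain = relations[best] & remaining
        if not gain:
            break  # some findings cannot be explained by the known relations
        hypothesis.append(best)
        remaining -= gain
    return hypothesis, remaining

hyp, unexplained = assemble({"brittleness_up", "gloss_down", "thickness_down"}, explains)
print(hyp, unexplained)   # ['oven_temperature_high', 'line_speed_high'] set()
```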

Journal ArticleDOI
TL;DR: In the planning process, chemical measurement must be considered as a system consisting of sampling, calibration, analysis, and data processing, and appropriate quality assurance procedures must be applied to ensure statistical control and evaluation of the data.

Journal ArticleDOI
TL;DR: The results of the NASA Landsat Image Data Quality Analysis (LIDQA) program are reviewed in this paper; the results indicate that the TM provides excellent imagery that can be used in the form of satellite image maps meeting cartographic standards at scales of 1:100,000 or smaller.

Journal ArticleDOI
TL;DR: This paper reviews some of the recently developed computer methods for checking data quality, and recommends procedures to check the quality of data coding and key punching operations, which should have a widespread application.
Abstract: This paper reviews some of the recently developed computer methods for checking data quality. The work started in the USA and Canada, but a substantial project is now underway at Newcastle based on these methods. It is commonplace today to process very large quantities of data on computers, not only in censuses and surveys, but also in other applications. The object of this paper is to review computerised methods of ensuring good data quality. The procedures divide into three types. First, there are procedures to check the quality of data coding and key punching operations. Secondly, there are theories of logical and arithmetic edits, which can assist both in detecting erroneous records, and also in locating erroneous fields on these erroneous records. Thirdly, there are statistical checks which can be run on the data once it is input. The simultaneous application of these procedures can achieve very high quality at reasonable costs. A popular (in theory at least) method of controlling data quality is the use of double or triple entry, but in any case this clearly does not identify logical and arithmetic errors already existing on the data sheets. Minton (1969) drew attention to the defects in this type of verification procedure and devised a new approach outlined below, based on sampling of the processed data. Fellegi & Holt (1976) were the first to propose the use of theories of logical edits. Their system facilitates the verification of sets of edits, and enables these edits to be used to help to pinpoint errors. This paper will not be concerned with procedures which are not computer-aided, such as interviewer training systems for feedback of error types to field staff, etc. We shall also ignore aspects of survey design and of questionnaire design, which can be computer aided. Some statisticians feel that errors in data can often be detected at the analysis stage, so that the data entry phase can be rushed through. We feel that this is a mistaken view, and that the procedures outlined below should have a widespread application.
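A minimal sketch of the logical and arithmetic edits idea: each declared edit names the fields it constrains, so a failing edit both flags the record as erroneous and points at candidate error fields. This is a simplified illustration, not the Fellegi & Holt system itself, and the example records and rules are hypothetical.

```python
# Simplified edit checking: each rule names the fields it constrains, so a
# failing rule both rejects the record and points at candidate error fields.
# (The full Fellegi-Holt theory also derives implied edits and chooses a
# minimal set of fields to impute; that machinery is omitted here.)

edits = [
    ("age_range",    ["age"],            lambda r: 0 <= r["age"] <= 120),
    ("minor_status", ["age", "marital"], lambda r: r["age"] >= 16 or r["marital"] == "single"),
    ("income_arith", ["wage", "other", "total_income"],
                     lambda r: r["wage"] + r["other"] == r["total_income"]),
]

def check(record):
    failed = [(name, fields) for name, fields, rule in edits if not rule(record)]
    suspect_fields = sorted({f for _, fields in failed for f in fields})
    return failed, suspect_fields

rec = {"age": 12, "marital": "married", "wage": 300, "other": 50, "total_income": 400}
failures, suspects = check(rec)
print([name for name, _ in failures])   # ['minor_status', 'income_arith']
print(suspects)                         # ['age', 'marital', 'other', 'total_income', 'wage']
```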

Journal ArticleDOI
TL;DR: The U.S. Environmental Protection Agency's National Surface Water Survey used laboratory performance evaluations, pilot studies, and measurements of quality a... as discussed by the authors, to estimate the level of confidence associated with the routine sample measurements from the survey.
Abstract: An effective, economical quality assurance program applied to lake monitoring uses a minimum number of quality assurance samples to provide maximum information about overall data quality. The resulting estimates of data quality can be expressed quantitatively as measures of precision, accuracy, representativeness, and comparability. The completeness of a data set can be assessed during the verification process and by an independent review (data audit). For a particular survey of lakes, the number and the types of quality assurance samples needed to evaluate measurement precision and accuracy can be calculated by identifying the components of potential variability in the data during preliminary field investigations (pilot studies), which can be used to estimate the level of confidence associated with the routine sample measurements from the survey. The U.S. Environmental Protection Agency's National Surface Water Survey used laboratory performance evaluations, pilot studies, and measurements of quality a...
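Precision and accuracy estimates of this kind are often summarized from duplicate pairs and audit (known-value) samples. The sketch below uses hypothetical measurements and standard QA formulas (relative percent difference and percent recovery); it is not taken from the survey's own protocol.

```python
# Common QA summary statistics from duplicate pairs and audit samples.
# The data are hypothetical; the formulas follow widely used QA conventions.

def relative_percent_difference(x1, x2):
    """Precision estimate for a field/lab duplicate pair."""
    return abs(x1 - x2) / ((x1 + x2) / 2) * 100

def percent_recovery(measured, true_value):
    """Accuracy estimate for an audit (known-value) sample."""
    return measured / true_value * 100

duplicate_pairs = [(52.1, 50.8), (7.4, 7.9), (120.0, 118.5)]   # hypothetical duplicates
audit_samples = [(99.0, 100.0), (24.6, 25.0)]                  # (measured, certified value)

precision = [relative_percent_difference(a, b) for a, b in duplicate_pairs]
accuracy = [percent_recovery(m, t) for m, t in audit_samples]

print("median RPD (%):", round(sorted(precision)[len(precision) // 2], 2))
print("mean recovery (%):", round(sum(accuracy) / len(accuracy), 1))
```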

Journal Article
TL;DR: A training course was specifically developed to show how the routinely collected statistics could be of practical use to the nursing staff and to the area health team in monitoring the effectiveness of certain interventions and in the reduction and control of various diseases.
Abstract: With the advent of computerization in the Harare City Health Department, Zimbabwe, it was realized that data quality and utilization for the primary care statistics needed to be improved, and that, using computerization as the cutting edge, all the other systems, though reasonable, could be further improved. Accordingly, a strategy involving training, increased utilization of data, data feedback, and quality control was devised. A training course was specifically developed to show how the routinely collected statistics could be of practical use to the nursing staff and to the area health team in monitoring the effectiveness of certain interventions and in the reduction and control of various diseases. The specific objectives of the course were that at the end of the training participants should: be able to define the concepts of raw data and processed data and describe the uses and limitations of both; be able to define epidemiology and know the epidemiological importance of age, place, and time; and be able to measure the progress of some of the programs they are involved in (such as the Expanded Program on Immunization) as well as the utilization of some services. They should also be able to suggest how preventive/control programs can be developed for various diseases/conditions in their areas.

Patent
03 Sep 1987
TL;DR: A variation-detecting arithmetic means is provided to check whether the quality of a workpiece after maintenance work is maintained at a specified value and to monitor fluctuations in the treatment conditions of the workpiece following maintenance management and maintenance work on manufacturing facilities.
Abstract: PURPOSE: To keep previously set performance for a prolonged term by providing a variation-detecting arithmetic means that checks whether the quality of a workpiece after maintenance work is maintained at a specified value and monitors fluctuations in the treatment conditions of the workpiece following maintenance management and maintenance work on manufacturing facilities. CONSTITUTION: The execution day of the maintenance work to be examined is retrieved from maintenance experience data 105. On the basis of that execution day, information on the treatment time of the lot processed immediately after the maintenance work is retrieved from production experience data 101, and all lot numbers from that lot up to the current time are retrieved in time-series order. The measured values of the treatment results for the workpieces corresponding to these lot numbers are retrieved from quality data 102. Whether the specifications defined by quality specification data 103 are satisfied is judged from this information, and the occurrence of variation is monitored.

Book ChapterDOI
01 Jan 1987
TL;DR: By providing management with real-time access to product quality data, rapid isolation and subsequent rectification of problems occurring on the production line may be achieved.
Abstract: The application of speech technology to automotive quality control is discussed. The use of currently available speech I/O hardware interfaced to low cost microcomputer systems enables the integration of the inspectors’ sensory skills with the data processing capabilities of the computer. The man-machine interface allows the operator to receive inspection prompts from and input responses to a computer based quality system. Information provided to the system concerning the model type under scrutiny allows a model-specific list of checks to be followed. The use of such a system is shown to increase inspector efficiency by avoiding the manual recording of quality information. Furthermore, by providing management with real-time access to product quality data, rapid isolation and subsequent rectification of problems occurring on the production line may be achieved.

Journal ArticleDOI
TL;DR: The Forest Response Program (FRP) as discussed by the authors is a joint research program between the Environmental Protection Agency and the U.S. Forest Service to provide data of known and documented quality using environmental measurement techniques in a state of statistical control.
Abstract: In response to inconclusive scientific evidence indicating that acidic deposition is affecting the growth of forests in the United States, the U.S. Environmental Protection Agency and the U.S. Forest Service have established a joint research program, the Forest Response Program. Quality assurance principles have been implemented within the program to provide data of known and documented quality using environmental measurement techniques in a state of statistical control. This paper describes the Forest Response Program and the innovative approach for assessing data quality in biological research, based on quality control and quality assurance techniques.

Journal ArticleDOI
01 Jan 1987
TL;DR: Data quality, verification, authentication, and inspection continue to be the most important problems in drug development, where data from one country will be used to obtain marketing approval in another.
Abstract: Commercial requirements and the use of indigenous science for advertising purposes indicate that at least some studies will always be done in each country where medications are marketed. However, drug development is a transnational phenomenon where data from one country will be used to obtain marketing approval in another. Yet local regulatory and scientific requirements preclude the use of one set of data for all countries, which increases the cost of an already expensive process. In the US the FDA has historically expressed concerns about accepting foreign data as the sole basis for approval of an NDA. Despite the timolol case, foreign data are still used infrequently and only on an ad hoc basis. Data quality, verification, authentication, and inspection continue to be the most important problems in this area. Standardized protocols, and strict adherence to them; the use of investigators with a proven track record, ability, and appropriate training; frequent monitoring by United States drug spon...

01 Jan 1987
TL;DR: In this article, a combined program of data and report compilation, application of existing hydrologic techniques, and new research initiatives is suggested to reduce the uncertainty associated with water resource estimates in the Himalaya.
Abstract: Water is a crucial resource for development efforts in the Himalayan region. Both large- and small-scale development projects require a variety of hydrologic data for planning and implementation. Absence of long-term records and practical difficulties in maintaining data quality lead to high levels of uncertainty in water resource assessments. Additionally, basic unanswered questions about the hydrology of the hill regions limit the development and application of effective watershed management practices. A combined programme of data and report compilation, application of existing hydrologic techniques, and new research initiatives is suggested to reduce the uncertainty associated with hydrologic estimates in the Himalaya.

Book ChapterDOI
01 Jan 1987
TL;DR: Basic components of a successful clinical trial include a clear, unambiguous protocol which addresses a significant medical question, well-defined conditions for entry of patients on-study, and sample sizes sufficiently large and duration of follow-up adequate to detect treatment effects if they are present.
Abstract: The ultimate objective of any clinical trial is to obtain the correct answer to an important medical question. The science of biostatistics plays an important role in helping to meet this fundamental objective. By properly applying sound statistical principles to clinical research in oncology, one can insure that results of completed studies are valid and convincing to others in the scientific community. To this end, the biomedical literature contains several excellent and thorough discussions regarding the design, execution, and analysis of cancer clinical trials and methods of data acquisition [1–3]. These references draw attention to certain basic components of a successful clinical trial, including: (1) a clear, unambiguous protocol which addresses a significant medical question, (2) well-defined conditions for entry of patients on-study, (3) sample sizes sufficiently large and duration of follow-up adequate to detect treatment effects if they are present, (4) a clear description of treatment regimens and experimental design, (5) explicit definition of endpoints used for efficacy and safety evaluation, (6) patient record forms and data management procedures which enhance data quality, (7) appropriate methodology for data monitoring and statistical analysis to account for incomplete data, and (8) appropriate consideration of ethical issues. While there may be a variety of equally plausible ways to satisfy each of these criteria, their precise specification will depend on the goals of the particular trial, its administrative structure, and the nature of the treatment and disease under study.