
Showing papers on "False positive paradox" published in 1996


01 Jan 1996
TL;DR: Adleman has solved the Hamiltonian path problem by encoding the vertices and edges of the Hamiltonian graph in oligonucleotides of DNA, hybridizing the oligonucleotides to produce potential answers, and extracting the DNA which corresponds to the Hamiltonian path.
Abstract: Adleman has solved the Hamiltonian path problem by encoding the vertices and edges of the Hamiltonian graph in oligonucleotides of DNA, hybridizing the oligonucleotides to produce potential answers, and extracting the DNA which corresponds to the Hamiltonian path. Depending on the conditions under which the DNA reactions occur, false positives (wrong solutions to the Hamiltonian path problem which appear correct) are possible. This possibility was verified by experiment. The primary mechanism for the production of false positives is hybridization stringency, which depends on the reaction conditions, of which the most important is temperature. Depending on the temperature, two oligonucleotides can hybridize without exact matching between their base pairs. For DNA-based solutions to combinatorial problems to become a viable and practical technology, the possibility of false positives must be eliminated. This can be accomplished by encoding the vertices and edges of the Hamiltonian graph in DNA oligonucleotides, or codewords, that are separated from each other by a minimum distance which depends on temperature. This reliable encoding eliminated the risk of a false positive, which was supported by an experimental trial. The encoding was produced by a genetic algorithm search of the space of possible codewords. The Hamming bound was shown to be an upper bound on the number of vertices that could be encoded in DNA without introducing the possibility of false Hamiltonian paths.

166 citations
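A minimal sketch (not the authors' genetic-algorithm encoder) of the two ideas in this abstract: checking that a set of DNA codewords keeps a minimum pairwise Hamming distance, and the sphere-packing (Hamming) bound that limits how many such codewords can exist. The 8-letter example codewords are made up for illustration.

```python
from itertools import combinations
from math import comb

def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length DNA codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def min_pairwise_distance(codewords):
    """Smallest Hamming distance over all pairs of codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

def hamming_bound(n: int, d: int, q: int = 4) -> int:
    """Sphere-packing (Hamming) bound on the number of length-n codewords
    over a q-letter alphabet with minimum distance d."""
    t = (d - 1) // 2  # number of tolerated mismatches per codeword
    volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
    return q ** n // volume

# Illustrative 8-mers; a real encoding would search (e.g. with a genetic
# algorithm, as in the paper) over much longer oligonucleotides.
codewords = ["ACGTACGT", "TTGCAACG", "CGATCGTA"]
print(min_pairwise_distance(codewords))
print(hamming_bound(n=8, d=5))
```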


Journal ArticleDOI
TL;DR: In this paper, the authors presented segmentation and classification results of an automated algorithm for the detection of breast masses on digitized mammograms, where potential mass regions were first identified using density-weighted contrast enhancement (DWCE) segmentation applied to single-view mammograms.
Abstract: This paper presents segmentation and classification results of an automated algorithm for the detection of breast masses on digitized mammograms. Potential mass regions were first identified using density-weighted contrast enhancement (DWCE) segmentation applied to single-view mammograms. Once the potential mass regions had been identified, multiresolution texture features extracted from wavelet coefficients were calculated, and linear discriminant analysis (LDA) was used to classify the regions as breast masses or normal tissue. In this article the overall detection results for two independent sets of 84 mammograms used alternately for training and test were evaluated by free-response receiver operating characteristic (FROC) analysis. The test results indicate that this new algorithm produced approximately 4.4 false positives per image at a true positive detection rate of 90% and 2.3 false positives per image at a true positive rate of 80%.

149 citations
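A hedged sketch of how an FROC operating point such as those quoted above (false positives per image at a given true positive rate) can be read off from per-region detection scores. The data layout and threshold sweep are illustrative and are not the authors' DWCE/LDA pipeline.

```python
import numpy as np

def froc_points(images, thresholds):
    """images: list of dicts with 'scores' (one score per detected region),
    'is_mass' (bool per detection: does it hit a true mass?) and 'n_masses'.
    Returns (false positives per image, true positive rate), one point per threshold."""
    n_images = len(images)
    total_masses = sum(im["n_masses"] for im in images)
    fp_per_image, tp_rate = [], []
    for t in thresholds:
        fps = 0
        hit_masses = 0
        for im in images:
            keep = [s >= t for s in im["scores"]]
            fps += sum(k and not m for k, m in zip(keep, im["is_mass"]))
            # count at most n_masses true detections per image
            hit_masses += min(sum(k and m for k, m in zip(keep, im["is_mass"])),
                              im["n_masses"])
        fp_per_image.append(fps / n_images)
        tp_rate.append(hit_masses / total_masses)
    return np.array(fp_per_image), np.array(tp_rate)

# Toy example: two images, each with one true mass and a few detections.
imgs = [
    {"scores": [0.9, 0.4, 0.2], "is_mass": [True, False, False], "n_masses": 1},
    {"scores": [0.7, 0.6],      "is_mass": [False, True],        "n_masses": 1},
]
print(froc_points(imgs, thresholds=[0.3, 0.5, 0.8]))
```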


Journal ArticleDOI
TL;DR: In this article, the authors evaluated the use of appropriateness measurement for identifying dishonest respondents on personality tests and found that the item response theory approach classified a higher number of faking respondents at low rates of misclassification of honest respondents than did a social desirability scale.
Abstract: Research has demonstrated that people can and often do consciously manipulate scores on personality tests. Test constructors have responded by using social desirability and lying scales in order to identify dishonest respondents. Unfortunately, these approaches have had limited success. This study evaluated the use of appropriateness measurement for identifying dishonest respondents. A dataset was analyzed in which respondents were instructed either to answer honestly or to fake good. The item response theory approach classified a higher number of faking respondents at low rates of misclassification of honest respondents (false positives) than did a social desirability scale. At higher false positive rates, the social desirability approach did slightly better. Implications for operational testing and suggestions for further research are provided.

148 citations
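One standard appropriateness index is the standardized log-likelihood person-fit statistic lz; the sketch below computes it under a two-parameter logistic IRT model with made-up item parameters, only to illustrate the kind of measurement the study evaluated. Respondents whose lz falls below a cutoff are flagged as possible fakers, and honest respondents flagged this way are the false positives discussed above.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a keyed response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def lz_statistic(responses, theta, a, b):
    """Standardized log-likelihood appropriateness index. Large negative
    values flag response patterns that fit the model poorly (possible faking)."""
    p = p_correct(theta, a, b)
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)

# Illustrative item parameters and one response pattern (1 = keyed response).
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])
resp = np.array([1, 1, 0, 1, 0])
print(lz_statistic(resp, theta=0.3, a=a, b=b))
```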


Proceedings ArticleDOI
TL;DR: Several methods of automatic video segmentation for the identification of shot transitions have been proposed but never systematically compared; this paper compares several techniques across different types of video in terms of the percentage of correct and false identifications.
Abstract: While several methods of automatic video segmentation for the identification of shot transitions have been proposed, they have not been systematically compared. We examine several segmentation techniques across different types of videos. Each of these techniques defines a measure of dissimilarity between successive frames which is then compared to a threshold. Dissimilarity values exceeding the threshold identify shot transitions. The techniques are compared in terms of the percentage of correct and false identifications for various thresholds, their sensitivity to the threshold value, their performance across different types of video, their ability to identify complicated transition effects, and their requirements for computational resources. Finally, the definition of an a priori set of values for the threshold parameter is also examined. Most techniques can identify over 90% of the real shot transitions but have a high percentage of false positives. Reducing the false positives was a major challenge, and we introduced a local filtering technique that was fairly effective.

103 citations
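A minimal sketch of the common scheme described above: a dissimilarity measure between successive frames (here a normalized grey-level histogram difference, one of the classic choices) compared against a threshold. The frame data and threshold value are illustrative only.

```python
import numpy as np

def histogram_dissimilarity(frame_a, frame_b, bins=64):
    """L1 distance between normalized grey-level histograms of two frames."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return np.abs(ha - hb).sum()

def detect_transitions(frames, threshold):
    """Flag a shot transition wherever successive-frame dissimilarity exceeds
    the threshold. Returns the frame indices of detected cuts."""
    cuts = []
    for i in range(1, len(frames)):
        if histogram_dissimilarity(frames[i - 1], frames[i]) > threshold:
            cuts.append(i)
    return cuts

# Toy example: 20 random "frames" with an artificial cut at index 10.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 120, (64, 64)) for _ in range(10)]
frames += [rng.integers(120, 255, (64, 64)) for _ in range(10)]
print(detect_transitions(frames, threshold=0.5))
```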


Proceedings Article
01 Jan 1996
TL;DR: An automated procedure for extracting drug-dosage information from clinical narratives is developed and evaluated, with an approximately 80% rate of exact and partial matches on target phrases, few false positives, and a modest rate of false negatives.
Abstract: We discuss the development and evaluation of an automated procedure for extracting drug-dosage information from clinical narratives. The process was developed rapidly using existing technology and resources, including categories of terms from UMLS96. Evaluations over a large training set and a smaller test set of medical records demonstrate an approximately 80% rate of exact and partial matches on target phrases, with few false positives and a modest rate of false negatives. The results suggest a strategy for automating general concept identification in electronic medical records.

57 citations
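A toy regex-based sketch of drug-dosage phrase spotting of the kind described; the drug lexicon, units, and frequency patterns are illustrative stand-ins, not the UMLS96-derived categories the authors used.

```python
import re

# Tiny illustrative lexicon; the authors drew term categories from UMLS96.
DRUGS = ["aspirin", "metoprolol", "lisinopril", "warfarin"]
UNITS = r"(?:mg|mcg|g|units?)"
FREQ = r"(?:once|twice|three times)\s+(?:a|per)\s+day|daily|bid|tid"

PATTERN = re.compile(
    rf"(?P<drug>{'|'.join(DRUGS)})\s+(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>{UNITS})"
    rf"(?:\s+(?P<freq>{FREQ}))?",
    re.IGNORECASE,
)

def extract_dosages(narrative: str):
    """Return (drug, dose, unit, frequency) tuples found in a clinical note."""
    return [(m["drug"], m["dose"], m["unit"], m["freq"])
            for m in PATTERN.finditer(narrative)]

note = "Patient was started on metoprolol 50 mg twice a day and aspirin 81 mg daily."
print(extract_dosages(note))
```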


Book ChapterDOI
01 Jan 1996
TL;DR: Two probabilistic methods — bitstate hashing and hash compaction — have been proposed in the literature that store much fewer bits for each state but come at the price of some probability that not all reachable states will be explored during the search, and that the verifier may thus produce false positives.
Abstract: In verification by explicit state enumeration, for each reachable state of the protocol being verified the full state descriptor is stored in a state table. Two probabilistic methods — bitstate hashing and hash compaction — have been proposed in the literature that store much fewer bits for each state but come at the price of some probability that not all reachable states will be explored during the search, and that the verifier may thus produce false positives. Holzmann introduced bitstate hashing and derived an approximation formula for the average probability that a particular state is not omitted during the search, but this formula does not give a bound on the probability of false positives. In contrast, the analysis for hash compaction, introduced by Wolper and Leroy and improved upon by Stern and Dill, yielded a bound on the probability that not even one state is omitted during the search, thus providing a bound on the probability of false positives.

51 citations
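A minimal sketch of bitstate hashing, assuming a CRC32 hash and a 2^24-bit table; hash compaction differs in storing a compacted hash signature per state rather than a single bit. Neither detail is taken from the chapter beyond the general scheme it analyzes.

```python
import zlib

class BitstateTable:
    """Bitstate ('supertrace') hashing sketch: each reachable state maps to a
    single bit in a large bit array. A hash collision makes a new state look
    already visited, so part of the state space may silently be skipped; this
    is the source of the omission probability analyzed in the chapter."""

    def __init__(self, bits_log2: int = 24):
        self.size = 1 << bits_log2
        self.table = bytearray(self.size // 8)

    def _index(self, state: bytes) -> int:
        return zlib.crc32(state) % self.size

    def visit(self, state: bytes) -> bool:
        """Return True if the state looks new, and mark it visited."""
        i = self._index(state)
        byte, mask = i // 8, 1 << (i % 8)
        if self.table[byte] & mask:
            return False  # seen before, or a collision (possible omission)
        self.table[byte] |= mask
        return True

# Illustrative use inside an explicit-state search over protocol states.
table = BitstateTable()
print(table.visit(b"state-0"))   # True: explored
print(table.visit(b"state-0"))   # False: already marked
```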


Journal ArticleDOI
TL;DR: Two studies are presented that focus on subjects’ responses to erroneous feedback in a hypothesis-testing situation, a variant of Wason’s (1960) 2–4–6 rule discovery task in which some feedback was subject to system error: “hits” were reported as “misses” and vice versa.
Abstract: When evaluating experimental evidence, how do people deal with the possibility that some of the feedback is erroneous? The potential for error means that evidence evaluation must include decisions about when to "trust the data." In this paper we present two studies that focus on subjects' responses to erroneous feedback in a hypothesis testing situation-a variant of Wason's (1960) 2-4-6 rule discovery task in which some feedback was subject to system error: "hits" were reported as "misses" and vice versa. Our results show that, in contrast to previous research, people are equally adept at identifying false negatives and false positives; further, successful subjects were less likely to use a positive test strategy (Klayman & Ha, 1987) than were unsuccessful subjects. Finally, although others have found that generating possible hypotheses prior to experimentation increases success and task efficiency, such a manipulation did little to mitigate the effects of system error.

35 citations
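A small illustrative simulation of the system-error manipulation described above, assuming a stand-in rule of "strictly increasing triples"; the error rate and rule are placeholders, not the study's materials.

```python
import random

def secret_rule(triple):
    """Stand-in for the hidden 2-4-6 rule (here: strictly increasing numbers)."""
    a, b, c = triple
    return a < b < c

def noisy_feedback(triple, error_rate=0.2, rng=random.Random(1)):
    """Report whether the triple fits the rule, but flip the answer with the
    given probability, so 'hits' become 'misses' and vice versa."""
    truth = secret_rule(triple)
    return (not truth) if rng.random() < error_rate else truth

for t in [(2, 4, 6), (6, 4, 2), (1, 2, 3), (5, 5, 5)]:
    print(t, noisy_feedback(t))
```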


Journal ArticleDOI
TL;DR: It is demonstrated that in practice three‐dimensional cluster analysis has a reasonable balance between sensitivity and the probability of false positives, giving high reproducibility with data on e.g. colour discrimination.
Abstract: We contrast two statistical methods: three-dimensional cluster analysis and statistical parametric mapping. We show that three-dimensional cluster analysis is based on a neurobiological theory of the regulation of blood flow and, unlike statistical parametric mapping, carries a minimum of assumptions that are tested. Statistical parametric mapping is a formal approach, which is based on a multitude of assumptions of which the majority have not been validated. We also demonstrate that in practice three-dimensional cluster analysis has a reasonable balance between sensitivity and the probability of false positives, giving high reproducibility with data on e.g. colour discrimination.

12 citations



Journal ArticleDOI
TL;DR: Automated ST segment analysis with the Marquette® Series 7000 monitoring system demonstrates good diagnostic accuracy, moderate sensitivity, and high specificity, however, clinically significant false negative and false positive rates of ischaemia detection are associated with its use, especially in the postoperative period.
Abstract: A paucity of information exists to validate the accuracy and reliability of ECG monitoring in the operating room or ICU. The purpose of this study was to determine the accuracy, sensitivity, specificity, and predictive values of the Marquette ECG monitor for detection of perioperative myocardial ischaemia (PMI) as measured by ST segment changes in a high risk population. Monitoring for PMI in 28 patients scheduled for aortocoronary bypass surgery was done with the Cardiodata PR® ambulatory continuous electrocardiography (ACECG) monitor lead V5, and compared with lead V5 of the Marquette® Series 7000 ECG/Surgical operating room monitor and ECG/Resp ICU monitor. The Marquette lead V5 was evaluated using current criteria for the assessment of diagnostic tests, including concordance, sensitivity, specificity, positive and negative predictive values, and false positive and false negative rates, and compared with the ACECG monitor, which served as the reference or "gold standard." Agreement beyond chance between the two methods was assessed using the Kappa statistic. Of the 53 observation data points, 27 were defined as ischaemic episodes by ACECG. Concordance between lead V5 in each system was 83% (44/53 episodes). Discordance was 17% (9/53 episodes), predominantly in the postbypass interval (77%, 7/9; P = 0.0184). The incidences of false negatives and false positives for Marquette lead V5 were 26% (7/27) and 7.7% (2/26), respectively. The sensitivity and specificity of the Marquette were 0.74 and 0.92, respectively. Positive predictive value was 0.91, negative predictive value was 0.77, and the Kappa statistic was 66%. Automated ST segment analysis with the Marquette® Series 7000 monitoring system demonstrates good diagnostic accuracy, moderate sensitivity, and high specificity. However, clinically significant false negative and false positive rates of ischaemia detection are associated with its use, especially in the postoperative period.

9 citations
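The summary statistics quoted above can be reproduced from a 2x2 agreement table. The counts used below (TP = 20, FP = 2, FN = 7, TN = 24) are inferred from the ratios reported in the abstract and serve only to show how sensitivity, specificity, predictive values, concordance, and Kappa are derived.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test summaries against a reference standard."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                 # true positive rate
    spec = tn / (tn + fp)                 # true negative rate
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    concordance = (tp + tn) / n
    # Cohen's Kappa: agreement beyond chance
    p_yes = ((tp + fn) / n) * ((tp + fp) / n)
    p_no = ((fp + tn) / n) * ((fn + tn) / n)
    pe = p_yes + p_no
    kappa = (concordance - pe) / (1 - pe)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                concordance=concordance, kappa=kappa)

# Counts inferred from the abstract (27 ischaemic episodes, 44/53 concordant,
# 7 false negatives, 2 false positives).
print(diagnostic_metrics(tp=20, fp=2, fn=7, tn=24))
```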


Journal ArticleDOI
TL;DR: Clinical evaluations of mammograms indicate the potential of using this clustering algorithm as an effective tool to bring microcalcification areas to the attention of the radiologist during a routine reading session of mammograms.

Proceedings ArticleDOI
16 Apr 1996
TL;DR: Instead of superimposing detected pixels or arrows on the mammogram, this paper adaptively enhances the most suspicious regions according to the weight indicated by the test statistic at the detector output, so that CAD false positives promise to be less obtrusive to the viewer.
Abstract: Computer-aided diagnosis techniques have been proposed as second opinion providers in digital mammography. This paper considers a new method of presenting CAD output to the radiologist. Instead of superimposing detected pixels or arrows on the mammogram, we adaptively enhance the most suspicious regions according to the weight indicated by the test statistic at the detector output. In so doing, CAD false positives promise to be less obtrusive to the viewer, and lesions missed by CAD (false negatives) may still be detected by the radiologist. In our method the entire mammogram is enhanced to some (spatially varying) degree. Enhancement is realized by applying nonlinear operators to wavelet coefficients computed at multiple scales. We combine this technique with the results of our previous wavelets-based CAD algorithm for detecting microcalcifications.
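A much simplified, single-scale analogue of the presentation idea: boost local image detail in proportion to a per-pixel suspicion weight instead of marking detections. The paper works on wavelet coefficients at multiple scales; the box-blur high-pass below is only a stand-in.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur used to split the image into low- and high-frequency
    parts (a crude stand-in for a multiscale wavelet decomposition)."""
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def adaptive_enhance(img, suspicion, max_gain=2.0):
    """Amplify local detail in proportion to a per-pixel suspicion map in [0, 1]
    (e.g. a CAD detector's test statistic), leaving unsuspicious areas intact."""
    low = box_blur(img)
    detail = img - low
    gain = 1.0 + (max_gain - 1.0) * np.clip(suspicion, 0.0, 1.0)
    return low + gain * detail

rng = np.random.default_rng(0)
mammogram = rng.normal(100, 10, (128, 128))
suspicion = np.zeros((128, 128))
suspicion[40:60, 40:60] = 0.9        # region the detector found suspicious
enhanced = adaptive_enhance(mammogram, suspicion)
print(enhanced.shape)
```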

Journal Article
TL;DR: A valid program of follow-up has always been a crucial point in the overall therapy of colon cancer; in this retrospective study of 74 patients observed between 1987 and 1992, the authors compare the various follow-up methods.
Abstract: A valid program of follow-up has always been a crucial point in the overall therapy of colon cancer. In this retrospective study, the authors analyzed 74 patients placed under observation between 1987 and 1992. The patients were followed throughout the diagnostic period with various methods. The authors, who present their reference protocol, set out to compare the various follow-up methods in order to assess their reliability, their specificity, and the indications for each of them. CEA is the most sensitive blood test for raising the suspicion of recurrence. CT and ultrasound were reserved for formulating a correct diagnosis; the results of the two imaging modalities were roughly equivalent, although ultrasound showed more false positives than CT. Testing for occult blood in the stool is a rapid and economical way to detect intramural recurrences, even if the share of false positives cannot be disregarded. High specificity for the study of intramural recurrences was offered by endoscopy, particularly when combined with brushing and biopsy.

Proceedings ArticleDOI
16 Apr 1996
TL;DR: A multi-stage system with image processing and artificial neural network techniques is developed for detection of microcalcifications in digital mammogram images, and experimental results show that this system is able to identify true clusters at an accuracy of 93% with 2.9 false positive microcalcifications per image.
Abstract: A multi-stage system with image processing and artificial neural network techniques is developed for detection of microcalcifications in digital mammogram images. The system consists of (1) a preprocessing stage employing box-rim filtering and global thresholding to enhance object-to-background contrast; (2) a preliminary selection stage involving body-part identification, morphological erosion, connected component analysis, and suspect region segmentation to select potential microcalcification candidates; and (3) a neural network-based pattern classification stage including feature map extraction, a pattern recognition neural network, and a decision-making neural network architecture for accurate determination of true and false positive microcalcification clusters. Microcalcification suspects are captured and stored in 32 by 32 image blocks after the first two processing stages. A set of radially sampled pixel values is utilized as the feature map to train the neural nets in order to avoid lengthy training time as well as insufficient representation. The first pattern recognition network is trained to recognize true microcalcifications and four categories of false positive regions, whereas the second decision network is developed to reduce the detection of false positives and hence to increase the detection accuracy. Experimental results show that this system is able to identify true clusters at an accuracy of 93% with 2.9 false positive microcalcifications per image.
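A rough sketch of the candidate-selection flavour of stages (1)-(2), using background subtraction, global thresholding, and connected-component analysis; scipy.ndimage stands in for the authors' box-rim filtering and morphological pipeline, and the size limits are invented.

```python
import numpy as np
from scipy import ndimage

def candidate_regions(image, contrast_percentile=99.5, min_size=2, max_size=100):
    """Pick bright, small connected regions as microcalcification suspects.
    A real system would add morphological erosion, body-part masking and the
    neural-network classification stages described in the abstract."""
    # Crude contrast enhancement: subtract a local background estimate.
    background = ndimage.uniform_filter(image, size=15)
    enhanced = image - background
    # Global threshold on the enhanced image.
    threshold = np.percentile(enhanced, contrast_percentile)
    mask = enhanced > threshold
    # Connected-component analysis; keep components of plausible size.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if min_size <= s <= max_size]
    return ndimage.center_of_mass(mask, labels, keep)  # (row, col) candidates

rng = np.random.default_rng(0)
img = rng.normal(100, 5, (256, 256))
img[120:123, 80:83] += 40        # synthetic bright speck
print(candidate_regions(img))    # includes the speck, plus a few false positives
```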

Proceedings ArticleDOI
16 Apr 1996
TL;DR: The HPNN algorithm is able to utilize contextual information to improve microcalcification detection and potentially reduce the false positive rates in CAD systems.
Abstract: Microcalcifications are important cues used by radiologists for early detection of breast cancer. Individually, microcalcifications are difficult to detect, and often contextual information (e.g. clustering, location relative to ducts) can be exploited to aid in their detection. We have developed an algorithm for constructing a hierarchical pyramid/neural network (HPNN) architecture to automatically learn context information for detection. To test the HPNN we first examined whether the hierarchical architecture improves detection of individual microcalcifications and whether context is in fact extracted by the network hierarchy. We compared the performance of our hierarchical architecture versus a single neural network receiving input from all resolutions of a feature pyramid. Receiver operating characteristic (ROC) analysis shows that the hierarchical architecture reduces false positives by a factor of two. We examined hidden units at various levels of the processing hierarchy and found what appear to be representations of ductal location. We next investigated the utility of the HPNN if integrated as part of a complete computer-aided diagnosis (CAD) system for microcalcification detection, such as that being developed at the University of Chicago. Using ROC analysis, we tested the HPNN's ability to eliminate false positive regions of interest generated by the computer, comparing its performance to the neural network currently used in the Chicago system. The HPNN achieves an area under the ROC curve of Az = 0.94 and a false positive fraction of FPF = 0.21 at TPF = 1.0. This is in comparison to the results reported for the Chicago network: Az = 0.91, FPF = 0.43 at TPF = 1.0. These differences are statistically significant. We conclude that the HPNN algorithm is able to utilize contextual information to improve microcalcification detection and potentially reduce the false positive rates in CAD systems.
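A small sketch of the multiresolution-context idea behind the HPNN: features sampled at the same location across a coarse-to-fine pyramid, so a classifier at full resolution also sees low-resolution context. This is not the authors' network, only an illustration of the input representation.

```python
import numpy as np

def average_pyramid(image, levels=3):
    """Multiresolution pyramid built by 2x2 average pooling (a stand-in for
    the feature pyramid feeding the HPNN)."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        pooled = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(pooled)
    return pyramid

def context_feature_vector(pyramid, row, col):
    """Concatenate the value at (row, col) with the co-located values at each
    coarser level, so a pixel classifier also sees surrounding context
    (e.g. ductal structure) captured at low resolution."""
    features = []
    for level, img in enumerate(pyramid):
        r, c = row >> level, col >> level
        features.append(img[r, c])
    return np.array(features)

rng = np.random.default_rng(0)
roi = rng.normal(0, 1, (64, 64))
print(context_feature_vector(average_pyramid(roi), row=40, col=21))
```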

Journal Article
TL;DR: Preliminary data show that the stapedial reflex together with the ABR test could be useful for the diagnosis of CFS.
Abstract: Chronic fatigue syndrome (CFS) was formally defined to describe disabling fatigue of unknown etiology accompanied by immunologic dysfunction. In most cases, abnormalities occur in neurophysiological tests. In this paper the authors use low-rate (11 pps) and high-rate (51-71 pps) ABR stimulation to assess the electrophysiological function of auditory brainstem responses and propose the "Prolonged Decay Test", a modified impedance technique that explores alterations of the stapedial contraction, as a new diagnostic test for CFS. Twenty-one patients with suspected CFS, aged between 17 and 50 years, were examined, and the instrumental data were correlated with the clinical findings. The ABR study showed few abnormalities in the 11 pps test. The high-rate stimulation trials (51 and 71 pps) revealed alterations in 10 patients (absence of the first wave in 6 cases, delayed latencies of several waves in 5, and absence of the first wave combined with delayed wave latencies in 1 patient). The high-rate trials showed no abnormalities in the 11 remaining patients. The clinical-audiological correlation showed 61.9% agreement, with 33.3% false negatives and 4.8% false positives. The Prolonged Decay Test showed 71.4% clinical-audiological agreement, with 23.8% false negatives and 4.8% false positives. The Prolonged Decay Test together with the ABR showed 81.8% clinical-audiological agreement, with 18.2% false negatives and 0% false positives. These preliminary data show that the stapedial reflex together with the ABR test could be useful for the diagnosis of CFS.

Proceedings ArticleDOI
31 Oct 1996
TL;DR: This work shows how the local curvature image of suspected nodule pixels provides a new description that makes it possible to distinguish true nodules from false positives in the global detection process.
Abstract: Automatic methods developed for the detection of lung nodules in chest radiographs usually present an excessive number of false positive detections. In this work the authors show how the local curvature image of suspected nodule pixels provides a new description that makes it possible to distinguish true nodules from false positives. A multilayer perceptron network with supervised learning is able to recognize the images of nodule local curvature peaks. The results obtained with a set of 23 chest images, each containing at least one nodule, show a sensitivity of 93% in the global detection process with a mean of 2 false positives per image.
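An illustrative sketch of a local-curvature description, assuming curvature is approximated by the eigenvalues of the per-pixel Hessian of image intensity; the paper's exact curvature computation and MLP classifier are not reproduced here.

```python
import numpy as np

def hessian_curvatures(image):
    """Per-pixel eigenvalues of the 2x2 Hessian of second derivatives, used
    here as a simple stand-in for local curvature of the intensity surface.
    Bright, dome-like nodules produce compact peaks of strongly negative
    curvature, the kind of description a classifier can learn to recognize."""
    gy, gx = np.gradient(image.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Eigenvalues of [[gxx, gxy], [gyx, gyy]] at each pixel.
    trace = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum(trace ** 2 / 4 - det, 0.0))
    return trace / 2 + disc, trace / 2 - disc

rng = np.random.default_rng(0)
chest = rng.normal(0, 0.5, (128, 128))
yy, xx = np.mgrid[0:128, 0:128]
chest += 5 * np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 30.0)  # synthetic "nodule"
k1, k2 = hessian_curvatures(chest)
print(k1[64, 64], k2[64, 64])   # both strongly negative at the bright dome
```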

Journal ArticleDOI
TL;DR: The presence of HIV-2 was investigated in 88 patients (72 men and 16 women) carrying HIV-1 during March-April 1992; the result was 88 positive findings for HIV-1 and 2 sera positive for both HIV-1 and HIV-2 antibodies, yielding a 2.3% positivity for the sample analyzed.
Abstract: At the present time two distinct viruses are known to cause AIDS: the HIV-1 type virus and the HIV-2 type virus. The former was primarily isolated in Europe and the latter was initially restricted to western Africa. In Brazil, data on infections caused by HIV-2 are conflicting, having been derived from metropolitan and port-city populations; some authors deny the existence of HIV-2 in the country. In the interior, in the city of Ribeirao Preto with a population of 500,000, the rate of HIV incidence was 273.5/100,000 population during the period 1980-95. Most seropositive patients and AIDS patients are cared for at the Hospital of Clinics of the Medical School of Ribeirao Preto, where routine HIV-2 antibody tests are not administered. Therefore the presence of HIV-2 was investigated in 88 patients (72 men and 16 women) carrying HIV-1 during March-April 1992. The HIV-2 antibody test utilized latex particles carrying recombinant antigens of HIV-1 and HIV-2, the Recombigen HIV-1/HIV-2 Rapid Test Device (Cambridge Biotech Corp.). Of the 88 sera analyzed, 26 were positive for both HIV-1 and HIV-2, yielding a 29.5% (26/88) positivity for both viruses. Subsequently the 88 sera were processed by the Pepti-Lav 1-2 test (Diagnostics Pasteur), an immunoenzymatic test detecting reactivity against gp41 and gp36. The result was 88 positive findings for HIV-1 and 2 sera positive for both HIV-1 and HIV-2, yielding a 2.3% positivity for the sample analyzed. Discrepancies between different tests are not rare. The Recombigen instructions warn that when protein recombinants are used, a 25-30% rate of false positives can occur. The Pepti-Lav test, which detects more specific proteins, is more sensitive and specific. The 2 HIV-2-positive sera were also examined by PCR (polymerase chain reaction), which did not show positivity. This indicates that caution should be exercised when interpreting test results.
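A short sanity check of the screening numbers above, assuming, as the kit instructions cited in the abstract warn, a 25-30% false positive rate for the recombinant rapid test and essentially no true HIV-2 infection in the sample.

```python
def expected_false_positives(n_tested, false_positive_rate):
    """Expected number of positive results produced by test error alone."""
    return n_tested * false_positive_rate

n = 88
for fpr in (0.25, 0.30):
    print(f"FPR {fpr:.0%}: about {expected_false_positives(n, fpr):.0f} false positives expected")

# The Recombigen screen flagged 26/88 sera (29.5%), squarely within that range,
# while the more specific Pepti-Lav test flagged only 2 and PCR confirmed neither:
# an illustration of why low-specificity screens need confirmatory testing.
```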