
Showing papers by "Stephen G. Pauker" published in 2013



Journal ArticleDOI
TL;DR: Improvements to the EGAPP approach include automated searches for horizon scanning, a quantitative ranking process for topic prioritization, and the development of a staged evidence review and evaluation process.

45 citations


Journal ArticleDOI
TL;DR: The authors go on to suggest that the lack of intuition about the shape of posttest distributions can lead clinicians to overestimate the effect of small or modest changes in the therapeutic threshold, and therefore to believe that thresholds are unstable, in that small changes might recruit many more unaffected patients into the empiric treatment arm of the decision tree or move many more affected patients into the no-treatment arm.
Abstract: Probabilities provide a measure of the uncertainty in our world, just as a ruler can describe its geometry. We live in at least 4 dimensions and require more complex measuring devices than a ruler to understand it. Similarly, a single probability that something is present now or will happen in the future can sometimes be an inadequate representation of uncertainty. Increasingly, we turn to describing the likelihood of events as probability distributions that have shape and moments, especially when we perform sensitivity analyses. Cost-effectiveness acceptability curves have become standard fare. But what about intuition and System 1 reasoning? When we teach the principles of decision making and logic to students and when we explore them ourselves, we soon discover that it is difficult to provide an intuitive understanding of probabilities. The difficulty increases by orders of magnitude when we focus on probability distributions. In this issue of Medical Decision Making (MDM), Van den Ende and colleagues demonstrate these misperceptions in a small study in which clinicians and students were presented with a scenario of suspected tuberculosis and asked to visualize the post-workup probability distribution. Although the participants proposed a variety of distributions, the vast majority proposed unimodal distributions, whether uniform, symmetric, or skewed; only 2 of 41 participants proposed a bimodal "U-shaped" distribution, which the authors suggest is the correct answer, based on their prior study of suspected tuberculosis in Rwanda. The authors go on to suggest that the lack of intuition about the shape of posttest distributions can lead clinicians to overestimate the effect of small or modest changes in the therapeutic threshold, and therefore to believe that thresholds are unstable, in that small changes might recruit many more unaffected patients into the empiric treatment arm of the decision tree or move many more affected patients into the no-treatment arm. Van den Ende and colleagues provide a second example of a U-shaped bimodal posterior distribution from the classic study by Diamond and others of exercise stress testing in patients who may have coronary disease. Although they posit that such bimodal posterior distributions are common, 2 examples may not establish a general principle. Of course, if such U-shaped posterior distributions were common, then indeed small or even moderate changes in the threshold for empiric treatment in a population might shift relatively few individuals from one strategy to the other (depending on where the threshold was located), making therapeutic thresholds far more stable and perhaps more trustworthy. Although Van den Ende's work has been in populations, the same principles might apply to our belief in posttest distributions in individual patients.
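Not part of the editorial, but a rough numerical sketch of the stability argument above: taking Beta distributions as hypothetical stand-ins for the two shapes (a U-shaped Beta(0.3, 0.3) versus a mid-range Beta(5, 5), parameters chosen purely for illustration), a few lines of Python count how many patients change strategy when the therapeutic threshold moves from 0.30 to 0.40.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical stand-ins for the two shapes discussed above: a U-shaped
# post-test distribution (most probabilities pushed toward 0 or 1 after the
# workup) versus a unimodal one concentrated in the mid-range.
u_shaped = rng.beta(0.3, 0.3, size=n)
unimodal = rng.beta(5.0, 5.0, size=n)

def fraction_reclassified(p, old_threshold=0.30, new_threshold=0.40):
    """Fraction of patients whose treat/no-treat decision flips when the
    therapeutic threshold moves from old_threshold to new_threshold."""
    lo, hi = sorted((old_threshold, new_threshold))
    return float(np.mean((p >= lo) & (p < hi)))

print(f"U-shaped population: {fraction_reclassified(u_shaped):.1%} change strategy")
print(f"Unimodal population: {fraction_reclassified(unimodal):.1%} change strategy")
```

Under these made-up shapes, only a small fraction of the U-shaped population falls between the two thresholds, while several times as many patients in the unimodal population do; that is why a genuinely bimodal posttest distribution would make the therapeutic threshold comparatively stable.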

40 citations


Journal ArticleDOI
TL;DR: The EWG found insufficient evidence, based on studies in populations of northern European descent, to recommend testing for the 28 predictive variants reviewed to assess risk of type 2 diabetes in the general population, and it discouraged clinical use unless further evidence supports improved clinical outcomes.

17 citations


Journal ArticleDOI
TL;DR: This article addressed decision making under uncertainty and the valuing of outcomes, showed how computers could help physicians with diagnoses, and even taught medical workers how to use edge-notched cards to create diagnostic databases.
Abstract: Robert S. Ledley, one of the founding fathers of the field of medical decision making and the man who applied computer science to medical judgment and diagnosis, died July 24, 2012, after a prolonged struggle with Alzheimer's. Our community, as well as the medical field as a whole, has lost a pioneering genius. Bob Ledley's legacy will live on through his magnum opus "Reasoning Foundations of Medical Diagnosis," coauthored with Lee B. Lusted, MD, FACMI. Lee Lusted was one of the founders of the Society for Medical Decision Making and the founding editor of Medical Decision Making. In their seminal work, the authors showed how computers could help physicians with diagnoses and even taught medical workers how to use edge-notched cards to create diagnostic databases. Most important, this article addressed decision making under uncertainty and valuing outcomes. Citation analyses have demonstrated that this paper lies at the root of medical decision science. In fact, many readers may recall their introduction to our discipline through this article. We now explore the rest of his talent for technological innovation. Robert Steven Ledley was born in Flushing Meadows, New York, in 1926. While attending dental school at New York University College of Dentistry (at his parents' strong behest, so that he could earn a living), he also pursued his interest in physics at Columbia University. He received his DDS in 1948 and his MS in theoretical physics the following year. One of his professors "joked that he was the only physicist who could pull a man's tooth." When the Korean War broke out, Ledley volunteered as a First Lieutenant in the US Army Dental Corps; as a result of his background in physics, he was assigned to research prosthetic dental devices at Walter Reed Army Medical Center in Washington, DC. His work optimizing denture fitting based on "angle of chew" earned him an article from the Associated Press titled "Mathematics Used to Keep False Teeth in Place." After completing his service, Bob served as an investigator at the National Bureau of Standards and Johns Hopkins University and was an associate professor at George Washington University. While at the National Bureau of Standards, Ledley obtained for his wife (the former Terry Wachtell, a music major whom he convinced to switch to mathematics) a position as a programmer on the Standards Eastern Automatic Computer (SEAC). Bob was fascinated and spent a decade learning to program before using the computer to solve complex problems in his own research. He was an ardent believer in the prowess of computational research: "I had previously realized that although, conceptually, physics equations could be written to describe any biomedical phenomenon, such equations would be so complex that they could not feasibly be solved in closed form. Thus SEAC would be my panacea, because the equations would become tractable to numerical methods of solutions. Or so I truly believed at the time. That was to be my field, application of computers to biomedical problems." It didn't take long for Ledley to arrive at a natural starting point—computerizing the diagnostic process. He recalled, "I put together a deck of the McBee keysort cards (the cards with the holes around the margins) for diseases of the tongue. Each card was a disease, and symptoms related to the disease of the card were punched out to the card margin.
Then, if needles were pushed through the edge holes …

From University of Illinois at Chicago, Chicago, IL (SS); Fox Chase Cancer Center, Philadelphia, PA (JRB); and Tufts Medical Center, Boston, MA (SGP).
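The edge-notched card scheme Ledley describes is, in effect, a set-membership filter: a card survives the needle sort only if every observed symptom is among the symptoms notched on its margin. As a minimal sketch only, with invented diseases and symptoms rather than Ledley's actual tongue-disease deck, the same idea in Python:

```python
# Each "card" is a disease; its notched holes are the symptoms it can produce.
# (Disease-symptom pairs are invented purely for illustration.)
CARDS = {
    "glossitis": {"sore tongue", "smooth tongue"},
    "oral candidiasis": {"sore tongue", "white patches"},
    "geographic tongue": {"smooth tongue", "migrating lesions"},
}

def needle_sort(observed_symptoms):
    """Keep the diseases whose notched symptoms include every observed symptom,
    mimicking repeated needling of the deck, once per observed symptom."""
    return [disease for disease, notched in CARDS.items()
            if set(observed_symptoms) <= notched]

print(needle_sort({"sore tongue"}))                   # two candidate diseases remain
print(needle_sort({"sore tongue", "white patches"}))  # only "oral candidiasis" remains
```

Each additional needle (symptom) can only narrow the surviving pile, which is what made the cards usable as a crude diagnostic database.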