
Showing papers by "University of Washington" published in 2002


Journal ArticleDOI
13 Dec 2002-Science
TL;DR: Monodisperse samples of silver nanocubes were synthesized in large quantities by reducing silver nitrate with ethylene glycol in the presence of poly(vinyl pyrrolidone) (PVP), characterized by a slightly truncated shape bounded by {100}, {110}, and {111} facets.
Abstract: Monodisperse samples of silver nanocubes were synthesized in large quantities by reducing silver nitrate with ethylene glycol in the presence of poly(vinyl pyrrolidone) (PVP). These cubes were single crystals and were characterized by a slightly truncated shape bounded by {100}, {110}, and {111} facets. The presence of PVP and its molar ratio (in terms of repeating unit) relative to silver nitrate both played important roles in determining the geometric shape and size of the product. The silver cubes could serve as sacrificial templates to generate single-crystalline nanoboxes of gold: hollow polyhedra bounded by six {100} and eight {111} facets. Controlling the size, shape, and structure of metal nanoparticles is technologically important because of the strong correlation between these parameters and optical, electrical, and catalytic properties.

5,992 citations


Book
01 Jan 2002
TL;DR: The CLAWPACK software as discussed by the authors implements high-resolution finite-volume methods for hyperbolic problems, from linear equations and nonlinear scalar conservation laws to multidimensional nonlinear systems.
Abstract: Preface 1. Introduction 2. Conservation laws and differential equations 3. Characteristics and Riemann problems for linear hyperbolic equations 4. Finite-volume methods 5. Introduction to the CLAWPACK software 6. High resolution methods 7. Boundary conditions and ghost cells 8. Convergence, accuracy, and stability 9. Variable-coefficient linear equations 10. Other approaches to high resolution 11. Nonlinear scalar conservation laws 12. Finite-volume methods for nonlinear scalar conservation laws 13. Nonlinear systems of conservation laws 14. Gas dynamics and the Euler equations 15. Finite-volume methods for nonlinear systems 16. Some nonclassical hyperbolic problems 17. Source terms and balance laws 18. Multidimensional hyperbolic problems 19. Multidimensional numerical methods 20. Multidimensional scalar equations 21. Multidimensional systems 22. Elastic waves 23. Finite-volume methods on quadrilateral grids Bibliography Index.

5,791 citations


Journal ArticleDOI
TL;DR: This work presents a simple and efficient implementation of Lloyd's k-means clustering algorithm, called the filtering algorithm, and establishes its practical efficiency both analytically and empirically.
Abstract: In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k, and the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's (1982) algorithm. We present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation.
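
For readers who want the baseline concrete, here is a minimal NumPy sketch of the plain Lloyd iteration. The paper's contribution, the filtering algorithm, accelerates the assignment step with a kd-tree that prunes candidate centers; this sketch deliberately uses brute force instead, and the function name and parameters are illustrative rather than taken from the paper.

```python
import numpy as np

def lloyd_kmeans(points, k, n_iter=100, seed=0):
    """Plain Lloyd iteration: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step (brute force; the filtering algorithm prunes
        # candidate centers with a kd-tree instead).
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each center; keep empty clusters fixed.
        new_centers = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```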

5,288 citations


Journal ArticleDOI
TL;DR: The composition and synthesis of hydrogels, the character of their absorbed water, and permeation of solutes within their swollen matrices are reviewed to identify the most important properties relevant to their biomedical applications.

5,173 citations


Journal ArticleDOI
TL;DR: This work reviews a general methodology for model-based clustering that provides a principled statistical approach to important practical questions that arise in cluster analysis, such as how many clusters are there, which clustering method should be used, and how should outliers be handled.
Abstract: Cluster analysis is the automated search for groups of related observations in a dataset. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures, and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as how many clusters are there, which clustering method should be used, and how should outliers be handled. We review a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology and discuss recent development...
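
To make the model-based idea concrete: one standard instantiation of the methodology reviewed here answers "how many clusters?" by fitting Gaussian mixture models and comparing them with the Bayesian information criterion (BIC). A minimal sketch using scikit-learn's GaussianMixture rather than the authors' own software; the helper function and synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_k_by_bic(X, k_max=9, seed=0):
    """Fit Gaussian mixtures with 1..k_max components and return the fit
    that minimizes the Bayesian information criterion."""
    fits = [GaussianMixture(n_components=k, covariance_type="full",
                            random_state=seed).fit(X)
            for k in range(1, k_max + 1)]
    bics = [m.bic(X) for m in fits]
    return fits[int(np.argmin(bics))], bics

# Two well-separated synthetic blobs: BIC should favor k = 2.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2))])
model, bics = choose_k_by_bic(X)
print("chosen number of clusters:", model.n_components)
```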

4,123 citations



Journal ArticleDOI
TL;DR: The Multi-Ethnic Study of Atherosclerosis was initiated in July 2000 to investigate the prevalence, correlates, and progression of subclinical cardiovascular disease (CVD) in a population-based sample of 6,500 men and women aged 45-84 years, with follow-up through 2008 for identification and characterization of CVD events.
Abstract: The Multi-Ethnic Study of Atherosclerosis was initiated in July 2000 to investigate the prevalence, correlates, and progression of subclinical cardiovascular disease (CVD) in a population-based sample of 6,500 men and women aged 45-84 years. The cohort will be selected from six US field centers. Approximately 38% of the cohort will be White, 28% African-American, 23% Hispanic, and 11% Asian (of Chinese descent). Baseline measurements will include measurement of coronary calcium using computed tomography; measurement of ventricular mass and function using cardiac magnetic resonance imaging; measurement of flow-mediated brachial artery endothelial vasodilation, carotid intimal-medial wall thickness, and distensibility of the carotid arteries using ultrasonography; measurement of peripheral vascular disease using ankle and brachial blood pressures; electrocardiography; and assessments of microalbuminuria, standard CVD risk factors, sociodemographic factors, life habits, and psychosocial factors. Blood samples will be assayed for putative biochemical risk factors and stored for use in nested case-control studies. DNA will be extracted and lymphocytes will be immortalized for genetic studies. Measurement of selected subclinical disease indicators and risk factors will be repeated for the study of progression over 7 years. Participants will be followed through 2008 for identification and characterization of CVD events, including acute myocardial infarction and other coronary heart disease, stroke, peripheral vascular disease, and congestive heart failure; therapeutic interventions for CVD; and mortality.

3,367 citations


Book
23 Sep 2002
TL;DR: In this book, the theory of smooth manifolds is developed from tangent vectors and vector fields through differential forms, integration, and the de Rham theorem, with appendices reviewing topology, linear algebra, calculus, and differential equations.
Abstract: Preface.- 1 Smooth Manifolds.- 2 Smooth Maps.- 3 Tangent Vectors.- 4 Submersions, Immersions, and Embeddings.- 5 Submanifolds.- 6 Sard's Theorem.- 7 Lie Groups.- 8 Vector Fields.- 9 Integral Curves and Flows.- 10 Vector Bundles.- 11 The Cotangent Bundle.- 12 Tensors.- 13 Riemannian Metrics.- 14 Differential Forms.- 15 Orientations.- 16 Integration on Manifolds.- 17 De Rham Cohomology.- 18 The de Rham Theorem.- 19 Distributions and Foliations.- 20 The Exponential Map.- 21 Quotient Manifolds.- 22 Symplectic Manifolds.- Appendix A: Review of Topology.- Appendix B: Review of Linear Algebra.- Appendix C: Review of Calculus.- Appendix D: Review of Differential Equations.- References.- Notation Index.- Subject Index

3,051 citations


Journal ArticleDOI
TL;DR: Fundamental properties of conditional value-at-risk are derived for loss distributions in finance that can involve discreteness; CVaR provides optimization shortcuts which, through linear programming techniques, make practical many large-scale calculations that could otherwise be out of reach.
Abstract: Fundamental properties of conditional value-at-risk (CVaR), as a measure of risk with significant advantages over value-at-risk (VaR), are derived for loss distributions in finance that can involve discreteness. Such distributions are of particular importance in applications because of the prevalence of models based on scenarios and finite sampling. CVaR is able to quantify dangers beyond VaR and, moreover, it is coherent. It provides optimization shortcuts which, through linear programming techniques, make practical many large-scale calculations that could otherwise be out of reach. The numerical efficiency and stability of such calculations, shown in several case studies, are illustrated further with an example of index tracking.
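
The optimization shortcut rests on the Rockafellar-Uryasev characterization CVaR_a(L) = min over c of { c + E[(L - c)+] / (1 - a) }, whose minimizer c is a value-at-risk. A minimal sketch estimating both from equally likely loss scenarios, using the empirical quantile as VaR (a finite-sample approximation, not the paper's linear programming formulation; the data are simulated):

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR from equally likely loss scenarios, via
    CVaR_a(L) = c + E[(L - c)+] / (1 - a) evaluated at c = VaR_a(L)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)  # empirical alpha-quantile as VaR
    cvar = var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha)
    return var, cvar

# Hypothetical scenario losses from finite sampling (positive = loss).
rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=10_000)
var, cvar = var_cvar(losses, alpha=0.95)
print(f"VaR {var:.3f} <= CVaR {cvar:.3f}")  # CVaR always dominates VaR
```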

3,010 citations


Journal ArticleDOI
Q. R. Ahmad, R. C. Allen, T. C. Andersen, J. D. Anglin, +202 more (18 institutions)
TL;DR: Observations of neutral-current ν interactions on deuterium in the Sudbury Neutrino Observatory are reported, providing strong evidence for solar ν_e flavor transformation.
Abstract: Observations of neutral-current ν interactions on deuterium in the Sudbury Neutrino Observatory are reported. Using the neutral-current (NC), elastic scattering, and charged-current reactions and assuming the standard ⁸B shape, the ν_e component of the ⁸B solar flux is φ_e = 1.76 ± 0.05 (stat) ± 0.09 (syst) × 10^6 cm^-2 s^-1 for a kinetic energy threshold of 5 MeV. The non-ν_e component is φ_μτ = 3.41 ± 0.45 (stat) +0.48/-0.45 (syst) × 10^6 cm^-2 s^-1, 5.3σ greater than zero, providing strong evidence for solar ν_e flavor transformation. The total flux measured with the NC reaction is φ_NC = 5.09 +0.44/-0.43 (stat) +0.46/-0.43 (syst) × 10^6 cm^-2 s^-1, consistent with solar models.

2,732 citations


Journal ArticleDOI
TL;DR: The Pacific Decadal Oscillation (PDO) has been described by some as a long-lived El Niño-like pattern of Pacific climate variability and by others as a blend of two sometimes-independent modes of North Pacific sea surface temperature (SST) variability with distinct spatial and temporal characteristics, as discussed by the authors.
Abstract: The Pacific Decadal Oscillation (PDO) has been described by some as a long-lived El Niño-like pattern of Pacific climate variability, and by others as a blend of two sometimes independent modes having distinct spatial and temporal characteristics of North Pacific sea surface temperature (SST) variability. A growing body of evidence highlights a strong tendency for PDO impacts in the Southern Hemisphere, with important surface climate anomalies over the mid-latitude South Pacific Ocean, Australia, and South America. Several independent studies find evidence for just two full PDO cycles in the past century: "cool" PDO regimes prevailed from 1890-1924 and again from 1947-1976, while "warm" PDO regimes dominated from 1925-1946 and from 1977 through (at least) the mid-1990s. Interdecadal changes in Pacific climate have widespread impacts on natural systems, including water resources in the Americas and many marine fisheries in the North Pacific. Tree-ring and Pacific coral based climate reconstructions suggest that PDO variations, at a range of time scales, can be traced back to at least 1600, although there are important differences between different proxy reconstructions. While 20th-century PDO fluctuations were most energetic in two general periodicities, one of 15 to 25 years and the other of 50 to 70 years, the mechanisms causing PDO variability remain unclear. To date, there is little in the way of observational evidence to support a mid-latitude coupled air-sea interaction for PDO, though there are several well-understood mechanisms that promote multi-year persistence in North Pacific upper ocean temperature anomalies.

Journal ArticleDOI
23 May 2002-Nature
TL;DR: Comprehensive protein–protein interaction maps promise to reveal many aspects of the complex regulatory network underlying cellular function and are compared with each other and with a reference set of previously reported protein interactions.
Abstract: Comprehensive protein–protein interaction maps promise to reveal many aspects of the complex regulatory network underlying cellular function. Recently, large-scale approaches have predicted many new protein interactions in yeast. To measure their accuracy and potential, as well as to identify biases, strengths and weaknesses, we compare the methods with each other and with a reference set of previously reported protein interactions.

Journal ArticleDOI
TL;DR: The Sloan Digital Sky Survey (SDSS) is an imaging and spectroscopic survey that will eventually cover approximately one-quarter of the celestial sphere and collect spectra of ≈10^6 galaxies, 100,000 quasars, 30,000 stars, and 30,000 serendipity targets, as discussed by the authors.
Abstract: The Sloan Digital Sky Survey (SDSS) is an imaging and spectroscopic survey that will eventually cover approximately one-quarter of the celestial sphere and collect spectra of ≈10^6 galaxies, 100,000 quasars, 30,000 stars, and 30,000 serendipity targets. In 2001 June, the SDSS released to the general astronomical community its early data release, roughly 462 deg^2 of imaging data including almost 14 million detected objects and 54,008 follow-up spectra. The imaging data were collected in drift-scan mode in five bandpasses (u, g, r, i, and z); our 95% completeness limits for stars are 22.0, 22.2, 22.2, 21.3, and 20.5, respectively. The photometric calibration is reproducible to 5%, 3%, 3%, 3%, and 5%, respectively. The spectra are flux- and wavelength-calibrated, with 4096 pixels from 3800 to 9200 Å at R ≈ 1800. We present the means by which these data are distributed to the astronomical community, descriptions of the hardware used to obtain the data, the software used for processing the data, the measured quantities for each observed object, and an overview of the properties of this data set.

Journal ArticleDOI
TL;DR: The increase in the plasma ghrelin level with diet-induced weight loss is consistent with the hypothesis that ghrelin has a role in the long-term regulation of body weight.
Abstract: Background Weight loss causes changes in appetite and energy expenditure that promote weight regain. Ghrelin is a hormone that increases food intake in rodents and humans. If circulating ghrelin participates in the adaptive response to weight loss, its levels should rise with dieting. Because ghrelin is produced primarily by the stomach, weight loss after gastric bypass surgery may be accompanied by impaired ghrelin secretion. Methods We determined the 24-hour plasma ghrelin profiles, body composition, insulin levels, leptin levels, and insulin sensitivity in 13 obese subjects before and after a six-month dietary program for weight loss. The 24-hour ghrelin profiles were also determined in 5 subjects who had lost weight after gastric bypass and 10 normal-weight controls; 5 of the 13 obese subjects who participated in the dietary program were matched to the subjects in the gastric-bypass group and served as obese controls. Results Plasma ghrelin levels rose sharply shortly before and fell shortly after eve...

Proceedings ArticleDOI
03 Jun 2002
TL;DR: This paper shows that XML's ordered data model can indeed be efficiently supported by a relational database system, and proposes three order encoding methods that can be used to represent XML order in the relational data model, and also proposes algorithms for translating ordered XPath expressions into SQL using these encoding methods.
Abstract: XML is quickly becoming the de facto standard for data exchange over the Internet. This is creating a new set of data management requirements involving XML, such as the need to store and query XML documents. Researchers have proposed using relational database systems to satisfy these requirements by devising ways to "shred" XML documents into relations, and translate XML queries into SQL queries over these relations. However, a key issue with such an approach, which has largely been ignored in the research literature, is how (and whether) the ordered XML data model can be efficiently supported by the unordered relational data model. This paper shows that XML's ordered data model can indeed be efficiently supported by a relational database system. This is accomplished by encoding order as a data value. We propose three order encoding methods that can be used to represent XML order in the relational data model, and also propose algorithms for translating ordered XPath expressions into SQL using these encoding methods. Finally, we report the results of an experimental study that investigates the performance of the proposed order encoding methods on a workload of ordered XML queries and updates.
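
To make "encoding order as a data value" concrete, here is a toy sketch of one plausible encoding, a global document-order number; the paper proposes and evaluates three specific encoding methods and a general translation of ordered XPath into SQL, which this sketch does not reproduce. The table, column, and function names are illustrative:

```python
import sqlite3
import xml.etree.ElementTree as ET
from itertools import count

# Toy "shredding" of an XML document into a relation, recording each
# node's absolute position in document order ("global order") as a plain
# data value, so the unordered relational model can recover XML order
# with an ordinary ORDER BY.
doc = "<book><section>intro</section><section>body</section></book>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INT, parent INT, tag TEXT, "
             "text TEXT, pos INT)")
ids = count(1)

def shred(elem, parent_id):
    node_id = next(ids)
    # Preorder traversal, so node_id doubles as the document-order value.
    conn.execute("INSERT INTO node VALUES (?, ?, ?, ?, ?)",
                 (node_id, parent_id, elem.tag, elem.text, node_id))
    for child in elem:
        shred(child, node_id)

shred(ET.fromstring(doc), None)

# The ordered XPath /book/section[2] becomes SQL that sorts siblings by
# the encoded position and keeps the second one.
row = conn.execute("""
    SELECT s.text
    FROM node b JOIN node s ON s.parent = b.id
    WHERE b.tag = 'book' AND s.tag = 'section'
    ORDER BY s.pos LIMIT 1 OFFSET 1
""").fetchone()
print(row[0])  # -> body
```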

Journal ArticleDOI
TL;DR: New clinical assessment methods incorporating dual-task paradigms are helpful in revealing the effect of disease on the ability to allocate attention to postural tasks, and they appear to be sensitive measures both in predicting fall risk and in documenting recovery of stability.

Journal ArticleDOI
11 Dec 2002-JAMA
TL;DR: The IMPACT collaborative care model appears to be feasible and significantly more effective than usual care for depression in a wide range of primary care practices.
Abstract: Context: Few depressed older adults receive effective treatment in primary care settings. Objective: To determine the effectiveness of the Improving Mood–Promoting Access to Collaborative Treatment (IMPACT) collaborative care management program for late-life depression. Design: Randomized controlled trial with recruitment from July 1999 to August 2001. Setting: Eighteen primary care clinics from 8 health care organizations in 5 states. Participants: A total of 1801 patients aged 60 years or older with major depression (17%), dysthymic disorder (30%), or both (53%). Intervention: Patients were randomly assigned to the IMPACT intervention (n = 906) or to usual care (n = 895). Intervention patients had access for up to 12 months to a depression care manager who was supervised by a psychiatrist and a primary care expert and who offered education, care management, and support of antidepressant management by the patient's primary care physician, or a brief psychotherapy for depression (Problem Solving Treatment in Primary Care). Main Outcome Measures: Assessments at baseline and at 3, 6, and 12 months for depression, depression treatments, satisfaction with care, functional impairment, and quality of life. Results: At 12 months, 45% of intervention patients had a 50% or greater reduction in depressive symptoms from baseline compared with 19% of usual care participants (odds ratio [OR], 3.45; 95% confidence interval [CI], 2.71-4.38; P<.001). Intervention patients also experienced greater rates of depression treatment (OR, 2.98; 95% CI, 2.34-3.79; P<.001), more satisfaction with depression care (OR, 3.38; 95% CI, 2.66-4.30; P<.001), lower depression severity (range, 0-4; between-group difference, −0.4; 95% CI, −0.46 to −0.33; P<.001), less functional impairment (range, 0-10; between-group difference, −0.91; 95% CI, −1.19 to −0.64; P<.001), and greater quality of life (range, 0-10; between-group difference, 0.56; 95% CI, 0.32-0.79; P<.001) than participants assigned to the usual care group. Conclusion: The IMPACT collaborative care model appears to be feasible and significantly more effective than usual care for depression in a wide range of primary care practices.

BookDOI
08 Jul 2002
TL;DR: This book discusses the design of Diagnostic Accuracy Studies, the construction of a Smooth ROC Curve, and how to select a Sampling Plan for Readers based on Sensitivity and Specificity.
Abstract: Preface. Acknowledgments. 1. Introduction. 1.1 Why This Book? 1.2 What Is Diagnostic Accuracy? 1.3 Landmarks in Statistical Methods for Diagnostic Medicine. 1.4 Software. 1.5 Topics not Covered in This Book. 1.6 Summary. I BASIC CONCEPTS AND METHODS. 2. Measures of Diagnostic Accuracy. 2.1 Sensitivity and Specificity. 2.2 The Combined Measures of Sensitivity and Specificity. 2.3 The ROC Curve. 2.4 The Area Under the ROC Curve. 2.5 The Sensitivity at a Fixed FPR. 2.6 The Partial Area Under the ROC Curve. 2.7 Likelihood Ratios. 2.8 Other ROC Curve Indices. 2.9 The Localization and Detection of Multiple Abnormalities. 2.10 Interpretation of Diagnostic Tests. 2.11 Optimal Decision Threshold on the ROC Curve. 2.12 Multiple Tests. 3. The Design of Diagnostic Accuracy Studies. 3.1 Determining the Objective of the Study. 3.2 Identifying the Target Patient Population. 3.3 Selecting a Sampling Plan for Patients. 3.3.1 Phase I: Exploratory Studies. 3.3.2 Phase II: Challenge Studies. 3.3.3 Phase III: Clinical Studies. 3.4 Selecting the Gold Standard. 3.5 Choosing a Measure of Accuracy. 3.6 Identifying the Target Reader Population. 3.7 Selecting a Sampling Plan for Readers. 3.8 Planning the Data Collection. 3.8.1 Format for the Test Results. 3.8.2 Data Collection for the Reader Studies. 3.8.3 Reader Training. 3.9 Planning the Data Analyses. 3.9.1 Statistical Hypotheses. 3.9.2 Reporting the Test Results. 3.10 Determining the Sample Size. 4. Estimation and Hypothesis Testing in a Single Sample. 4.1 Binary Scale Data. 4.1.1 Sensitivity and Specificity. 4.1.2 The Sensitivity and Specificity of Clustered Binary Data. 4.1.3 The Likelihood Ratio (LR). 4.1.4 The Odds Ratio. 4.2 Ordinal Scale Data. 4.2.1 The Empirical ROC Curve. 4.2.2 Fitting a Smooth Curve (Parametric Model). 4.2.3 Estimation of Sensitivity at a Particular FPR. 4.2.4 The Area and Partial Area Under the ROC Curve (Parametric Model). 4.2.5 The Area Under the Curve (Nonparametric Method). 4.2.6 Nonparametric Analysis of Clustered Data. 4.2.7 The Degenerate Data. 4.2.8 Choosing Between Parametric and Nonparametric Methods. 4.3 Continuous Scale Data. 4.3.1 The Empirical ROC Curve. 4.3.2 Fitting a Smooth ROC Curve (Parametric and Nonparametric Methods). 4.3.3 Area Under the ROC Curve (Parametric and Nonparametric). 4.3.4 The Sensitivity and Decision Threshold at a Fixed FPR. 4.3.5 Choosing the Optimal Operating Point. 4.3.6 Choosing Between Parametric and Nonparametric Techniques. 4.4 Hypothesis Testing About the ROC Area. 5. Comparing the Accuracy of Two Diagnostic Tests. 5.1 Binary Scale Data. 5.1.1 Sensitivity and Specificity. 5.1.2 Sensitivity and Specificity of Clustered Binary Data. 5.2 Ordinal and Continuous Scale Data. 5.2.1 Determining the Equality of Two ROC Curves. 5.2.2 Comparing ROC Curves at a Particular Point. 5.2.3 Determining the Range of FPR for Which TPR Differ. 5.2.4 A Comparison of the Area or Partial Area. 5.3 Tests of Equivalence. 6. Sample Size Calculation. 6.1 The Sample Size for Accuracy Studies of a Single Test. 6.1.1 Sensitivity and Specificity. 6.1.2 The Area Under the ROC Curve. 6.1.3 The Sensitivity at a Fixed FPR. 6.1.4 The Partial Area Under the ROC Curve. 6.2 The Sample Size for the Accuracy of Two Tests. 6.2.1 Sensitivity and Specificity. 6.2.2 The Area Under the ROC Curve. 6.2.3 The Sensitivity at a Fixed FPR. 6.2.4 The Partial Area Under the ROC Curve. 6.3 The Sample Size for Equivalent Studies of Two Tests. 6.4 The Sample Size for Determining a Suitable Cutoff Value. 7. Issues in Meta-Analysis for Diagnostic Tests. 7.1 Objectives. 7.2 Retrieval of the Literature. 7.3 Inclusion/Exclusion Criteria. 7.4 Extracting Information From the Literature. 7.5 Statistical Analysis. 7.6 Public Presentation. II ADVANCED METHODS. 8. Regression Analysis for Independent ROC Data. 8.1 Four Clinical Studies. 8.1.1 Surgical Lesion in a Carotid Vessel Example. 8.1.2 Pancreatic Cancer Example. 8.1.3 Adult Obesity Example. 8.1.4 Staging of Prostate Cancer Example. 8.2 Regression Models for Continuous Scale Tests. 8.2.1 Indirect Regression Models for Smooth ROC Curves. 8.2.2 Direct Regression Models for Smooth ROC Curves. 8.2.3 MRA Use for Surgical Lesion Detection in the Carotid Vessel. 8.2.4 Biomarkers for the Detection of Pancreatic Cancer. 8.2.5 Prediction of Adult Obesity by Using Childhood BMI Measurements. 8.3 Regression Models for Ordinal Scale Tests. 8.3.1 Indirect Regression Models for Latent Smooth ROC Curves. 8.3.2 Direct Regression Model for Latent Smooth ROC Curves. 8.3.3 Detection of Periprostatic Invasion With US. 9. Analysis of Correlated ROC Data. 9.1 Studies With Multiple Test Measurements of the Same Patient. 9.1.1 Indirect Regression Models for Ordinal Scale Tests. 9.1.2 Neonatal Examination Example. 9.1.3 Direct Regression Models for Continuous Scale Tests. 9.2 Studies With Multiple Readers and Tests. 9.2.1 A Mixed Effects ANOVA Model for Summary Measures of Diagnostic Accuracy. 9.2.2 Detection of TAD Example. 9.2.3 The Mixed Effects ANOVA Model for Jackknife Pseudovalues. 9.2.4 Neonatal Examination Example. 9.2.5 A Bootstrap Method. 9.3 Sample Size Calculation for Multireader Studies. 10. Methods for Correcting Verification Bias. 10.1 A Single Binary Scale Test. 10.1.1 Correction Methods With the MAR Assumption. 10.1.2 Correction Methods Without the MAR Assumption. 10.1.3 Hepatic Scintigraph Example. 10.2 Correlated Binary Scale Tests. 10.2.1 An ML Approach Without Covariates. 10.2.2 An ML Approach With Covariates. 10.2.3 Screening Tests for Dementia Disorder Example. 10.3 A Single Ordinal Scale Test. 10.3.1 An ML Approach Without Covariates. 10.3.2 Fever of Uncertain Origin Example. 10.3.3 An ML Approach With Covariates. 10.3.4 Screening Test for Dementia Disorder Example. 10.4 Correlated Ordinal Scale Tests. 10.4.1 The Weighted GEE Approach for Latent Smooth ROC Curves. 10.4.2 A Likelihood-Based Approach for ROC Areas. 10.4.3 Use of CT and MRI for Staging Pancreatic Cancer Example. 11. Methods for Correcting Imperfect Standard Bias. 11.1 One Single Test in a Single Population. 11.1.1 Hypothetical and Strongyloides Infection Examples. 11.2 One Single Test in G Populations. 11.2.1 Tuberculosis Example. 11.3 Multiple Tests in One Single Population. 11.3.1 MLEs Under the CIA. 11.3.2 Assessment of Pleural Thickening Example. 11.3.3 ML Approaches Without the CIA. 11.3.4 Bioassays for HIV Example. 11.4 Multiple Binary Tests in G Populations. 11.4.1 ML Approaches Under the CIA. 11.4.2 ML Approaches Without the CIA. 12. Statistical Methods for Meta-Analysis. 12.1 Sensitivity and Specificity Pairs. 12.1.1 One Common SROC Curve. 12.1.2 Study-Specific SROC Curve. 12.1.3 Evaluation of Duplex Ultrasonography, With and Without Color Guidance. 12.2 ROC Curve Areas. 12.2.1 Fixed Effects Models. 12.2.2 Random Effects Models. 12.2.3 Evaluation of the Dexamethasone Suppression Test. Index.

Journal ArticleDOI
TL;DR: The primary aim was to compare presenting clinical features and liver transplantation in patients with acute liver failure related to acetaminophen hepatotoxicity, other drugs, indeterminate factors, and other causes.
Abstract: Acetaminophen overdose and idiosyncratic drug reactions have replaced viral hepatitis as the most frequent causes of acute liver failure. The cause of liver failure and coma grade at admission were...

Journal ArticleDOI
TL;DR: This study evaluates the prevalence of burnout among internal medicine residents in a single university-based program and the relationship of burnout to self-reported patient care practices.
Abstract: In this study, burnout was common among resident physicians and was associated with self-reported suboptimal patient care practices.

Journal ArticleDOI
TL;DR: In this article, the authors describe the algorithm that selects the main sample of galaxies for spectroscopy in the Sloan Digital Sky Survey (SDSS) from the photometric data obtained by the imaging survey.
Abstract: We describe the algorithm that selects the main sample of galaxies for spectroscopy in the Sloan Digital Sky Survey (SDSS) from the photometric data obtained by the imaging survey. Galaxy photometric properties are measured using the Petrosian magnitude system, which measures flux in apertures determined by the shape of the surface brightness profile. The metric aperture used is essentially independent of cosmological surface brightness dimming, foreground extinction, sky brightness, and the galaxy central surface brightness. The main galaxy sample consists of galaxies with r-band Petrosian magnitudes r ≤ 17.77 and r-band Petrosian half-light surface brightnesses μ50 ≤ 24.5 mag arcsec^-2. These cuts select about 90 galaxy targets per square degree, with a median redshift of 0.104. We carry out a number of tests to show that (1) our star-galaxy separation criterion is effective at eliminating nearly all stellar contamination while removing almost no genuine galaxies, (2) the fraction of galaxies eliminated by our surface brightness cut is very small (~0.1%), (3) the completeness of the sample is high, exceeding 99%, and (4) the reproducibility of target selection based on repeated imaging scans is consistent with the expected random photometric errors. The main cause of incompleteness is blending with saturated stars, which becomes more significant for brighter, larger galaxies. The SDSS spectra are of high enough signal-to-noise ratio (S/N > 4 per pixel) that essentially all targeted galaxies (99.9%) yield a reliable redshift (i.e., with statistical error less than 30 km s^-1). About 6% of galaxies that satisfy the selection criteria are not observed because they have a companion closer than the 55'' minimum separation of spectroscopic fibers, but these galaxies can be accounted for in statistical analyses of clustering or galaxy properties. The uniformity and completeness of the galaxy sample make it ideal for studies of large-scale structure and the characteristics of the galaxy population in the local universe.
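
A minimal sketch of the two headline cuts applied to a toy catalog; the values are made up for illustration, and real SDSS target selection additionally involves star-galaxy separation and masking of blends with saturated stars, which this sketch omits:

```python
import numpy as np

# Hypothetical catalog columns: r-band Petrosian magnitude and r-band
# Petrosian half-light surface brightness (mag arcsec^-2).
petro_r = np.array([16.9, 17.5, 18.1, 17.7])
mu50 = np.array([23.0, 24.9, 22.5, 24.1])

# The two cuts quoted in the abstract: r <= 17.77 and mu50 <= 24.5.
is_target = (petro_r <= 17.77) & (mu50 <= 24.5)
print(is_target)  # [ True False False  True]
```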

Journal ArticleDOI
03 Jul 2002-JAMA
TL;DR: Lower rates of CHD events among women in the hormone group in the final years of HERS did not persist during additional years of follow-up, and hormone therapy did not reduce risk of cardiovascular events in women with CHD.
Abstract: 1.22); HERS II, 1.00 (95% CI, 0.77-1.29); and overall, 0.99 (0.84-1.17). The overall RHs were similar after adjustment for potential confounders and differential use of statins between treatment groups (RH, 0.97; 95% CI, 0.82-1.14), and in analyses restricted to women who were adherent to randomized treatment assignment (RH, 0.96; 95% CI, 0.77-1.19). Conclusions Lower rates of CHD events among women in the hormone group in the final years of HERS did not persist during additional years of follow-up. After 6.8 years, hormone therapy did not reduce risk of cardiovascular events in women with CHD. Postmenopausal hormone therapy should not be used to reduce risk for CHD events in women with CHD.

Journal ArticleDOI
25 Sep 2002-JAMA
TL;DR: These are the first population-based estimates for neuropsychiatric symptoms in MCI, indicating a high prevalence associated with this condition as well.
Abstract: Context: Mild cognitive impairment (MCI) may be a precursor to dementia, at least in some cases. Dementia and MCI are associated with neuropsychiatric symptoms in clinical samples. Only 2 population-based studies exist of the prevalence of these symptoms in dementia, and none exist for MCI. Objective: To estimate the prevalence of neuropsychiatric symptoms in dementia and MCI in a population-based study. Design: Cross-sectional study derived from the Cardiovascular Health Study, a longitudinal cohort study. Setting and Participants: A total of 3608 participants were cognitively evaluated using data collected longitudinally over 10 years and additional data collected in 1999-2000 in 4 US counties. Dementia and MCI were classified using clinical criteria and adjudicated by committee review by expert neurologists and psychiatrists. A total of 824 individuals completed the Neuropsychiatric Inventory (NPI); 362 were classified as having dementia, 320 as having MCI; and 142 did not meet criteria for MCI or dementia. Main Outcome Measure: Prevalence of neuropsychiatric symptoms, based on ratings on the NPI in the previous month and from the onset of cognitive symptoms. Results: Of the 682 individuals with dementia or MCI, 43% of MCI participants (n = 138) exhibited neuropsychiatric symptoms in the previous month (29% rated as clinically significant), with depression (20%), apathy (15%), and irritability (15%) being most common. Among the dementia participants, 75% (n = 270) had exhibited a neuropsychiatric symptom in the past month (62% were clinically significant); 55% (n = 199) reported 2 or more and 44% (n = 159) 3 or more disturbances in the past month. In participants with dementia, the most frequent disturbances were apathy (36%), depression (32%), and agitation/aggression (30%). Eighty percent of dementia participants (n = 233) and 50% of MCI participants (n = 139) exhibited at least 1 NPI symptom from the onset of cognitive symptoms. There were no differences in prevalence of neuropsychiatric symptoms between participants with Alzheimer-type dementia and those with other dementias, with the exception of aberrant motor behavior, which was more frequent in Alzheimer-type dementia (5.4% vs 1%; P = .02). Conclusions: Neuropsychiatric symptoms occur in the majority of persons with dementia over the course of the disease. These are the first population-based estimates for neuropsychiatric symptoms in MCI, indicating a high prevalence associated with this condition as well. These symptoms have serious adverse consequences and should be inquired about and treated as necessary. Study of neuropsychiatric symptoms in the context of dementia may improve our understanding of brain-behavior relationships.

Proceedings ArticleDOI
23 Jul 2002
TL;DR: This research optimizes the amount of marketing funds spent on each customer, rather than just making a binary decision on whether to market to him, and takes into account the fact that knowledge of the network is partial and that gathering that knowledge can itself have a cost.
Abstract: Viral marketing takes advantage of networks of influence among customers to inexpensively achieve large changes in behavior. Our research seeks to put it on a firmer footing by mining these networks from data, building probabilistic models of them, and using these models to choose the best viral marketing plan. Knowledge-sharing sites, where customers review products and advise each other, are a fertile source for this type of data mining. In this paper we extend our previous techniques, achieving a large reduction in computational cost, and apply them to data from a knowledge-sharing site. We optimize the amount of marketing funds spent on each customer, rather than just making a binary decision on whether to market to him. We take into account the fact that knowledge of the network is partial, and that gathering that knowledge can itself have a cost. Our results show the robustness and utility of our approach.
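
The heart of the approach is valuing a customer by the behavior change that marketing to them propagates through the network. Below is a toy sketch under a simplified independent-cascade diffusion, which stands in for the paper's richer probabilistic model and its continuous spend optimization; the graph, activation probability, and function name are all illustrative assumptions:

```python
import random

def expected_reach(graph, seed_customer, p=0.1, trials=2000):
    """Monte Carlo estimate of how many customers marketing to one
    customer ultimately activates, under a simple independent-cascade
    word-of-mouth model (a stand-in for the paper's richer model)."""
    total = 0
    for _ in range(trials):
        active = {seed_customer}
        frontier = [seed_customer]
        while frontier:
            node = frontier.pop()
            for neighbor in graph.get(node, []):
                # Each edge gets one chance to pass the influence along.
                if neighbor not in active and random.random() < p:
                    active.add(neighbor)
                    frontier.append(neighbor)
        total += len(active)
    return total / trials

# Hypothetical influence network: who advises whom on a knowledge-sharing site.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}

# A crude plan: rank customers by their expected network reach.
ranked = sorted(graph, key=lambda v: expected_reach(graph, v), reverse=True)
print(ranked)
```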

Journal ArticleDOI
TL;DR: This survey explores the hardware aspects of reconfigurable computing machines, from single-chip architectures to multi-chip systems, including internal structures and external coupling, and examines the software that targets these machines.
Abstract: Due to its potential to greatly accelerate a wide variety of applications, reconfigurable computing has become a subject of a great deal of research. Its key feature is the ability to perform computations in hardware to increase performance, while retaining much of the flexibility of a software solution. In this survey, we explore the hardware aspects of reconfigurable computing machines, from single chip architectures to multi-chip systems, including internal structures and external coupling. We also focus on the software that targets these machines, such as compilation tools that map high-level algorithms directly to the reconfigurable substrate. Finally, we consider the issues involved in run-time reconfigurable systems, which reuse the configurable hardware during program execution.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the use of principles from disturbance ecology and natural stand development to create silvicultural approaches that are more aligned with natural processes, including the role of disturbances in creating structural legacies that become key elements of the post-disturbance stands.

Journal ArticleDOI
TL;DR: The paradoxical finding that longitudinal age trends were steeper than cross-sectional trends suggests that incident poor health may accelerate the age-related decline in androgen levels.
Abstract: We used longitudinal data from the Massachusetts Male Aging Study, a large population-based random-sample cohort of men aged 40-70 yr at baseline, to establish normative age trends for serum level of T and related hormones in middle-aged men and to test whether general health status affected the age trends. Of 1,709 men enrolled in 1987-1989, 1,156 were followed up 7-10 yr afterward. By repeated-measures statistical analysis, we estimated simultaneously the cross-sectional age trend of each hormone between subjects within the baseline data, the cross-sectional trend between subjects within the follow-up data, and the longitudinal trend within subjects between baseline and follow-up. Total T declined cross-sectionally at 0.8%/yr of age within the follow-up data, whereas both free and albumin-bound T declined at about 2%/yr, all significantly more steeply than within the baseline data. Sex hormone-binding globulin increased cross-sectionally at 1.6%/yr in the follow-up data, similarly to baseline. The longitudinal decline within subjects between baseline and follow-up was considerably steeper than the cross-sectional trend within measurement times for total T (1.6%/yr) and bioavailable T (2-3%/yr). Dehydroepiandrosterone, dehydroepiandrosterone sulfate, cortisol, and estrone showed significant longitudinal declines, whereas dihydrotestosterone, pituitary gonadotropins, and PRL rose longitudinally. Apparent good health, defined as absence of chronic illness, prescription medication, obesity, or excessive drinking, added 10-15% to the level of several androgens and attenuated the cross-sectional trends in T and LH but did not otherwise affect longitudinal or cross-sectional trends. The paradoxical finding that longitudinal age trends were steeper than cross-sectional trends suggests that incident poor health may accelerate the age-related decline in androgen levels.

Journal ArticleDOI
TL;DR: Recent progress in understanding the function and regulation of MAP kinase pathways across all phases of immune responses in mammalian species is summarized.
Abstract: MAP kinases are among the most ancient signal transduction pathways and are widely used throughout evolution in many physiological processes. In mammalian species, MAP kinases are involved in all aspects of immune responses, from the initiation phase of innate immunity, to activation of adaptive immunity, and to cell death when immune function is complete. In this review, we summarize recent progress in understanding the function and regulation of MAP kinase pathways in these phases of immune responses.

Journal ArticleDOI
TL;DR: This paper found that firms with higher transient institutional ownership, greater reliance on implicit claims with their stakeholders, and higher value-relevance of earnings are more likely to meet or exceed expectations at the earnings announcement.
Abstract: Recent reports in the business press allege that managers take actions to avoid negative earnings surprises. I hypothesize that certain firm characteristics are associated with greater incentives to avoid negative surprises. I find that firms with higher transient institutional ownership, greater reliance on implicit claims with their stakeholders, and higher value‐relevance of earnings are more likely to meet or exceed expectations at the earnings announcement. I also examine whether firms manage earnings upward or guide analysts' forecasts downward to avoid missing expectations at the earnings announcement. I examine the relation between firm characteristics and the probability (conditional on meeting analysts' expectations) of having (1) positive abnormal accruals, and (2) forecasts that are lower than expected (using a model of prior earnings changes). Overall, the results suggest that both mechanisms play a role in avoiding negative earnings surprises.

Journal ArticleDOI
TL;DR: A threshold level of LIP activity appears to mark the completion of the decision process and to govern the tradeoff between accuracy and speed of perception, suggesting that neurons in LIP integrate time-varying signals that originate in the extrastriate visual cortex.
Abstract: Decisions about the visual world can take time to form, especially when information is unreliable. We studied the neural correlate of gradual decision formation by recording activity from the lateral intraparietal cortex (area LIP) of rhesus monkeys during a combined motion-discrimination reaction-time task. Monkeys reported the direction of random-dot motion by making an eye movement to one of two peripheral choice targets, one of which was within the response field of the neuron. We varied the difficulty of the task and measured both the accuracy of direction discrimination and the time required to reach a decision. Both the accuracy and speed of decisions increased as a function of motion strength. During the period of decision formation, the epoch between onset of visual motion and the initiation of the eye movement response, LIP neurons underwent ramp-like changes in their discharge rate that predicted the monkey's decision. A steeper rise in spike rate was associated with stronger stimulus motion and shorter reaction times. The observations suggest that neurons in LIP integrate time-varying signals that originate in the extrastriate visual cortex, accumulating evidence for or against a specific behavioral response. A threshold level of LIP activity appears to mark the completion of the decision process and to govern the tradeoff between accuracy and speed of perception.
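
The integration-to-threshold account lends itself to a compact simulation. Below is a toy accumulator-to-bound sketch, not the authors' fitted model, that reproduces the qualitative finding: stronger motion yields faster and more accurate decisions, with the bound governing the speed-accuracy tradeoff. All parameter values are illustrative:

```python
import numpy as np

def accumulate_to_bound(coherence, bound=30.0, noise=1.0, n_trials=1000, seed=0):
    """Integrate noisy momentary evidence (mean proportional to motion
    coherence) until it hits +bound (correct) or -bound (error).
    Returns (accuracy, mean reaction time in steps)."""
    rng = np.random.default_rng(seed)
    n_correct, rts = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0
        while abs(x) < bound:
            x += coherence + noise * rng.standard_normal()
            t += 1
        n_correct += x > 0
        rts.append(t)
    return n_correct / n_trials, float(np.mean(rts))

# Stronger motion (higher coherence) gives faster, more accurate choices.
for coh in (0.05, 0.2, 0.5):
    acc, rt = accumulate_to_bound(coh)
    print(f"coherence {coh}: accuracy {acc:.2f}, mean RT {rt:.0f} steps")
```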