
Showing papers by "University of Chicago" published in 2008


Journal ArticleDOI
TL;DR: A fully automated service for annotating bacterial and archaeal genomes that identifies protein-encoding, rRNA and tRNA genes, assigns functions to the genes, predicts which subsystems are represented in the genome, uses this information to reconstruct the metabolic network and makes the output easily downloadable for the user.
Abstract: The number of prokaryotic genome sequences becoming available is growing steadily and is growing faster than our ability to accurately annotate them. We describe a fully automated service for annotating bacterial and archaeal genomes. The service identifies protein-encoding, rRNA and tRNA genes, assigns functions to the genes, predicts which subsystems are represented in the genome, uses this information to reconstruct the metabolic network and makes the output easily downloadable for the user. In addition, the annotated genome can be browsed in an environment that supports comparative analysis with the annotated genomes maintained in the SEED environment. The service normally makes the annotated genome available within 12–24 hours of submission, but ultimately the quality of such a service will be judged in terms of accuracy, consistency, and completeness of the produced annotations. We summarize our attempts to address these issues and discuss plans for incrementally enhancing the service. By providing accurate, rapid annotation freely to the community we have created an important community resource. The service has now been utilized by over 120 external users annotating over 350 distinct genomes.

9,397 citations


Book
08 Apr 2008
TL;DR: In Nudge, Thaler and Sunstein argue that human beings are susceptible to various biases that can lead us to blunder and make bad decisions involving education, personal finance, health care, mortgages and credit cards, the family, and even the planet itself.
Abstract: A groundbreaking discussion of how we can apply the new science of choice architecture to nudge people toward decisions that will improve their lives by making them healthier, wealthier, and more free Every day, we make decisions on topics ranging from personal investments to schools for our children to the meals we eat to the causes we champion. Unfortunately, we often choose poorly. Nobel laureate Richard Thaler and legal scholar and bestselling author Cass Sunstein explain in this important exploration of choice architecture that, being human, we all are susceptible to various biases that can lead us to blunder. Our mistakes make us poorer and less healthy; we often make bad decisions involving education, personal finance, health care, mortgages and credit cards, the family, and even the planet itself. In Nudge, Thaler and Sunstein invite us to enter an alternative world, one that takes our humanness as a given. They show that by knowing how people think, we can design choice environments that make it easier for people to choose what is best for themselves, their families, and their society. Using colorful examples from the most important aspects of life, Thaler and Sunstein demonstrate how thoughtful "choice architecture" can be established to nudge us in beneficial directions without restricting freedom of choice. Nudge offers a unique new take-from neither the left nor the right-on many hot-button issues, for individuals and governments alike. This is one of the most engaging and provocative books to come along in many years.

7,772 citations


Journal ArticleDOI
Jean Bousquet, N. Khaltaev, Alvaro A. Cruz1, Judah A. Denburg2, W. J. Fokkens3, Alkis Togias4, T. Zuberbier5, Carlos E. Baena-Cagnani6, Giorgio Walter Canonica7, C. van Weel8, Ioana Agache9, Nadia Aït-Khaled, Claus Bachert10, Michael S. Blaiss11, Sergio Bonini12, L.-P. Boulet13, Philippe-Jean Bousquet, Paulo Augusto Moreira Camargos14, K-H. Carlsen15, Y. Z. Chen, Adnan Custovic16, Ronald Dahl17, Pascal Demoly, H. Douagui, Stephen R. Durham18, R. Gerth van Wijk19, O. Kalayci19, Michael A. Kaliner20, You Young Kim21, Marek L. Kowalski, Piotr Kuna22, L. T. T. Le23, Catherine Lemière24, Jing Li25, Richard F. Lockey26, S. Mavale-Manuel26, Eli O. Meltzer27, Y. Mohammad28, J Mullol, Robert M. Naclerio29, Robyn E O'Hehir30, K. Ohta31, S. Ouedraogo31, S. Palkonen, Nikolaos G. Papadopoulos32, Gianni Passalacqua7, Ruby Pawankar33, Todor A. Popov34, Klaus F. Rabe35, J Rosado-Pinto36, G. K. Scadding37, F. E. R. Simons38, Elina Toskala39, E. Valovirta40, P. Van Cauwenberge10, De Yun Wang41, Magnus Wickman42, Barbara P. Yawn43, Arzu Yorgancioglu44, Osman M. Yusuf, H. J. Zar45, Isabella Annesi-Maesano46, E.D. Bateman45, A. Ben Kheder47, Daniel A. Boakye48, J. Bouchard, Peter Burney18, William W. Busse49, Moira Chan-Yeung50, Niels H. Chavannes35, A.G. Chuchalin, William K. Dolen51, R. Emuzyte52, Lawrence Grouse53, Marc Humbert, C. M. Jackson54, Sebastian L. Johnston18, Paul K. Keith2, James P. Kemp27, J. M. Klossek55, Désirée Larenas-Linnemann55, Brian J. Lipworth54, Jean-Luc Malo24, Gailen D. Marshall56, Charles K. Naspitz57, K. Nekam, Bodo Niggemann58, Ewa Nizankowska-Mogilnicka59, Yoshitaka Okamoto60, M. P. Orru61, Paul Potter45, David Price62, Stuart W. Stoloff63, Olivier Vandenplas, Giovanni Viegi, Dennis M. Williams64 
Federal University of Bahia1, McMaster University2, University of Amsterdam3, National Institutes of Health4, Charité5, Catholic University of Cordoba6, University of Genoa7, Radboud University Nijmegen8, Transilvania University of Brașov9, Ghent University10, University of Tennessee Health Science Center11, University of Naples Federico II12, Laval University13, Universidade Federal de Minas Gerais14, University of Oslo15, University of Manchester16, Aarhus University17, Imperial College London18, Erasmus University Rotterdam19, George Washington University20, Seoul National University21, Medical University of Łódź22, Hai phong University Of Medicine and Pharmacy23, Université de Montréal24, Guangzhou Medical University25, University of South Florida26, University of California, San Diego27, University of California28, University of Chicago29, Monash University30, Teikyo University31, National and Kapodistrian University of Athens32, Nippon Medical School33, Sofia Medical University34, Leiden University35, Leiden University Medical Center36, University College London37, University of Manitoba38, University of Helsinki39, Finnish Institute of Occupational Health40, National University of Singapore41, Karolinska Institutet42, University of Minnesota43, Celal Bayar University44, University of Cape Town45, Pierre-and-Marie-Curie University46, Tunis University47, University of Ghana48, University of Wisconsin-Madison49, University of British Columbia50, Georgia Regents University51, Vilnius University52, University of Washington53, University of Dundee54, University of Poitiers55, University of Mississippi56, Federal University of São Paulo57, German Red Cross58, Jagiellonian University Medical College59, Chiba University60, American Pharmacists Association61, University of Aberdeen62, University of Nevada, Reno63, University of North Carolina at Chapel Hill64
01 Apr 2008-Allergy
TL;DR: Recommendations for the management of allergic rhinitis and asthma are similar in the 1999 ARIA workshop report and the 2008 Update; the GRADE approach will be used in the future but is not yet available.
Abstract: Allergic rhinitis is a symptomatic disorder of the nose induced after allergen exposure by an IgE-mediated inflammation of the membranes lining the nose. It is a global health problem that causes major illness and disability worldwide. Over 600 million patients from all countries, all ethnic groups and of all ages suffer from allergic rhinitis. It affects social life, sleep, school and work and its economic impact is substantial. Risk factors for allergic rhinitis are well identified. Indoor and outdoor allergens as well as occupational agents cause rhinitis and other allergic diseases. The role of indoor and outdoor pollution is probably very important, but has yet to be fully understood both for the occurrence of the disease and its manifestations. In 1999, during the Allergic Rhinitis and its Impact on Asthma (ARIA) WHO workshop, the expert panel proposed a new classification for allergic rhinitis which was subdivided into 'intermittent' or 'persistent' disease. This classification is now validated. The diagnosis of allergic rhinitis is often quite easy, but in some cases it may cause problems and many patients are still under-diagnosed, often because they do not perceive the symptoms of rhinitis as a disease impairing their social life, school and work. The management of allergic rhinitis is well established and the ARIA expert panel based its recommendations on evidence using an extensive review of the literature available up to December 1999. The statements of evidence for the development of these guidelines followed WHO rules and were based on those of Shekelle et al. A large number of papers have been published since 2000 and are extensively reviewed in the 2008 Update using the same evidence-based system. Recommendations for the management of allergic rhinitis are similar in both the ARIA workshop report and the 2008 Update. In the future, the GRADE approach will be used, but is not yet available. Another important aspect of the ARIA guidelines was to consider co-morbidities. Both allergic rhinitis and asthma are systemic inflammatory conditions and often co-exist in the same patients. In the 2008 Update, these links have been confirmed. The ARIA document is not intended to be a standard-of-care document for individual countries. It is provided as a basis for physicians, health care professionals and organizations involved in the treatment of allergic rhinitis and asthma in various countries to facilitate the development of relevant local standard-of-care documents for patients.

3,769 citations


Journal ArticleDOI
TL;DR: The open-source metagenomics RAST service provides a new paradigm for the annotation and analysis of metagenomes that is stable, extensible, and freely available to all researchers.
Abstract: Random community genomes (metagenomes) are now commonly used to study microbes in different environments. Over the past few years, the major challenge associated with metagenomics shifted from generating to analyzing sequences. High-throughput, low-cost next-generation sequencing has provided access to metagenomics to a wide range of researchers. A high-throughput pipeline has been constructed to provide high-performance computing to all researchers interested in using metagenomics. The pipeline produces automated functional assignments of sequences in the metagenome by comparing both protein and nucleotide databases. Phylogenetic and functional summaries of the metagenomes are generated, and tools for comparative metagenomics are incorporated into the standard views. User access is controlled to ensure data privacy, but the collaborative environment underpinning the service provides a framework for sharing datasets between multiple users. In the metagenomics RAST, all users retain full control of their data, and everything is available for download in a variety of formats. The open-source metagenomics RAST service provides a new paradigm for the annotation and analysis of metagenomes. With built-in support for multiple data sources and a back end that houses abstract data types, the metagenomics RAST is stable, extensible, and freely available to all researchers. This service has removed one of the primary bottlenecks in metagenome sequence analysis – the availability of high-performance computing for annotating the data. http://metagenomics.nmpdr.org

3,322 citations


Proceedings ArticleDOI
01 Nov 2008
TL;DR: In this paper, the authors compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both technologies.
Abstract: Cloud computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for cloud computing and there seems to be no consensus on what a cloud is. On the other hand, cloud computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established grid computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both.

3,132 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: A discriminatively trained, multiscale, deformable part model for object detection, which achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge and outperforms the best results in the 2007 challenge in ten out of twenty categories.
Abstract: This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose.
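The alternating scheme described above (the problem becomes convex once latent values are fixed for the positives, and hard negatives are mined between convex sub-problems) can be made concrete with a small sketch. The following fragment is illustrative only, not the authors' implementation: HOG pyramids, part filters, and deformation costs are abstracted into precomputed candidate feature vectors, and all names and hyperparameters are hypothetical.

```python
# Illustrative sketch (not the authors' code) of the semi-convex training scheme:
# fix the latent placements for positives, solve the resulting convex hinge-loss
# problem, then data-mine hard negatives.
import numpy as np

def train_latent_svm(pos_latents, neg_pool, dim, C=0.01, outer_iters=5, lr=1e-3, epochs=50):
    """pos_latents: list of (n_candidates_i, dim) arrays of candidate latent
    placements per positive example; neg_pool: (n_neg, dim) negative features."""
    w = np.zeros(dim)
    hard_negs = neg_pool[:100]                       # initial negative cache
    for _ in range(outer_iters):
        # 1) Fix latent values for positives: keep the highest-scoring placement.
        pos_feats = np.array([cands[np.argmax(cands @ w)] for cands in pos_latents])
        # 2) The problem is now convex: subgradient descent on the regularized hinge loss.
        for _ in range(epochs):
            grad = w.copy()                          # gradient of 0.5*||w||^2
            for x, y in [(f, +1) for f in pos_feats] + [(f, -1) for f in hard_negs]:
                if y * (w @ x) < 1:                  # margin violation
                    grad -= C * y * x
            w -= lr * grad
        # 3) Data-mine hard negatives: keep negatives that fall inside the margin.
        hard_negs = neg_pool[neg_pool @ w > -1]
    return w
```

The property being exploited is that step 2 is an ordinary convex SVM problem once step 1 has committed to a single latent placement per positive example.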

2,893 citations


Journal ArticleDOI
TL;DR: It is found that the Illumina sequencing data are highly replicable, with relatively little technical variation, and thus, for many purposes, it may suffice to sequence each mRNA sample only once (i.e., using one lane).
Abstract: Ultra-high-throughput sequencing is emerging as an attractive alternative to microarrays for genotyping, analysis of methylation patterns, and identification of transcription factor binding sites. Here, we describe an application of the Illumina sequencing (formerly Solexa sequencing) platform to study mRNA expression levels. Our goals were to estimate technical variance associated with Illumina sequencing in this context and to compare its ability to identify differentially expressed genes with existing array technologies. To do so, we estimated gene expression differences between liver and kidney RNA samples using multiple sequencing replicates, and compared the sequencing data to results obtained from Affymetrix arrays using the same RNA samples. We find that the Illumina sequencing data are highly replicable, with relatively little technical variation, and thus, for many purposes, it may suffice to sequence each mRNA sample only once (i.e., using one lane). The information in a single lane of Illumina sequencing data appears comparable to that in a single array in enabling identification of differentially expressed genes, while allowing for additional analyses such as detection of low-expressed genes, alternative splice variants, and novel transcripts. Based on our observations, we propose an empirical protocol and a statistical framework for the analysis of gene expression using ultra-high-throughput sequencing technology.
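Because the lane-to-lane technical variation proved close to Poisson, a simple count-based test is the natural building block of such a statistical framework. The sketch below is a generic per-gene Poisson likelihood-ratio test between two samples with different library sizes; it illustrates the idea rather than reproducing the authors' protocol, and the function name and example numbers are hypothetical.

```python
# A minimal sketch (not the authors' pipeline) of a per-gene Poisson
# likelihood-ratio test for differential expression between two samples,
# accounting for different library sizes; names and numbers are illustrative.
import numpy as np
from scipy.stats import chi2

def poisson_lrt(count_a, count_b, libsize_a, libsize_b):
    """Test H0: same expression rate per mapped read in both samples."""
    n = count_a + count_b
    if n == 0:
        return 1.0
    rate0 = n / (libsize_a + libsize_b)                      # pooled rate under H0
    mu0 = np.array([rate0 * libsize_a, rate0 * libsize_b])   # expected counts under H0
    mu1 = np.array([count_a, count_b], dtype=float)          # MLE under H1 = observed counts
    counts = mu1
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = np.where(counts > 0, counts * np.log(mu1 / mu0), 0.0)
    stat = 2 * (ll.sum() - (mu1.sum() - mu0.sum()))          # likelihood-ratio statistic
    return chi2.sf(stat, df=1)                               # p-value, 1 degree of freedom

# Example: 50 reads for a gene in a 10M-read liver lane vs 120 reads in a 12M-read kidney lane.
print(poisson_lrt(50, 120, 10e6, 12e6))
```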

2,834 citations


Journal ArticleDOI
TL;DR: The results strongly confirm 11 previously reported loci and provide genome-wide significant evidence for 21 additional loci, including the regions containing STAT3, JAK2, ICOSLG, CDKAL1 and ITLN1, which offer promise for informed therapeutic development.
Abstract: Several risk factors for Crohn's disease have been identified in recent genome-wide association studies. To advance gene discovery further, we combined data from three studies on Crohn's disease (a total of 3,230 cases and 4,829 controls) and carried out replication in 3,664 independent cases with a mixture of population-based and family-based controls. The results strongly confirm 11 previously reported loci and provide genome-wide significant evidence for 21 additional loci, including the regions containing STAT3, JAK2, ICOSLG, CDKAL1 and ITLN1. The expanded molecular understanding of the basis of this disease offers promise for informed therapeutic development.
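Combining three scans before replication is, mechanically, a per-SNP meta-analysis. The fragment below sketches a standard fixed-effects, inverse-variance combination of log odds ratios with the conventional genome-wide significance threshold of p < 5e-8; it is a generic illustration, not the consortium's pipeline, and the input numbers are made up.

```python
# Illustrative sketch (not the study's pipeline): fixed-effects, inverse-variance
# meta-analysis of per-SNP log odds ratios across scans.
import numpy as np
from scipy.stats import norm

def meta_analyze(betas, ses):
    """betas, ses: per-study log odds ratios and standard errors for one SNP."""
    betas, ses = np.asarray(betas, dtype=float), np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                                  # inverse-variance weights
    beta_meta = np.sum(w * betas) / np.sum(w)
    se_meta = np.sqrt(1.0 / np.sum(w))
    z = beta_meta / se_meta
    p = 2 * norm.sf(abs(z))                           # two-sided p-value
    return beta_meta, se_meta, p

# Hypothetical example: three studies reporting log(OR) and SE for one variant.
beta, se, p = meta_analyze([0.25, 0.31, 0.22], [0.06, 0.08, 0.07])
print(f"OR = {np.exp(beta):.2f}, p = {p:.1e}, genome-wide significant: {p < 5e-8}")
```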

2,584 citations


Journal ArticleDOI
TL;DR: This paper examined the effectiveness of signs requesting hotel guests' participation in an environmental conservation program and found that normative appeals were more effective when describing group behavior that occurred in the setting that most closely matched individuals' immediate situational circumstances, referred to as provincial norms.
Abstract: Two field experiments examined the effectiveness of signs requesting hotel guests’ participation in an environmental conservation program. Appeals employing descriptive norms (e.g., “the majority of guests reuse their towels”) proved superior to a traditional appeal widely used by hotels that focused solely on environmental protection. Moreover, normative appeals were most effective when describing group behavior that occurred in the setting that most closely matched individuals’ immediate situational circumstances (e.g., “the majority of guests in this room reuse their towels”), which we refer to as provincial norms. Theoretical and practical implications for managing proenvironmental efforts are discussed.

2,514 citations


Journal ArticleDOI
TL;DR: A set of guidelines for the selection and interpretation of the methods that can be used by investigators who are attempting to examine macroautophagy and related processes, as well as by reviewers who need to provide realistic and reasonable critiques of papers that investigate these processes are presented.
Abstract: Research in autophagy continues to accelerate,(1) and as a result many new scientists are entering the field. Accordingly, it is important to establish a standard set of criteria for monitoring macroautophagy in different organisms. Recent reviews have described the range of assays that have been used for this purpose.(2,3) There are many useful and convenient methods that can be used to monitor macroautophagy in yeast, but relatively few in other model systems, and there is much confusion regarding acceptable methods to measure macroautophagy in higher eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers of autophagosomes versus those that measure flux through the autophagy pathway; thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from fully functional autophagy that includes delivery to, and degradation within, lysosomes (in most higher eukaryotes) or the vacuole (in plants and fungi). Here, we present a set of guidelines for the selection and interpretation of the methods that can be used by investigators who are attempting to examine macroautophagy and related processes, as well as by reviewers who need to provide realistic and reasonable critiques of papers that investigate these processes. This set of guidelines is not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to verify an autophagic response.

2,310 citations


Journal ArticleDOI
TL;DR: In this article, the miR-200 miRNA family was found to directly target the mRNA of the E-cadherin transcriptional repressors ZEB1 (TCF8/δEF1) and ZEB2 (SMAD-interacting protein 1 [SIP1]/ZFHX1B).
Abstract: Cancer progression has similarities with the process of epithelial-to-mesenchymal transition (EMT) found during embryonic development, during which cells down-regulate E-cadherin and up-regulate Vimentin expression. By evaluating the expression of 207 microRNAs (miRNAs) in the 60 cell lines of the drug screening panel maintained by the National Cancer Institute, we identified the miR-200 miRNA family as an extraordinary marker for cells that express E-cadherin but lack expression of Vimentin. These findings were extended to primary ovarian cancer specimens. miR-200 was found to directly target the mRNA of the E-cadherin transcriptional repressors ZEB1 (TCF8/δEF1) and ZEB2 (SMAD-interacting protein 1 [SIP1]/ZFHX1B). Ectopic expression of miR-200 caused up-regulation of E-cadherin in cancer cell lines and reduced their motility. Conversely, inhibition of miR-200 reduced E-cadherin expression, increased expression of Vimentin, and induced EMT. Our data identify miR-200 as a powerful marker and determining factor of the epithelial phenotype of cancer cells.

Journal ArticleDOI
TL;DR: In this article, the authors examine the economic consequences of mandatory International Financial Reporting Standards (IFRS) reporting around the world and find that market liquidity increases around the time of the introduction of IFRS.
Abstract: This paper examines the economic consequences of mandatory International Financial Reporting Standards (IFRS) reporting around the world. We analyze the effects on market liquidity, cost of capital, and Tobin's q in 26 countries using a large sample of firms that are mandated to adopt IFRS. We find that, on average, market liquidity increases around the time of the introduction of IFRS. We also document a decrease in firms' cost of capital and an increase in equity valuations, but only if we account for the possibility that the effects occur prior to the official adoption date. Partitioning our sample, we find that the capital-market benefits occur only in countries where firms have incentives to be transparent and where legal enforcement is strong, underscoring the central importance of firms' reporting incentives and countries' enforcement regimes for the quality of financial reporting. Comparing mandatory and voluntary adopters, we find that the capital market effects are most pronounced for firms that voluntarily switch to IFRS, both in the year when they switch and again later, when IFRS become mandatory. While the former result is likely due to self-selection, the latter result cautions us to attribute the capital-market effects for mandatory adopters solely or even primarily to the IFRS mandate. Many adopting countries make concurrent efforts to improve enforcement and governance regimes, which likely play into our findings. Consistent with this interpretation, the estimated liquidity improvements are smaller in magnitude when we analyze them on a monthly basis, which is more likely to isolate IFRS reporting effects.

Journal ArticleDOI
TL;DR: In this article, the authors found that firms that just achieved important earnings benchmarks used less accruals and more real earnings management after the passage of the Sarbanes-Oxley Act (SOX) when compared to similar firms before SOX.
Abstract: We document that accrual‐based earnings management increased steadily from 1987 until the passage of the Sarbanes‐Oxley Act (SOX) in 2002, followed by a significant decline after the passage of SOX. Conversely, the level of real earnings management activities declined prior to SOX and increased significantly after the passage of SOX, suggesting that firms switched from accrual‐based to real earnings management methods after the passage of SOX. We also document that the accrual‐based earnings management activities were particularly high in the period immediately preceding SOX. Consistent with these results, we find that firms that just achieved important earnings benchmarks used less accruals and more real earnings management after SOX when compared to similar firms before SOX. In addition, our analysis provides evidence that the increases in accrual‐based earnings management in the period preceding SOX were concurrent with increases in equity‐based compensation. Our results suggest that stock‐option compo...

Journal ArticleDOI
TL;DR: The benazepril-amlodipine combination was superior in reducing cardiovascular events in patients with hypertension who were at high risk for such events.
Abstract: Background The optimal combination drug therapy for hypertension is not established, although current U.S. guidelines recommend inclusion of a diuretic. We hypothesized that treatment with the combination of an angiotensin-converting–enzyme (ACE) inhibitor and a dihydropyridine calcium-channel blocker would be more effective in reducing the rate of cardiovascular events than treatment with an ACE inhibitor plus a thiazide diuretic. Methods In a randomized, double-blind trial, we assigned 11,506 patients with hypertension who were at high risk for cardiovascular events to receive treatment with either benazepril plus amlodipine or benazepril plus hydrochlorothiazide. The primary end point was the composite of death from cardiovascular causes, nonfatal myocardial infarction, nonfatal stroke, hospitalization for angina, resuscitation after sudden cardiac arrest, and coronary revascularization. Results The baseline characteristics of the two groups were similar. The trial was terminated early after a mean follow-up of 36 months, when the boundary of the prespecified stopping rule was exceeded. Mean blood pressures after dose adjustment were 131.6/73.3 mm Hg in the benazepril–amlodipine group and 132.5/74.4 mm Hg in the benazepril–hydrochlorothiazide group. There were 552 primary-outcome events in the benazepril–amlodipine group (9.6%) and 679 in the benazepril–hydrochlorothiazide group (11.8%), representing an absolute risk reduction with benazepril–amlodipine therapy of 2.2% and a relative risk reduction of 19.6% (hazard ratio, 0.80, 95% confidence interval [CI], 0.72 to 0.90; P<0.001). For the secondary end point of death from cardiovascular causes, nonfatal myocardial infarction, and nonfatal stroke, the hazard ratio was 0.79 (95% CI, 0.67 to 0.92; P = 0.002). Rates of adverse events were consistent with those observed from clinical experience with the study drugs. Conclusions The benazepril–amlodipine combination was superior to the benazepril–hydrochlorothiazide combination in reducing cardiovascular events in patients with hypertension who were at high risk for such events. (ClinicalTrials.gov number, NCT00170950.)
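The reported risk reductions follow directly from the event rates and the hazard ratio; the arithmetic below is a reader's check (the published 19.6% is derived from the unrounded hazard ratio rather than the rounded 0.80).

```latex
% Worked arithmetic behind the reported risk reductions.
\[
\mathrm{ARR} = 11.8\% - 9.6\% = 2.2\%,
\qquad
\mathrm{RRR} = 1 - \mathrm{HR} = 1 - 0.80 \approx 20\%\ \text{(reported as 19.6\%)}.
\]
```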

Journal ArticleDOI
23 Oct 2008-Nature
TL;DR: It is found that MyD88 deficiency changes the composition of the distal gut microbiota, and that exposure to the microbiota of specific pathogen-free MyD 88-negative NOD donors attenuates T1D in germ-free NOD recipients.
Abstract: Type 1 diabetes (T1D) is a debilitating autoimmune disease that results from T-cell-mediated destruction of insulin-producing beta-cells. Its incidence has increased during the past several decades in developed countries, suggesting that changes in the environment (including the human microbial environment) may influence disease pathogenesis. The incidence of spontaneous T1D in non-obese diabetic (NOD) mice can be affected by the microbial environment in the animal housing facility or by exposure to microbial stimuli, such as injection with mycobacteria or various microbial products. Here we show that specific pathogen-free NOD mice lacking MyD88 protein (an adaptor for multiple innate immune receptors that recognize microbial stimuli) do not develop T1D. The effect is dependent on commensal microbes because germ-free MyD88-negative NOD mice develop robust diabetes, whereas colonization of these germ-free MyD88-negative NOD mice with a defined microbial consortium (representing bacterial phyla normally present in human gut) attenuates T1D. We also find that MyD88 deficiency changes the composition of the distal gut microbiota, and that exposure to the microbiota of specific pathogen-free MyD88-negative NOD donors attenuates T1D in germ-free NOD recipients. Together, these findings indicate that interaction of the intestinal microbes with the innate immune system is a critical epigenetic factor modifying T1D predisposition.

Journal ArticleDOI
TL;DR: An iterative algorithm, based on recent work in compressive sensing, that minimizes the total variation of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative is developed.
Abstract: An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories.
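The algorithm alternates two ingredients: POCS steps that enforce data consistency and non-negativity, and steepest-descent steps on the image TV with an adaptive step size. The sketch below captures that structure for a generic linear projector A in 2D; it is not the authors' implementation, and the stopping criteria, parameter schedule, and cone-beam geometry are deliberately simplified.

```python
# A compact sketch (not the authors' implementation) of the ASD-POCS structure:
# alternate (i) POCS steps enforcing data consistency and non-negativity with
# (ii) steepest-descent steps on the image total variation (TV), using a TV step
# size tied to the data residual that shrinks over iterations.
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Subgradient of a smoothed total-variation term for a 2D image."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    div = np.diff(gx, axis=0, prepend=gx[:1, :]) + np.diff(gy, axis=1, prepend=gy[:, :1])
    return -div

def asd_pocs(A, data, shape, n_iter=50, n_art=1, n_tv=20, alpha=0.2, alpha_red=0.95):
    """A: (n_meas, n_pixels) system matrix; data: (n_meas,) measurements; shape: image shape."""
    x = np.zeros(shape)
    row_norms = (A**2).sum(axis=1) + 1e-12
    for _ in range(n_iter):
        # --- POCS: ART sweeps toward data consistency, then non-negativity ---
        for _ in range(n_art):
            for i in range(A.shape[0]):
                r = data[i] - A[i] @ x.ravel()
                x += (r / row_norms[i]) * A[i].reshape(shape)
        x = np.clip(x, 0, None)
        dp = np.linalg.norm(A @ x.ravel() - data)   # current data residual norm
        # --- TV minimization: normalized steepest descent, step tied to residual ---
        step = alpha * dp                           # simplified vs. the paper's schedule
        for _ in range(n_tv):
            g = tv_gradient(x)
            x -= step * g / (np.linalg.norm(g) + 1e-12)
        alpha *= alpha_red                          # adaptive step-size reduction
    return x
```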

Journal ArticleDOI
TL;DR: In this paper, the mass function of dark matter halos is measured in a large set of collisionless cosmological simulations of flat ΛCDM cosmology and its evolution is investigated out to z = 2.5.
Abstract: We measure the mass function of dark matter halos in a large set of collisionless cosmological simulations of flat ΛCDM cosmology and investigate its evolution at z ≲ 2. Halos are identified as isolated density peaks, and their masses are measured within a series of radii enclosing specific overdensities. We argue that these spherical overdensity masses are more directly linked to cluster observables than masses measured using the friends-of-friends algorithm (FOF), and are therefore preferable for accurate forecasts of halo abundances. Our simulation set allows us to calibrate the mass function at z = 0 for virial masses in the range 10^11 h^−1 M☉ ≤ M ≤ 10^15 h^−1 M☉ to 5%, improving on previous results by a factor of 2-3. We derive fitting functions for the halo mass function in this mass range for a wide range of overdensities, both at z = 0 and earlier epochs. Earlier studies have sought to calibrate a universal mass function, in the sense that the same functional form and parameters can be used for different cosmologies and redshifts when expressed in appropriate variables. In addition to our fitting formulae, our main finding is that the mass function cannot be represented by a universal function at this level of accuracy. The amplitude of the universal function decreases monotonically by 20%-50%, depending on the mass definition, from z = 0 to 2.5. We also find evidence for redshift evolution in the overall shape of the mass function.
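For reference, halo abundance results in this literature are usually quoted through a multiplicity function f(σ); the relation and the four-parameter fitting shape below are a sketch of the standard convention rather than a transcription of the paper's calibrated coefficients.

```latex
% Standard convention (a sketch, not the paper's coefficient table): the halo
% abundance is written through a multiplicity function f(sigma), fit with a
% four-parameter form for each overdensity and redshift.
\[
\frac{dn}{dM} = f(\sigma)\,\frac{\bar{\rho}_{m}}{M}\,\frac{d\ln\sigma^{-1}}{dM},
\qquad
f(\sigma) = A\left[\left(\frac{\sigma}{b}\right)^{-a} + 1\right] e^{-c/\sigma^{2}}.
\]
```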

Journal ArticleDOI
TL;DR: It is suggested that a wake up and breathe protocol that pairs daily spontaneous awakening trials (ie, interruption of sedatives) with daily spontaneous breathing trials results in better outcomes for mechanically ventilated patients in intensive care than current standard approaches and should become routine practice.

Journal ArticleDOI
TL;DR: Thanks to gold-based catalysts, various organic transformations have been accessible under facile conditions with both high yields and chemoselectivity.
Abstract: Thanks to its unusual stability, metallic gold has been used for thousands of years in jewelry, currency, chinaware, and so forth. However, gold had not become the chemists’ “precious metal” until very recently. In the past few years, reports on gold-catalyzed organic transformations have increased substantially. Thanks to gold-based catalysts, various organic transformations have been accessible under facile conditions with both high yields and chemoselectivity.

Journal ArticleDOI
TL;DR: A dynamic factor model is estimated to solve the problem of endogeneity of inputs and the multiplicity of inputs relative to instruments, and to explore the role of family environments in shaping cognitive and noncognitive skills at different stages of the child's life cycle.
Abstract: This paper estimates models of the evolution of cognitive and noncognitive skills and explores the role of family environments in shaping these skills at different stages of the life cycle of the child. Central to this analysis is identification of the technology of skill formation. We estimate a dynamic factor model to solve the problem of endogeneity of inputs and multiplicity of inputs relative to instruments. We identify the scale of the factors by estimating their effects on adult outcomes. In this fashion we avoid reliance on test scores and changes in test scores that have no natural metric. Parental investments are generally more effective in raising noncognitive skills. Noncognitive skills promote the formation of cognitive skills but, in most specifications of our model, cognitive skills do not promote the formation of noncognitive skills. Parental inputs have different effects at different stages of the child’s life cycle with cognitive skills affected more at early ages and noncognitive skills affected more at later ages.
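Schematically, the estimation problem couples a technology of skill formation for latent cognitive (C) and noncognitive (N) skills with measurement equations that treat test scores as noisy proxies for the latent factors; the equations below sketch that general structure, not the paper's exact functional forms.

```latex
% Schematic structure (a sketch): skills evolve with parental investment I_t and
% parental skills; measurements M (e.g., test scores) proxy the latent factors.
\[
\theta^{k}_{t+1} = f^{k}_{t}\!\left(\theta^{C}_{t},\,\theta^{N}_{t},\,I_{t},\,\theta^{C}_{P},\,\theta^{N}_{P}\right),
\quad k \in \{C,N\},
\qquad
M_{j,t} = \mu_{j,t} + \lambda_{j,t}\,\theta_{t} + \varepsilon_{j,t}.
\]
```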

Journal ArticleDOI
Abstract: Ten years ago, the discovery that the expansion of the universe is accelerating put in place the last major building block of the present cosmological model, in which the universe is composed of 4% baryons, 20% dark matter, and 76% dark energy. At the same time, it posed one of the most profound mysteries in all of science, with deep connections to both astrophysics and particle physics. Cosmic acceleration could arise from the repulsive gravity of dark energy—for example, the quantum energy of the vacuum—or it may signal that general relativity (GR) breaks down on cosmological scales and must be replaced. We review the present observational evidence for cosmic acceleration and what it has revealed about dark energy, discuss the various theoretical ideas that have been proposed to explain acceleration, and describe the key observational probes that will shed light on this enigma in the coming years.
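The sense in which dark energy supplies "repulsive gravity" is captured by the acceleration equation of general relativity: a component with sufficiently negative pressure makes the expansion accelerate.

```latex
% Acceleration equation: a dominant component with w < -1/3 (e.g., vacuum energy
% with w = -1) drives accelerated expansion.
\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\sum_{i}\left(\rho_{i} + \frac{3p_{i}}{c^{2}}\right),
\qquad
w \equiv \frac{p}{\rho c^{2}} < -\tfrac{1}{3}
\;\Longrightarrow\; \ddot{a} > 0 \ \text{(if that component dominates)}.
\]
```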

Journal ArticleDOI
Jennifer K. Adelman-McCarthy, Marcel A. Agüeros, S. Allam, and 170 more authors from 65 institutions
TL;DR: The Sixth Data Release of the Sloan Digital Sky Survey (SDSS) as discussed by the authors contains images and parameters of roughly 287 million objects over 9583 deg², including scans over a large range of Galactic latitudes and longitudes.
Abstract: This paper describes the Sixth Data Release of the Sloan Digital Sky Survey. With this data release, the imaging of the northern Galactic cap is now complete. The survey contains images and parameters of roughly 287 million objects over 9583 deg², including scans over a large range of Galactic latitudes and longitudes. The survey also includes 1.27 million spectra of stars, galaxies, quasars, and blank sky (for sky subtraction) selected over 7425 deg². This release includes much more stellar spectroscopy than was available in previous data releases and also includes detailed estimates of stellar temperatures, gravities, and metallicities. The results of improved photometric calibration are now available, with uncertainties of roughly 1% in g, r, i, and z, and 2% in u, substantially better than the uncertainties in previous data releases. The spectra in this data release have improved wavelength and flux calibration, especially in the extreme blue and extreme red, leading to the qualitatively better determination of stellar types and radial velocities. The spectrophotometric fluxes are now tied to point-spread function magnitudes of stars rather than fiber magnitudes. This gives more robust results in the presence of seeing variations, but also implies a change in the spectrophotometric scale, which is now brighter by roughly 0.35 mag. Systematic errors in the velocity dispersions of galaxies have been fixed, and the results of two independent codes for determining spectral classifications and redshifts are made available. Additional spectral outputs are made available, including calibrated spectra from individual 15 minute exposures and the sky spectrum subtracted from each exposure. We also quantify a recently recognized underestimation of the brightnesses of galaxies of large angular extent due to poor sky subtraction; the bias can exceed 0.2 mag for galaxies brighter than r = 14 mag.

Journal ArticleDOI
TL;DR: Investigation of the persuasive impact and detectability of normative social influence shows that normative messages can be a powerful lever of persuasion but that their influence is underdetected.
Abstract: The present research investigated the persuasive impact and detectability of normative social influence. The first study surveyed 810 Californians about energy conservation and found that descriptive normative beliefs were more predictive of behavior than were other relevant beliefs, even though respondents rated such norms as least important in their conservation decisions. Study 2, a field experiment, showed that normative social influence produced the greatest change in behavior compared to information highlighting other reasons to conserve, even though respondents rated the normative information as least motivating. Results show that normative messages can be a powerful lever of persuasion but that their influence is underdetected.

Journal ArticleDOI
06 Nov 2008-Nature
TL;DR: Despite low average levels of genetic differentiation among Europeans, there is a close correspondence between genetic and geographic distances; indeed, a geographical map of Europe arises naturally as an efficient two-dimensional summary of genetic variation in Europeans.
Abstract: Understanding the genetic structure of human populations is of fundamental interest to medical, forensic and anthropological sciences. Advances in high-throughput genotyping technology have markedly improved our understanding of global patterns of human genetic variation and suggest the potential to use large samples to uncover variation among closely spaced populations. Here we characterize genetic variation in a sample of 3,000 European individuals genotyped at over half a million variable DNA sites in the human genome. Despite low average levels of genetic differentiation among Europeans, we find a close correspondence between genetic and geographic distances; indeed, a geographical map of Europe arises naturally as an efficient two-dimensional summary of genetic variation in Europeans. The results emphasize that when mapping the genetic basis of a disease phenotype, spurious associations can arise if genetic structure is not properly accounted for. In addition, the results are relevant to the prospects of genetic ancestry testing; an individual's DNA can be used to infer their geographic origin with surprising accuracy-often to within a few hundred kilometres.
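The "two-dimensional summary" in question is, at its core, a principal component analysis of the genotype matrix. The sketch below shows that core idea on standardized 0/1/2 allele counts; it is illustrative only (random data, hypothetical function name), not the study's pipeline, which involves additional preprocessing.

```python
# A minimal sketch (not the study's pipeline): PCA of a standardized genotype
# matrix yields per-sample coordinates whose first two axes can be read as a "map".
import numpy as np

def pca_map(genotypes, n_components=2):
    """genotypes: (n_individuals, n_snps) array of 0/1/2 minor-allele counts."""
    G = np.asarray(genotypes, dtype=float)
    p = G.mean(axis=0) / 2.0                              # per-SNP allele frequency
    X = (G - 2 * p) / np.sqrt(2 * p * (1 - p) + 1e-12)    # center and scale each SNP
    U, S, _ = np.linalg.svd(X, full_matrices=False)       # principal components via SVD
    return U[:, :n_components] * S[:n_components]         # sample coordinates

coords = pca_map(np.random.randint(0, 3, size=(100, 1000)))
print(coords.shape)  # (100, 2); with real data the axes align roughly with geography
```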

Journal ArticleDOI
TL;DR: In a generic parametric framework, it is shown that two-stage residual inclusion (2SRI) is consistent while two-stage predictor substitution (2SPS) is not; the results can serve as a guide for future researchers in health economics who are confronted with endogeneity in their empirical work.
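The mechanical difference between the two estimators is easy to state: 2SPS replaces the endogenous regressor with its first-stage prediction, whereas 2SRI keeps the raw regressor and adds the first-stage residual as an extra control. The fragment below is a generic illustration of that difference in a binary-outcome setting (statsmodels, hypothetical variable names), not the paper's own derivations or simulations.

```python
# A sketch (not the paper's code) of 2SPS vs. 2SRI in a nonlinear outcome model.
import numpy as np
import statsmodels.api as sm

def two_stage_estimates(y, x_endog, z_instr, x_exog):
    """y: binary outcome; x_endog: endogenous regressor; z_instr: instrument(s);
    x_exog: exogenous controls; all with the same number of observations."""
    # First stage: endogenous regressor on instruments and exogenous controls.
    W = sm.add_constant(np.column_stack([z_instr, x_exog]))
    first = sm.OLS(x_endog, W).fit()
    resid = x_endog - first.fittedvalues

    # 2SPS: replace x_endog with its first-stage prediction in the outcome model.
    X_2sps = sm.add_constant(np.column_stack([first.fittedvalues, x_exog]))
    fit_2sps = sm.Logit(y, X_2sps).fit(disp=0)

    # 2SRI: keep x_endog and include the first-stage residual as an extra regressor.
    X_2sri = sm.add_constant(np.column_stack([x_endog, resid, x_exog]))
    fit_2sri = sm.Logit(y, X_2sri).fit(disp=0)
    return fit_2sps.params, fit_2sri.params
```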

Journal ArticleDOI
TL;DR: In this article, the impact of cross-bank liquidity variation induced by unanticipated nuclear tests in Pakistan was examined by exploiting crossbank liquidity variations induced by the nuclear tests, and it was shown that for the same firm borrowing from two different banks, its loan from the bank experiencing a 1 percent larger decline in liquidity drops by an additional 0.6 percent.
Abstract: We examine the impact of liquidity shocks by exploiting cross-bank liquidity variation induced by unanticipated nuclear tests in Pakistan. We show that for the same firm borrowing from two different banks, its loan from the bank experiencing a 1 percent larger decline in liquidity drops by an additional 0.6 percent. While banks pass their liquidity shocks on to firms, large firms-particularly those with strong business or political ties-completely compensate this loss by additional borrowing through the credit market. Small firms are unable to do so and face large drops in overall borrowing and increased financial distress.

Journal ArticleDOI
TL;DR: The author shows that when firm heterogeneity is introduced, the impact of trade barriers on trade flows is dampened by the elasticity of substitution, not magnified, overturning the prediction of the representative-firm Krugman model.
Abstract: By considering a model with identical firms, Paul Krugman (1980) predicts that a higher elasticity of substitution between goods magnifies the impact of trade barriers on trade flows. In this paper, I introduce firm heterogeneity in a simple model of international trade. When the distribution of productivity across firms is Pareto, which is close to the observed size distribution of US firms, the predictions of the Krugman model with representative firms are overturned: the impact of trade barriers on trade flows is dampened by the elasticity of substitution, and not magnified. In Krugman (1980), identical countries trade differentiated goods despite the presence of trade barriers because consumers have a preference for variety. If goods are less substitutable, consumers are willing to buy foreign varieties even at a higher cost, and trade barriers have little impact on bilateral trade flows. Total exports from country A to country B are given by the following expression:
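The expression itself is cut off in this excerpt. For orientation, the block below sketches the standard CES form of the Krugman (1980) benchmark, which may differ in notation from the paper's own expression; its bilateral elasticity with respect to the trade cost is 1 − σ, which is why a higher elasticity of substitution magnifies trade barriers in the representative-firm model.

```latex
% Sketch of the standard CES (Krugman 1980) benchmark, not necessarily the exact
% expression elided above: sigma is the elasticity of substitution, w_A the wage,
% tau_AB the iceberg trade cost, and Y_B destination expenditure.
\[
X_{AB} \;\propto\; Y_{B}\,
\frac{\left(w_{A}\,\tau_{AB}\right)^{1-\sigma}}
     {\sum_{k}\left(w_{k}\,\tau_{kB}\right)^{1-\sigma}},
\qquad
\frac{\partial \ln X_{AB}}{\partial \ln \tau_{AB}} = 1-\sigma,
\]
% so a larger sigma magnifies the effect of trade barriers in the
% representative-firm model; Pareto firm heterogeneity overturns this.
```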

Journal ArticleDOI
TL;DR: In this paper, the authors used the photometric parallax method to estimate the distances to ~48 million stars detected by the Sloan Digital Sky Survey (SDSS) and map their three-dimensional number density distribution in the Galaxy.
Abstract: Using the photometric parallax method we estimate the distances to ~48 million stars detected by the Sloan Digital Sky Survey (SDSS) and map their three-dimensional number density distribution in the Galaxy. The currently available data sample the distance range from 100 pc to 20 kpc and cover 6500 deg² of sky, mostly at high Galactic latitudes (|b| > 25). These stellar number density maps allow an investigation of the Galactic structure with no a priori assumptions about the functional form of its components. The data show strong evidence for a Galaxy consisting of an oblate halo, a disk component, and a number of localized overdensities. The number density distribution of stars as traced by M dwarfs in the solar neighborhood (D < 2 kpc) is well fit by two exponential disks (the thin and thick disk) with scale heights and lengths, bias corrected for an assumed 35% binary fraction, of H_1 = 300 pc and L_1 = 2600 pc, and H_2 = 900 pc and L_2 = 3600 pc, and local thick-to-thin disk density normalization ρ_thick(R☉)/ρ_thin(R☉) = 12%. We use the stars near main-sequence turnoff to measure the shape of the Galactic halo. We find a strong preference for oblate halo models, with best-fit axis ratio c/a = 0.64, ρ_H ∝ r^−2.8 power-law profile, and the local halo-to-thin disk normalization of 0.5%. Based on a series of Monte Carlo simulations, we estimate the errors of derived model parameters not to be larger than ~20% for the disk scales and ~10% for the density normalization, with largest contributions to error coming from the uncertainty in calibration of the photometric parallax relation and poorly constrained binary fraction. While generally consistent with the above model, the measured density distribution shows a number of statistically significant localized deviations. In addition to known features, such as the Monoceros stream, we detect two overdensities in the thick disk region at cylindrical galactocentric radii and heights (R,Z) ~ (6.5,1.5) kpc and (R,Z) ~ (9.5,0.8) kpc and a remarkable density enhancement in the halo covering over 1000 deg² of sky toward the constellation of Virgo, at distances of ~6-20 kpc. Compared to counts in a region symmetric with respect to the l = 0° line and with the same Galactic latitude, the Virgo overdensity is responsible for a factor of 2 number density excess and may be a nearby tidal stream or a low-surface brightness dwarf galaxy merging with the Milky Way. The u − g color distribution of stars associated with it implies metallicity lower than that of thick disk stars and consistent with the halo metallicity distribution. After removal of the resolved overdensities, the remaining data are consistent with a smooth density distribution; we detect no evidence of further unresolved clumpy substructure at scales ranging from ~50 pc in the disk to ~1-2 kpc in the halo.
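The quoted scale heights, scale lengths, and thick-to-thin normalization refer to a double-exponential disk decomposition; the parametrization below is a sketch of that standard form (ignoring the small solar offset from the Galactic plane), not a transcription of the paper's full model.

```latex
% Standard double-exponential (thin + thick) disk parametrization behind the
% quoted H (scale height), L (scale length), and normalization ratio.
\[
\rho_{\mathrm{disk}}(R,Z) =
\rho(R_{\odot},0)\left[
e^{-\frac{R-R_{\odot}}{L_{1}}-\frac{|Z|}{H_{1}}}
+ \frac{\rho_{\mathrm{thick}}(R_{\odot})}{\rho_{\mathrm{thin}}(R_{\odot})}\,
e^{-\frac{R-R_{\odot}}{L_{2}}-\frac{|Z|}{H_{2}}}
\right].
\]
```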

Journal ArticleDOI
TL;DR: Continuous glucose monitoring can be associated with improved glycemic control in adults with type 1 diabetes and further work is needed to identify barriers to effectiveness of continuous monitoring in children and adolescents.
Abstract: BACKGROUND The value of continuous glucose monitoring in the management of type 1 diabetes mellitus has not been determined. METHODS In a multicenter clinical trial, we randomly assigned 322 adults and children who were already receiving intensive therapy for type 1 diabetes to a group with continuous glucose monitoring or to a control group performing home monitoring with a blood glucose meter. All the patients were stratified into three groups according to age and had a glycated hemoglobin level of 7.0 to 10.0%. The primary outcome was the change in the glycated hemoglobin level at 26 weeks. RESULTS The changes in glycated hemoglobin levels in the two study groups varied markedly according to age group (P=0.003), with a significant difference among patients 25 years of age or older that favored the continuous-monitoring group (mean difference in change, -0.53%; 95% confidence interval [CI], -0.71 to -0.35; P<0.001). The between-group difference was not significant among those who were 15 to 24 years of age (mean difference, 0.08; 95% CI, -0.17 to 0.33; P=0.52) or among those who were 8 to 14 years of age (mean difference, -0.13; 95% CI, -0.38 to 0.11; P=0.29). Secondary glycated hemoglobin outcomes were better in the continuous-monitoring group than in the control group among the oldest and youngest patients but not among those who were 15 to 24 years of age. The use of continuous glucose monitoring averaged 6.0 or more days per week for 83% of patients 25 years of age or older, 30% of those 15 to 24 years of age, and 50% of those 8 to 14 years of age. The rate of severe hypoglycemia was low and did not differ between the two study groups; however, the trial was not powered to detect such a difference. CONCLUSIONS Continuous glucose monitoring can be associated with improved glycemic control in adults with type 1 diabetes. Further work is needed to identify barriers to effectiveness of continuous monitoring in children and adolescents. (ClinicalTrials.gov number, NCT00406133.)

Journal ArticleDOI
TL;DR: In this article, the authors study the effect of lack of trust on stock market participation and find that less trusting individuals are less likely to buy stock and, conditional on buying stock, they will buy less.
Abstract: We study the effect that a general lack of trust can have on stock market participation. In deciding whether to buy stocks, investors factor in the risk of being cheated. The perception of this risk is a function of the objective characteristics of the stocks and the subjective characteristics of the investor. Less trusting individuals are less likely to buy stock and, conditional on buying stock, they will buy less. In Dutch and Italian micro data, as well as in cross-country data, we find evidence consistent with lack of trust being an important factor in explaining the limited participation puzzle. The decision to invest in stocks requires not only an assessment of the risk–return trade-off given the existing data, but also an act of faith (trust) that the data in our possession are reliable and that the overall system is fair. Episodes like the collapse of Enron may change not only the distribution of expected payoffs, but also the fundamental trust in the system that delivers those payoffs. Most of us will not enter a three-card game played on the street, even after observing a lot of rounds (and thus getting an estimate of the "true" distribution of payoffs). The reason is that we do not trust the fairness of the game (and the person playing it). In this paper, we claim that for many people, especially people unfamiliar with finance, the stock market is not intrinsically different from the three-card game. They need to have trust in the fairness of the game and in the reliability of the numbers to invest in it. We focus on trust to explain differences in stock market participation across individuals and across countries. We define trust as the subjective probability individuals attribute to the possibility of being cheated. This subjective probability is partly based on objective