Showing papers by "Brigham Young University" published in 2013


Journal ArticleDOI
TL;DR: In addition to the APOE locus (encoding apolipoprotein E), 19 loci reached genome-wide significance (P < 5 × 10⁻⁸) in the combined stage 1 and stage 2 analysis, of which 11 are newly associated with Alzheimer's disease.
Abstract: Eleven susceptibility loci for late-onset Alzheimer's disease (LOAD) were identified by previous studies; however, a large portion of the genetic risk for this disease remains unexplained. We conducted a large, two-stage meta-analysis of genome-wide association studies (GWAS) in individuals of European ancestry. In stage 1, we used genotyped and imputed data (7,055,881 SNPs) to perform meta-analysis on 4 previously published GWAS data sets consisting of 17,008 Alzheimer's disease cases and 37,154 controls. In stage 2, 11,632 SNPs were genotyped and tested for association in an independent set of 8,572 Alzheimer's disease cases and 11,312 controls. In addition to the APOE locus (encoding apolipoprotein E), 19 loci reached genome-wide significance (P < 5 × 10⁻⁸) in the combined stage 1 and stage 2 analysis, of which 11 are newly associated with Alzheimer's disease.
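Two-stage results like these are typically combined by fixed-effect, inverse-variance meta-analysis and judged against the conventional genome-wide threshold P < 5 × 10⁻⁸. A minimal sketch of that arithmetic, with invented effect sizes and standard errors rather than values from the study:

```python
# Sketch of fixed-effect inverse-variance meta-analysis for one SNP across
# two stages, plus the genome-wide significance check. Betas and SEs below
# are illustrative placeholders, not numbers from the paper.
import numpy as np
from scipy import stats

GENOME_WIDE_ALPHA = 5e-8  # conventional GWAS threshold

def fixed_effect_meta(betas, ses):
    """Inverse-variance weighted effect size, SE, and two-sided P."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    beta = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    p = 2 * stats.norm.sf(abs(beta / se))
    return beta, se, p

# Hypothetical stage 1 and stage 2 estimates for a single SNP.
beta, se, p = fixed_effect_meta(betas=[0.12, 0.10], ses=[0.02, 0.03])
print(f"combined beta={beta:.3f}, SE={se:.3f}, P={p:.2e}, "
      f"genome-wide significant: {p < GENOME_WIDE_ALPHA}")
```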

3,726 citations


Journal ArticleDOI
TL;DR: Heterozygous rare variants in TREM2 are associated with a significant increase in the risk of Alzheimer's disease.
Abstract: BACKGROUND: Homozygous loss-of-function mutations in TREM2, encoding the triggering receptor expressed on myeloid cells 2 protein, have previously been associated with an autosomal recessive form of early-onset dementia. METHODS: We used genome, exome, and Sanger sequencing to analyze the genetic variability in TREM2 in a series of 1092 patients with Alzheimer's disease and 1107 controls (the discovery set). We then performed a meta-analysis on imputed data for the TREM2 variant rs75932628 (predicted to cause a R47H substitution) from three genomewide association studies of Alzheimer's disease and tested for the association of the variant with disease. We genotyped the R47H variant in an additional 1887 cases and 4061 controls. We then assayed the expression of TREM2 across different regions of the human brain and identified genes that are differentially expressed in a mouse model of Alzheimer's disease and in control mice. RESULTS: We found significantly more variants in exon 2 of TREM2 in patients with Alzheimer's disease than in controls in the discovery set (P=0.02). There were 22 variant alleles in 1092 patients with Alzheimer's disease and 5 variant alleles in 1107 controls (P<0.001). The most commonly associated variant, rs75932628 (encoding R47H), showed highly significant association with Alzheimer's disease (P<0.001). Meta-analysis of rs75932628 genotypes imputed from genomewide association studies confirmed this association (P=0.002), as did direct genotyping of an additional series of 1887 patients with Alzheimer's disease and 4061 controls (P<0.001). Trem2 expression differed between control mice and a mouse model of Alzheimer's disease. CONCLUSIONS: Heterozygous rare variants in TREM2 are associated with a significant increase in the risk of Alzheimer's disease. (Funded by Alzheimer's Research UK and others.).
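The discovery-set allele counts quoted above (22 variant alleles in 1092 cases vs. 5 in 1107 controls) lend themselves to a standard 2×2 exact test. A quick sketch; the allele denominators (two alleles per sequenced individual) are an assumption for illustration, not necessarily the paper's exact tabulation:

```python
from scipy.stats import fisher_exact

# Variant allele counts from the abstract; denominators of 2 alleles per
# person are an assumption made for this illustration.
cases_variant, cases_alleles = 22, 2 * 1092
controls_variant, controls_alleles = 5, 2 * 1107

table = [[cases_variant, cases_alleles - cases_variant],
         [controls_variant, controls_alleles - controls_variant]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, P = {p_value:.4f}")  # compare with the reported P < 0.001
```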

2,333 citations


Journal ArticleDOI
Christopher J L Murray1, Jerry Puthenpurakal Abraham2, Mohammed K. Ali3, Miriam Alvarado1, Charles Atkinson1, Larry M. Baddour4, David Bartels5, Emelia J. Benjamin6, Kavi Bhalla5, Gretchen L. Birbeck7, Ian Bolliger1, Roy Burstein1, Emily Carnahan1, Honglei Chen8, David Chou1, Sumeet S. Chugh9, Aaron Cohen10, K. Ellicott Colson1, Leslie T. Cooper11, William G. Couser12, Michael H. Criqui13, Kaustubh Dabhadkar3, Nabila Dahodwala14, Goodarz Danaei5, Robert P. Dellavalle15, Don C. Des Jarlais16, Daniel Dicker1, Eric L. Ding5, E. Ray Dorsey17, Herbert C. Duber1, Beth E. Ebel12, Rebecca E. Engell1, Majid Ezzati18, David T. Felson6, Mariel M. Finucane5, Seth Flaxman19, Abraham D. Flaxman1, Thomas D. Fleming1, Mohammad H. Forouzanfar1, Greg Freedman1, Michael Freeman1, Sherine E. Gabriel4, Emmanuela Gakidou1, Richard F. Gillum20, Diego Gonzalez-Medina1, Richard A. Gosselin21, Bridget F. Grant8, Hialy R. Gutierrez22, Holly Hagan23, Rasmus Havmoeller24, Rasmus Havmoeller9, Howard J. Hoffman8, Kathryn H. Jacobsen25, Spencer L. James1, Rashmi Jasrasaria1, Sudha Jayaraman5, Nicole E. Johns1, Nicholas J Kassebaum12, Shahab Khatibzadeh5, Lisa M. Knowlton5, Qing Lan, Janet L Leasher26, Stephen S Lim1, John K Lin5, Steven E. Lipshultz27, Stephanie J. London8, Rafael Lozano, Yuan Lu5, Michael F. Macintyre1, Leslie Mallinger1, Mary M. McDermott28, Michele Meltzer29, George A. Mensah8, Catherine Michaud30, Ted R. Miller31, Charles Mock12, Terrie E. Moffitt32, Ali A. Mokdad1, Ali H. Mokdad1, Andrew E. Moran22, Dariush Mozaffarian33, Dariush Mozaffarian5, Tasha B. Murphy1, Mohsen Naghavi1, K.M. Venkat Narayan3, Robert G. Nelson8, Casey Olives12, Saad B. Omer3, Katrina F Ortblad1, Bart Ostro34, Pamela M. Pelizzari35, David Phillips1, C. Arden Pope36, Murugesan Raju37, Dharani Ranganathan1, Homie Razavi, Beate Ritz38, Frederick P. Rivara12, Thomas Roberts1, Ralph L. Sacco27, Joshua A. Salomon5, Uchechukwu K.A. Sampson39, Ella Sanman1, Amir Sapkota40, David C. Schwebel41, Saeid Shahraz42, Kenji Shibuya43, Rupak Shivakoti17, Donald H. Silberberg14, Gitanjali M Singh5, David Singh44, Jasvinder A. Singh41, David A. Sleet, Kyle Steenland3, Mohammad Tavakkoli5, Jennifer A. Taylor45, George D. Thurston23, Jeffrey A. Towbin46, Monica S. Vavilala12, Theo Vos1, Gregory R. Wagner47, Martin A. Weinstock48, Marc G. Weisskopf5, James D. Wilkinson27, Sarah Wulf1, Azadeh Zabetian3, Alan D. Lopez49 
14 Aug 2013-JAMA
TL;DR: To measure the burden of diseases, injuries, and leading risk factors in the United States from 1990 to 2010 and to compare these measurements with those of the 34 countries in the Organisation for Economic Co-operation and Development (OECD), systematic analysis of descriptive epidemiology was used.
Abstract: Importance Understanding the major health problems in the United States and how they are changing over time is critical for informing national health policy. Objectives To measure the burden of diseases, injuries, and leading risk factors in the United States from 1990 to 2010 and to compare these measurements with those of the 34 countries in the Organisation for Economic Co-operation and Development (OECD). Design We used the systematic analysis of descriptive epidemiology of 291 diseases and injuries, 1160 sequelae of these diseases and injuries, and 67 risk factors or clusters of risk factors from 1990 to 2010 for 187 countries developed for the Global Burden of Disease 2010 Study to describe the health status of the United States and to compare US health outcomes with those of 34 OECD countries. Years of life lost due to premature mortality (YLLs) were computed by multiplying the number of deaths at each age by a reference life expectancy at that age. Years lived with disability (YLDs) were calculated by multiplying prevalence (based on systematic reviews) by the disability weight (based on population-based surveys) for each sequela; disability in this study refers to any short- or long-term loss of health. Disability-adjusted life-years (DALYs) were estimated as the sum of YLDs and YLLs. Deaths and DALYs related to risk factors were based on systematic reviews and meta-analyses of exposure data and relative risks for risk-outcome pairs. Healthy life expectancy (HALE) was used to summarize overall population health, accounting for both length of life and levels of ill health experienced at different ages. Results US life expectancy for both sexes combined increased from 75.2 years in 1990 to 78.2 years in 2010; during the same period, HALE increased from 65.8 years to 68.1 years. The diseases and injuries with the largest number of YLLs in 2010 were ischemic heart disease, lung cancer, stroke, chronic obstructive pulmonary disease, and road injury. Age-standardized YLL rates increased for Alzheimer disease, drug use disorders, chronic kidney disease, kidney cancer, and falls. The diseases with the largest number of YLDs in 2010 were low back pain, major depressive disorder, other musculoskeletal disorders, neck pain, and anxiety disorders. As the US population has aged, YLDs have comprised a larger share of DALYs than have YLLs. The leading risk factors related to DALYs were dietary risks, tobacco smoking, high body mass index, high blood pressure, high fasting plasma glucose, physical inactivity, and alcohol use. Among 34 OECD countries between 1990 and 2010, the US rank for the age-standardized death rate changed from 18th to 27th, for the age-standardized YLL rate from 23rd to 28th, for the age-standardized YLD rate from 5th to 6th, for life expectancy at birth from 20th to 27th, and for HALE from 14th to 26th. Conclusions and Relevance From 1990 to 2010, the United States made substantial progress in improving health. Life expectancy at birth and HALE increased, all-cause death rates at all ages decreased, and age-specific rates of years lived with disability remained stable. However, morbidity and chronic disability now account for nearly half of the US health burden, and improvements in population health in the United States have not kept pace with advances in population health in other wealthy nations.
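The burden metrics defined in the Design section reduce to simple bookkeeping: YLL = deaths × reference life expectancy at the age of death, YLD = prevalence × disability weight, and DALY = YLL + YLD. A sketch with made-up numbers for one hypothetical cause:

```python
# Burden-of-disease arithmetic as described in the abstract; all numbers
# below are invented for illustration.
deaths_by_age = {40: 1000, 60: 5000, 80: 9000}               # deaths at each age
reference_life_expectancy = {40: 45.0, 60: 27.0, 80: 11.0}   # years remaining

yll = sum(d * reference_life_expectancy[age] for age, d in deaths_by_age.items())

prevalence = 250_000        # people living with the sequela
disability_weight = 0.12    # 0 = full health, 1 = equivalent to death
yld = prevalence * disability_weight

daly = yll + yld
print(f"YLL={yll:,.0f}  YLD={yld:,.0f}  DALY={daly:,.0f}")
```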

2,159 citations


Journal ArticleDOI
TL;DR: A current snapshot of high-throughput computational materials design is provided, and the challenges and opportunities that lie ahead are highlighted.
Abstract: High-throughput computational materials design is an emerging area of materials science. By combining advanced thermodynamic and electronic-structure methods with intelligent data mining and database construction, and exploiting the power of current supercomputer architectures, scientists generate, manage and analyse enormous data repositories for the discovery of novel materials. In this Review we provide a current snapshot of this rapidly evolving field, and highlight the challenges and opportunities that lie ahead.

1,568 citations


Journal ArticleDOI
TL;DR: It is shown how to generate randomly symmetric structures, and how to introduce 'smart' variation operators, learning about preferable local environments, that substantially improve the efficiency of the evolutionary algorithm USPEX and allow reliable prediction of structures with up to ∼200 atoms in the unit cell.
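USPEX's structure search is an evolutionary loop: generate candidate structures, rank them by energy after relaxation, and build the next generation with variation operators. The skeleton below is a schematic sketch of that loop on a toy objective, not USPEX code; random_structure, mutate, and energy stand in for symmetric random generation, the "smart" variation operators, and ab initio relaxation, respectively.

```python
# Generic evolutionary-search skeleton of the kind USPEX implements for
# crystal structure prediction; schematic only.
import random

def random_structure():
    return [random.uniform(0, 1) for _ in range(6)]     # toy "lattice" genome

def mutate(s):
    return [x + random.gauss(0, 0.05) for x in s]       # stand-in variation operator

def energy(s):
    return sum((x - 0.5) ** 2 for x in s)               # toy objective; real runs relax with DFT

population = [random_structure() for _ in range(20)]
for generation in range(50):
    population.sort(key=energy)
    survivors = population[:10]                         # keep the fittest half
    children = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + children

print("best energy:", energy(min(population, key=energy)))
```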

1,010 citations


Journal ArticleDOI
TL;DR: The results of this study support the conclusion that a technology enhanced flipped classroom was both effective and scalable; it better facilitated learning than the simulation-based training and students found this approach to be more motivating in that it allowed for greater differentiation of instruction.
Abstract: The purpose of this research was to explore how technology can be used to teach technological skills and to determine what benefit flipping the classroom might have for students taking an introductory-level college course on spreadsheets in terms of student achievement and satisfaction with the class. A pretest-posttest quasi-experimental mixed methods design was utilized to determine any differences in student achievement that might be associated with the instructional approach being used. In addition, the scalability of each approach was evaluated along with students’ perceptions of these approaches to determine the effect each intervention might have on a student’s motivation to learn. The simulation-based instruction tested in this study was found to be an extremely scalable solution but less effective than the regular classroom and flipped classroom approaches in terms of student learning. While students did demonstrate learning gains, the process focus of the simulation’s instruction and assessments frustrated students and decreased their motivation to learn. Students’ attitudes towards the topic, their willingness to refer the course to others, and the likelihood that they would take another course like this were considerably lower than those of students in the flipped or regular classroom situations. The results of this study support the conclusion that a technology-enhanced flipped classroom was both effective and scalable; it better facilitated learning than the simulation-based training, and students found this approach to be more motivating in that it allowed for greater differentiation of instruction.

826 citations


Journal ArticleDOI
TL;DR: The successful demonstration of electrophoresis and electroosmotic pumping in a microfluidic device provided a nonmechanical method for both fluid control and separation, and integration of multiple processes can be highly enabling for many applications.
Abstract: Microfluidics consist of microfabricated structures for liquid handling, with cross-sections in the 1–500 μm range, and small volume capacity (fL-nL). Capillary tubes connected with fittings,1 although utilizing small volumes, are not considered microfluidics for the purposes of this paper since they are not microfabricated. Likewise, millifluidic systems, made by conventional machining tools, are excluded due to their larger feature sizes (>500 μm). Though micromachined systems for gas chromatography were introduced in the 1970s,2 the field of microfluidics did not gain much traction until the 1990s.3 Silicon and glass were the original materials used, but then the focus shifted to include polymer substrates, and in particular, polydimethylsiloxane (PDMS). Since then the field has grown to encompass a wide variety of materials and applications. The successful demonstration of electrophoresis and electroosmotic pumping in a microfluidic device provided a nonmechanical method for both fluid control and separation.4 Laser induced fluorescence (LIF) enabled sensitive detection of fluorophores or fluorescently labeled molecules. The expanded availability of low-cost printing allowed for cheaper and quicker mask fabrication for use in soft lithography.5 Commercial microfluidic systems are now available from Abbott, Agilent, Caliper, Dolomite, Micralyne, Microfluidic Chip Shop, Micrux Technologies and Waters, as a few prominent examples. For a more thorough description of the history of microfluidics, we refer the reader to a number of comprehensive, specialized reviews,3, 6–11 as well as a more general 2006 review.12 The field of microfluidics offers many advantages compared to carrying out processes through bulk solution chemistry, the first of which relates to a lesson taught to every first-year chemistry student. Simply stated, diffusion is slow! Thus, the smaller the distance required for interaction, the faster it will be. Smaller channel dimensions also lead to smaller sample volumes (fL-nL), which can reduce the amount of sample or reagents required for testing and analysis. Reduced dimensions can also lead to portable devices to enable on-site testing (provided the associated hardware is similarly portable). Finally, integration of multiple processes (like labeling, purification, separation and detection) in a microfluidic device can be highly enabling for many applications. Microelectromechanical systems (MEMS) contain integrated electrical and mechanical parts that create a sensor or system. Applications of MEMS are ubiquitous, including automobiles, phones, video games and medical and biological sensors.13 Micro-total analysis systems, also known as labs-on-a-chip, are the chemical analogue of MEMS, as integrated microfluidic devices that are capable of automating multiple processes relevant to laboratory sciences. For example, a typical lab-on-a-chip system might selectively purify a complex mixture (through filtering, antibody capture, etc.), then separate target components and detect them. Microfluidic devices consist of a core of common components. Areas defined by empty space, such as reservoirs (wells), chambers and microchannels, are central to microfluidic systems. Positive features, created by areas of solid material, add increased functionality to a chip and can consist of membranes, monoliths, pneumatic controls, beams and pillars. Given the ubiquitous nature of negative components, and microchannels in particular, we focus here on a few of their properties. Microfluidic channels have small overall volumes, laminar flow and a large surface-to-volume ratio. Dimensions of a typical separation channel in microchip electrophoresis (μCE) are: 50 μm width, 15 μm height and 5 cm length for a volume of 37.5 nL. Flow in these devices is normally nonturbulent due to low Reynolds numbers. For example, water flowing at 20°C in the above channel at 1 μL/min (2.22 cm/s) results in a Reynolds number of ~0.5, where <2000 is laminar flow. Since flow is nonturbulent, mixing is normally diffusion-limited. Small channel sizes also have a high surface-to-volume ratio, leading to different characteristics from what are commonly found in bulk volumes. The material surface can be used to manipulate fluid movement (such as by electroosmotic flow, EOF) and surface interactions. For a solution in contact with a charged surface, a double layer of charge is created as oppositely charged ions are attracted to the surface charges. This electrical double layer consists of an inner rigid or Stern layer and an outer diffuse layer. An electrostatic potential known as the zeta potential is formed, with the magnitude of the potential decreasing as distance from the surface increases. The electrical double layer is the basis for EOF, wherein an applied voltage causes the loosely bound diffuse layer to move towards an electrode, dragging the bulk solution along. Charges on the exposed surface also exert a greater influence on the fluid in a channel as its size decreases. Larger surface-to-volume ratios are more prone to nonspecific adsorption and surface fouling. In particular, non-charged and hydrophobic microdevice surfaces can cause proteins in solution to denature and stick. We focus our review on advances in microfluidic systems since 2008. In doing this, we occasionally must cover foundational work in microfluidics that is considerably less recent. We do not focus on chemical synthesis applications of microfluidics although it is an expanding area, nor do we delve into lithography, device fabrication or production costs. Our specific emphasis herein is on four areas within microfluidics: properties and applications of commonly used materials, basic functions, integration, and selected applications. For each of these four topics we provide a concluding section on opportunities for future development, and at the end of this review, we offer general conclusions and prospects for future work in the field. Due to the considerable scope of the field of microfluidics, we limit our discussion to selected examples from each area, but cite in-depth reviews for the reader to turn to for further information about specific topics. We also refer the reader to recent comprehensive reviews on advances in lab-on-a-chip systems by Arora et al.10 and Kovarik et al.14
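The abstract's laminar-flow example can be reproduced directly: convert the 1 μL/min flow rate to a mean velocity in the 50 μm × 15 μm channel, form the hydraulic diameter of the rectangular cross-section, and evaluate Re = ρvD_h/μ. A short check using standard properties of water at 20°C:

```python
# Reproducing the review's worked example: water at 20 C flowing at
# 1 uL/min through a 50 um x 15 um microchannel.
width, height = 50e-6, 15e-6                 # m
area = width * height                        # m^2
q = 1e-9 / 60.0                              # 1 uL/min in m^3/s
velocity = q / area                          # ~0.022 m/s = 2.22 cm/s

d_h = 2 * width * height / (width + height)  # hydraulic diameter, m
rho, mu = 998.0, 1.0e-3                      # water at 20 C: kg/m^3, Pa*s
reynolds = rho * velocity * d_h / mu
print(f"v = {velocity*100:.2f} cm/s, Re = {reynolds:.2f}")  # ~0.5, well below 2000
```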

736 citations


Journal ArticleDOI
TL;DR: Six cases of institutional adoption of blended learning are investigated to examine the key issues that can guide university administrators interested in this endeavor and identify and elaborate on core issues related to institutional strategy, structure, and support.
Abstract: There has been rapid growth in blended learning implementation and research focused on course-level issues such as improved learning outcomes, but very limited research focused on institutional policy and adoption issues. More institutional-level blended learning research is needed to guide institutions of higher education in strategically adopting and implementing blended learning on campus. This research investigates six cases of institutional adoption of blended learning to examine the key issues that can guide university administrators interested in this endeavor. Cases were selected to represent institutions at various stages of blended learning adoption including (1) awareness/exploration, (2) adoption/early implementation, and (3) mature implementation/growth. Cases are used to identify and elaborate on core issues related to institutional strategy, structure, and support, spanning the adoption stages.

597 citations


Journal ArticleDOI
TL;DR: This paper examined the relation between managerial ability and earnings quality and found that more able managers are associated with fewer subsequent restatements, higher earnings and accruals persistence, lower errors in the bad debt provision, and higher quality accrual estimations.
Abstract: We examine the relation between managerial ability and earnings quality. We find that earnings quality is positively associated with managerial ability. Specifically, more able managers are associated with fewer subsequent restatements, higher earnings and accruals persistence, lower errors in the bad debt provision, and higher quality accrual estimations. The results are consistent with the premise that managers can and do impact the quality of the judgments and estimates used to form earnings.

567 citations


Journal ArticleDOI
TL;DR: A review of the history of the microfoundations discussion can be found in this article, where the authors argue that questions of social aggregation and emergence need to be center stage in any discussion of microfoundations.
Abstract: In the extant organizational, management, and strategy literatures there are now frequent calls for microfoundations. However, there is little consensus on what microfoundations are and what they are not. In this paper we first (briefly) review the history of the microfoundations discussion and then discuss what microfoundations are and are not. We highlight four misconceptions or "half-truths" about microfoundations: (1) that microfoundations are psychology, human resources, or micro-organizational behavior, (2) that borrowed concepts constitute microfoundations, (3) that microfoundations lead to an infinite regress, and (4) that microfoundations deny the role of structure and institutions. We discuss both the partial truths and the misconceptions associated with the above understandings of microfoundations, and we argue that questions of social aggregation and emergence need to be center stage in any discussion of microfoundations. We link our arguments about microfoundations and aggregation with closely related calls for new areas of research, such as "behavioral strategy" and the domain of multilevel human capital research. We discuss various forms of social aggregation and also highlight associated opportunities for future research, with a specific focus on the origins of capabilities and competitive advantage.

525 citations


Journal ArticleDOI
TL;DR: How financial literacy is measured in the current literature is considered, and how well the existing literature addresses whether financial education improves financial literacy or personal financial outcomes is examined.
Abstract: In this article we review the literature on financial literacy, financial education, and consumer financial outcomes. We consider how financial literacy is measured in the current literature, and examine how well the existing literature addresses whether financial education improves financial literacy or personal financial outcomes. We discuss the extent to which a competitive market provides incentives for firms to educate consumers or offer products that facilitate informed choice. We review the literature on alternative policies to improve financial outcomes, and compare the evidence to evidence on the efficacy and cost of financial education. Finally, we discuss directions for future research.

Journal ArticleDOI
TL;DR: The results suggest that more caution should be exercised in genomic medicine settings when analyzing individual genomes, including interpreting positive and negative findings with scrutiny, especially for indels.
Abstract: Background: To facilitate the clinical implementation of genomic medicine by next-generation sequencing, it will be critically important to obtain accurate and consistent variant calls on personal genomes. Multiple software tools for variant calling are available, but it is unclear how comparable these tools are or what their relative merits in real-world scenarios might be. Methods: We sequenced 15 exomes from four families using commercial kits (Illumina HiSeq 2000 platform and Agilent SureSelect version 2 capture kit), with approximately 120X mean coverage. We analyzed the raw data using near-default parameters with five different alignment and variant-calling pipelines (SOAP, BWA-GATK, BWA-SNVer, GNUMAP, and BWA-SAMtools). We additionally sequenced a single whole genome using the sequencing and analysis pipeline from Complete Genomics (CG), with 95% of the exome region being covered by 20 or more reads per base. Finally, we validated 919 single-nucleotide variations (SNVs) and 841 insertions and deletions (indels), including similar fractions of GATK-only, SOAP-only, and shared calls, on the MiSeq platform by amplicon sequencing with approximately 5000X mean coverage. Results: SNV concordance between five Illumina pipelines across all 15 exomes was 57.4%, while 0.5 to 5.1% of variants were called as unique to each pipeline. Indel concordance was only 26.8% between three indel-calling pipelines, even after left-normalizing and intervalizing genomic coordinates by 20 base pairs. There were 11% of CG variants falling within targeted regions in exome sequencing that were not called by any of the Illumina-based exome analysis pipelines. Based on targeted amplicon sequencing on the MiSeq platform, 97.1%, 60.2%, and 99.1% of the GATK-only, SOAP-only and shared SNVs could be validated, but only 54.0%, 44.6%, and 78.1% of the GATK-only, SOAP-only and shared indels could be validated. Additionally, our analysis of two families (one with four individuals and the other with seven) demonstrated additional accuracy gained in variant discovery by having access to genetic data from a multi-generational family. Conclusions: Our results suggest that more caution should be exercised in genomic medicine settings when analyzing individual genomes, including interpreting positive and negative findings with scrutiny, especially for indels. We advocate for renewed collection and sequencing of multi-generational families to increase the overall accuracy of whole genomes.
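Concordance figures like the 57.4% SNV agreement above reduce to set arithmetic on variant keys. A toy sketch with placeholder call sets for three of the pipelines (not real variants from the study):

```python
# Variants keyed by (chrom, pos, ref, alt); a call is concordant if every
# pipeline reports it. The call sets below are invented placeholders.
calls = {
    "BWA-GATK":     {("1", 1001, "A", "G"), ("2", 5002, "T", "C"), ("3", 42, "G", "A")},
    "SOAP":         {("1", 1001, "A", "G"), ("3", 42, "G", "A")},
    "BWA-SAMtools": {("1", 1001, "A", "G"), ("2", 5002, "T", "C")},
}

union = set.union(*calls.values())
shared = set.intersection(*calls.values())
print(f"concordance: {len(shared)}/{len(union)} = {len(shared)/len(union):.1%}")
for name, s in calls.items():
    unique = s - set.union(*(v for k, v in calls.items() if k != name))
    print(f"{name}: {len(unique)} pipeline-unique calls")
```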

Journal ArticleDOI
TL;DR: The known genetics of early- and late-onset Alzheimer's disease, including the APOE association, are reviewed; Alzheimer's disease is a complex disorder with environmental and genetic components leading to disease.
Abstract: Alzheimer's disease is the most common form of dementia and is the only top 10 cause of death in the United States that lacks disease-altering treatments. It is a complex disorder with environmental and genetic components. There are two major types of Alzheimer's disease, early onset and the more common late onset. The genetics of early-onset Alzheimer's disease are largely understood with variants in three different genes leading to disease. In contrast, while several common alleles associated with late-onset Alzheimer's disease, including APOE, have been identified using association studies, the genetics of late-onset Alzheimer's disease are not fully understood. Here we review the known genetics of early- and late-onset Alzheimer's disease.

Journal ArticleDOI
TL;DR: Evidence is found that a sense of belonging predicts how meaningful life is perceived to be, including in independent evaluations of participants' essays on meaning in life.
Abstract: In four methodologically diverse studies (N = 644), we found correlational (Study 1), longitudinal (Study 2), and experimental (Studies 3 and 4) evidence that a sense of belonging predicts how meaningful life is perceived to be. In Study 1 (n = 126), we found a strong positive correlation between sense of belonging and meaningfulness. In Study 2 (n = 248), we found that initial levels of sense of belonging predicted perceived meaningfulness of life, obtained 3 weeks later. Furthermore, initial sense of belonging predicted independent evaluations of participants' essays on meaning in life. In Studies 3 (n = 105) and 4 (n = 165), we primed participants with belongingness, social support, or social value and found that those primed with belongingness (Study 3) or who increased in belongingness (Study 4) reported the highest levels of perceived meaning. In Study 4, belonging mediated the relationship between experimental condition and meaning.

Journal ArticleDOI
TL;DR: It is found that the agricultural and early successional land uses harbored unique soil bacterial communities that exhibited distinct temporal patterns that were likely a product of complex interactions between the soil environment and the more diverse plant community.
Abstract: Although numerous studies have investigated changes in soil microbial communities across space, questions about the temporal variability in these communities and how this variability compares across soils have received far less attention. We collected soils on a monthly basis (May to November) from replicated plots representing three land-use types (conventional and reduced-input row crop agricultural plots and early successional grasslands) maintained at a research site in Michigan, USA. Using barcoded pyrosequencing of the 16S rRNA gene, we found that the agricultural and early successional land uses harbored unique soil bacterial communities that exhibited distinct temporal patterns. α-Diversity, the numbers of taxa or lineages, was significantly influenced by the sampling month, with the temporal variability in α-diversity exceeding the variability between land-use types. In contrast, differences in community composition across land-use types were reasonably constant across the 7-month period, suggesting that the time of sampling is less important when assessing β-diversity patterns. Communities in the agricultural soils were most variable over time, and the changes were significantly correlated with soil moisture and temperature. Temporal shifts in bacterial community composition within the successional grassland plots were less predictable and are likely a product of complex interactions between the soil environment and the more diverse plant community. Temporal variability needs to be carefully assessed when comparing microbial diversity across soil types, and the temporal patterns in microbial community structure cannot necessarily be generalized across land uses, even if those soils are exposed to the same climatic conditions.
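The α-diversity tracked month-to-month above is typically a diversity index computed per sample from taxon counts. A minimal Shannon-index (H') sketch with invented read counts:

```python
# Shannon alpha-diversity H' = -sum(p_i * ln(p_i)) over taxon proportions.
# Counts below are invented for illustration.
import numpy as np

def shannon(counts):
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

may_sample = [120, 80, 40, 10, 5]       # reads per taxon, one plot in May
november_sample = [60, 55, 50, 45, 40]  # a more even community in November
print(f"H'(May) = {shannon(may_sample):.2f}, H'(Nov) = {shannon(november_sample):.2f}")
```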

Journal ArticleDOI
01 Nov 2013
TL;DR: In this paper, the authors describe a mathematical model for modifying the pattern to accommodate material thickness in the context of the design, modeling, and testing of a deployable system inspired by an origami six-sided flasher model.
Abstract: The purpose of this work is to create deployment systems with a large ratio of stowed-to-deployed diameter. Deployment from a compact form to a final flat state can be achieved through origami-inspired folding of panels. There are many models capable of this motion when folded in a material with negligible thickness; however, when the application requires the folding of thick, rigid panels, attention must be paid to the effect of material thickness not only on the final folded state, but also during the folding motion (i.e., the panels must not be required to flex to attain the final folded form). The objective is to develop new methods for deployment from a compact folded form to a large circular array (or other final form). This paper describes a mathematical model for modifying the pattern to accommodate material thickness in the context of the design, modeling, and testing of a deployable system inspired by an origami six-sided flasher model. The model is demonstrated in hardware as a 1/20th scale prototype of a deployable solar array for space applications. The resulting prototype has a ratio of stowed-to-deployed diameter of 9.2 (or 1.25 m deployed outer diameter to 0.136 m stowed outer diameter).
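The headline ratio follows directly from the quoted prototype dimensions:

```python
# Checking the reported stowed-to-deployed ratio from the abstract's numbers.
deployed_od = 1.25   # m, deployed outer diameter
stowed_od = 0.136    # m, stowed outer diameter
print(f"stowed-to-deployed ratio = {deployed_od / stowed_od:.1f}")  # ~9.2
```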

Journal ArticleDOI
TL;DR: eIF5A, like its bacterial ortholog EF-P, is proposed to stimulate the peptidyl transferase activity of the ribosome and facilitate the reactivity of poor substrates like Pro.

Journal ArticleDOI
TL;DR: In this article, the behavior of developmental CO2 Brayton turbomachinery in response to a fluctuating thermal input, much like the short-term transients experienced in solar environments, is analyzed.

Journal ArticleDOI
TL;DR: In this paper, a family of moduli spaces, a virtual cycle, and a corresponding cohomological field theory associated to the singularity are described for any nondegenerate, quasi-homogeneous hypersurface singularity.
Abstract: For any nondegenerate, quasi-homogeneous hypersurface singularity, we describe a family of moduli spaces, a virtual cycle, and a corresponding cohomological field theory associated to the singularity. This theory is analogous to Gromov-Witten theory and generalizes the theory of r-spin curves, which corresponds to the simple singularity A_{r-1}. We also resolve two outstanding conjectures of Witten. The first conjecture is that ADE-singularities are self-dual, and the second conjecture is that the total potential functions of ADE-singularities satisfy corresponding ADE-integrable hierarchies. Other cases of integrable hierarchies are also discussed.

Journal ArticleDOI
TL;DR: This work extends the definition of analysis-suitable T-splines to encompass unstructured control grids and develops basis functions which are smooth (rational) polynomials defined in terms of the Bezier extraction framework and which pass standard patch tests.

Journal ArticleDOI
24 Apr 2013-Neuron
TL;DR: In independent data sets, rs9877502 showed a strong association with risk for AD, tangle pathology, and global cognitive decline, illustrating how this endophenotype-based approach can be used to identify new AD risk loci.

Journal ArticleDOI
TL;DR: This article provides a tutorial on methods for analyzing longitudinal substance use data, focusing on Poisson, zero-inflated, and hurdle mixed models, which are types of hierarchical or multilevel models.
Abstract: Critical research questions in the study of addictive behaviors concern how these behaviors change over time: either as the result of intervention or in naturalistic settings. The combination of count outcomes that are often strongly skewed with many zeroes (e.g., days using, number of total drinks, number of drinking consequences) with repeated assessments (e.g., longitudinal follow-up after intervention or daily diary data) present challenges for data analyses. The current article provides a tutorial on methods for analyzing longitudinal substance use data, focusing on Poisson, zero-inflated, and hurdle mixed models, which are types of hierarchical or multilevel models. Two example datasets are used throughout, focusing on drinking-related consequences following an intervention and daily drinking over the past 30 days, respectively. Both datasets as well as R, SAS, Mplus, Stata, and SPSS code showing how to fit the models are available on a supplemental website.
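The tutorial's supplemental site provides R, SAS, Mplus, Stata, and SPSS code; as a rough Python analogue, here is a minimal zero-inflated Poisson fit on simulated count data using statsmodels. Note this sketch omits the random effects that make the article's models "mixed", so it illustrates only the zero-inflation idea:

```python
# Zero-inflated Poisson on simulated drinking-days-style counts with excess
# zeros; covariate, coefficients, and inflation rate are invented.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                 # e.g., an intervention covariate
lam = np.exp(0.5 + 0.4 * x)            # Poisson mean for the count process
structural_zero = rng.random(n) < 0.3  # 30% structural zeros
y = np.where(structural_zero, 0, rng.poisson(lam))

exog = sm.add_constant(x)
model = ZeroInflatedPoisson(y, exog, exog_infl=np.ones((n, 1)))
result = model.fit(disp=False)
print(result.params)  # inflation intercept plus count-model coefficients
```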

Journal ArticleDOI
TL;DR: Findings provided strong evidence of a Hispanic mortality advantage, with implications for conceptualizing and addressing racial/ethnic health disparities.
Abstract: To investigate the possibility of a Hispanic mortality advantage, we conducted a systematic review and meta-analysis of the published longitudinal literature reporting Hispanic individuals’ mortality from any cause compared with any other race/ethnicity. We searched MEDLINE, PubMed, EMBASE, HealthSTAR, and PsycINFO for published literature from January 1990 to July 2010. Across 58 studies (4 615 747 participants), Hispanic populations had a 17.5% lower risk of mortality compared with other racial groups (odds ratio = 0.825; P < .001; 95% confidence interval = 0.75, 0.91). The difference in mortality risk was greater among older populations and varied by preexisting health conditions, with effects apparent for initially healthy samples and those with cardiovascular diseases. The results also differed by racial group: Hispanics had lower overall risk of mortality than did non-Hispanic Whites and non-Hispanic Blacks, but overall higher risk of mortality than did Asian Americans. These findings provided strong evidence of a Hispanic mortality advantage, with implications for conceptualizing and addressing racial/ethnic health disparities.

Journal ArticleDOI
TL;DR: Multisite functional connectivity classification of autism outperformed chance using a simple leave-one-out classifier, but exhibited poorer accuracy than for single site results.
Abstract: Background: Systematic differences in functional connectivity MRI metrics have been consistently observed in autism, with predominantly decreased cortico-cortical connectivity. Previous attempts at single subject classification in high-functioning autism using whole brain point-to-point functional connectivity have yielded about 80% accurate classification of autism vs. control subjects across a wide age range. We attempted to replicate the method and results using the Autism Brain Imaging Data Exchange including resting state fMRI data obtained from 964 subjects and 16 separate international sites. Methods: For each of 964 subjects, we obtained pairwise functional connectivity measurements from a lattice of 7266 regions of interest covering the gray matter (26.4 million "connections") after preprocessing that included motion and slice timing correction, coregistration to an anatomic image, normalization to standard space, and voxelwise removal by regression of motion parameters, soft tissue, CSF, and white matter signals. Connections were grouped into multiple bins, and a leave-one-out classifier was evaluated on connections comprising each set of bins. Age, age-squared, gender, handedness, and site were included as covariates for the classifier. Results: Classification accuracy significantly outperformed chance but was much lower for multisite prediction than for previous single site results. As high as 60% accuracy was obtained for whole brain classification, with the best accuracy from connections involving regions of the default mode network, parahippocampal and fusiform gyri, insula, Wernicke Area, and intraparietal sulcus. The classifier score was related to symptom severity, social function, daily living skills, and verbal IQ. Classification accuracy was significantly higher for sites with longer BOLD imaging times. Conclusions: Multisite functional connectivity classification of autism outperformed chance using a simple leave-one-out classifier, but exhibited poorer accuracy than previous single site results.
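The classification scheme reduces to leave-one-out cross-validation over subjects. A simplified stand-in using scikit-learn on synthetic features; the study's own classifier binned the 26.4 million connections and included age, age-squared, gender, handedness, and site as covariates, none of which is modeled here:

```python
# Schematic leave-one-out classification on synthetic "connectivity" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
n_subjects, n_connections = 60, 200           # real data: 964 subjects, 26.4M connections
y = rng.integers(0, 2, n_subjects)            # 0 = control, 1 = autism (simulated labels)
X = rng.normal(size=(n_subjects, n_connections)) + 0.3 * y[:, None]

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
print(f"leave-one-out accuracy: {correct / n_subjects:.1%}")
```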

Journal ArticleDOI
01 Nov 2013-Science
TL;DR: An information-theoretical approach is used to distinguish the important parameters in two archetypical physics models and traces the emergence of an effective theory for long-scale observables to a compression of the parameter space quantified by the eigenvalues of the Fisher Information Matrix.
Abstract: The microscopically complicated real world exhibits behavior that often yields to simple yet quantitatively accurate descriptions. Predictions are possible despite large uncertainties in microscopic parameters, both in physics and in multiparameter models in other areas of science. We connect the two by analyzing parameter sensitivities in a prototypical continuum theory (diffusion) and at a self-similar critical point (the Ising model). We trace the emergence of an effective theory for long-scale observables to a compression of the parameter space quantified by the eigenvalues of the Fisher Information Matrix. A similar compression appears ubiquitously in models taken from diverse areas of science, suggesting that the parameter space structure underlying effective continuum and universal theories in physics also permits predictive modeling more generally.
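The paper's central object is the Fisher Information Matrix of a multiparameter model; "sloppiness" shows up as eigenvalues spread over many decades. A numerical sketch on a toy sum-of-exponentials model (not one of the paper's systems), using the Gauss-Newton approximation FIM ≈ J^T J:

```python
# Estimate FIM eigenvalues for a toy model y(t) = a1*exp(-k1*t) + a2*exp(-k2*t)
# by finite-difference Jacobians; parameter values are invented.
import numpy as np

t = np.linspace(0, 5, 50)

def model(theta):
    a1, k1, a2, k2 = theta
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

def jacobian(theta, eps=1e-6):
    J = np.empty((t.size, len(theta)))
    for i in range(len(theta)):
        dp = np.array(theta, float); dm = np.array(theta, float)
        dp[i] += eps; dm[i] -= eps
        J[:, i] = (model(dp) - model(dm)) / (2 * eps)
    return J

theta0 = [1.0, 1.0, 0.5, 0.3]
J = jacobian(theta0)
fim = J.T @ J                                  # Gauss-Newton approximation
eigvals = np.linalg.eigvalsh(fim)[::-1]
print("FIM eigenvalues:", eigvals)             # typically spread over many decades
```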

Journal ArticleDOI
TL;DR: The relationship between price and theory score corroborates research indicating that higher quality apps are more expensive and offers an opportunity for health and behavior change experts to partner with app developers to incorporate behavior change theories into the development of apps.
Abstract: Objective. To quantify the presence of health behavior theory constructs in iPhone apps targeting physical activity. Methods. This study used a content analysis of 127 apps from Apple’s App Store Health & Fitness category. Coders downloaded the apps and then used an established theory-based instrument to rate each app’s inclusion of theoretical constructs from prominent behavior change theories. Five common items were used to measure 20 theoretical constructs, for a total of 100 items. A theory score was calculated for each app. Multiple regression analysis was used to identify factors associated with higher theory scores. Results. Apps were generally observed to be lacking in theoretical content. Theory scores ranged from 1 to 28 on a 100-point scale. The health belief model was the most prevalent theory, accounting for 32% of all constructs. Regression analyses indicated that higher priced apps and apps that addressed a broader activity spectrum were associated with higher total theory scores. Conclusions. The relationship between price and theory score corroborates research indicating that higher quality apps are more expensive; there is an opportunity for health and behavior change experts to partner with app developers to incorporate behavior change theories into the development of physical activity apps.
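The pipeline described above is a score built from coded items followed by a regression on app attributes. A toy version with simulated data; no real app content is coded here, and the price-score relationship in this sketch is random by construction:

```python
# Theory score = sum of 100 binary coded items per app, then OLS on price.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_apps = 127
items = rng.random((n_apps, 100)) < 0.1        # 100 coded theory items per app
theory_score = items.sum(axis=1)               # 0-100 scale
price = rng.uniform(0, 10, n_apps)             # simulated app prices, USD

X = sm.add_constant(price)
print(sm.OLS(theory_score, X).fit().params)    # intercept and price coefficient
```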

Journal ArticleDOI
TL;DR: The authors found that the announcement of Timothy Geithner as nominee for Treasury Secretary in November 2008 produced a cumulative abnormal return for financial firms with which he had a connection, which reflected the perceived impact of relying on the advice of a small network of financial sector executives during a time of acute crisis and heightened policy discretion.
Abstract: The announcement of Timothy Geithner as nominee for Treasury Secretary in November 2008 produced a cumulative abnormal return for financial firms with which he had a connection. This return was about 6% after the first full day of trading and about 12% after ten trading days. There were subsequently abnormal negative returns for connected firms when news broke that Geithner’s confirmation might be derailed by tax issues. Excess returns for connected firms may reflect the perceived impact of relying on the advice of a small network of financial sector executives during a time of acute crisis and heightened policy discretion.
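Results like these come from a standard market-model event study: estimate expected returns on a pre-event window, then cumulate abnormal returns over the event window. A self-contained sketch on simulated returns; the per-day shock is hypothetical, chosen only to echo the reported order of magnitude:

```python
# Market-model event study: AR_t = R_firm,t - (alpha + beta * R_mkt,t),
# CAR = cumulative sum of AR over the event window. Returns are simulated.
import numpy as np

rng = np.random.default_rng(7)
n_est, n_event = 120, 10                      # estimation window, event window (days)
market = rng.normal(0.0005, 0.01, n_est + n_event)
firm = 0.0002 + 1.2 * market + rng.normal(0, 0.01, n_est + n_event)
firm[n_est:] += 0.006                         # hypothetical announcement effect

beta, alpha = np.polyfit(market[:n_est], firm[:n_est], 1)   # OLS market model
abnormal = firm[n_est:] - (alpha + beta * market[n_est:])
car = abnormal.cumsum()
print(f"CAR after {n_event} trading days: {car[-1]:.1%}")
```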

Journal ArticleDOI
TL;DR: In this paper, the authors investigate whether firms and their top executives bear reputational costs from engaging in aggressive tax avoidance activities and conclude that there is little evidence of tax shelter usage leading to reputational costs at the firm level.
Abstract: We investigate whether firms and their top executives bear reputational costs from engaging in aggressive tax avoidance activities. Prior literature has posited that reputational costs partially explain why so many firms apparently forgo the benefits of tax avoidance, the so-called “under-sheltering puzzle.” We employ a database of 118 firms that were subject to public scrutiny for having engaged in tax shelters, representing the largest sample of publicly identified corporate tax shelters analyzed to date. We examine the reputational costs that prior research has shown that firms and managers face in cases of alleged misconduct: increased CEO and CFO turnover, auditor turnover, lost sales, increased advertising costs, and decreased media reputation. Across a battery of tests, we find little evidence that firms or their top executives bear significant reputational costs as a result of being accused of engaging in tax shelter activities. Moreover, we find no decrease in firms’ tax avoidance activities after being accused of tax shelter activity. Finally, in tests of the capital market reaction to news of tax shelter involvement, we find that negative event-period returns fully reverse within a few weeks of the public scrutiny, consistent with a temporary market penalty to tax shelter news. In all, we conclude that there is little evidence of tax shelter usage leading to reputational costs at the firm level.

Journal ArticleDOI
07 Nov 2013-PLOS ONE
TL;DR: Genome-wide Complex Trait Analysis was used to analyze >2 million SNPs for 10,922 individuals from the Alzheimer's Disease Genetics Consortium to assess the phenotypic variance explained first by known late-onset AD loci, and then by all SNPs in the Alzheimer’s disease Genetics Consortium dataset.
Abstract: Alzheimer’s disease (AD) is a complex disorder influenced by environmental and genetic factors. Recent work has identified 11 AD markers in 10 loci. We used Genome-wide Complex Trait Analysis to analyze >2 million SNPs for 10,922 individuals from the Alzheimer’s Disease Genetics Consortium to assess the phenotypic variance explained first by known late-onset AD loci, and then by all SNPs in the Alzheimer’s Disease Genetics Consortium dataset. In all, 33% of total phenotypic variance is explained by all common SNPs. APOE alone explained 6% and other known markers 2%, meaning more than 25% of phenotypic variance remains unexplained by known markers, but is tagged by common SNPs included on genotyping arrays or imputed with HapMap genotypes. Novel AD markers that explain large amounts of phenotypic variance are likely to be rare and unidentifiable using genome-wide association studies. Based on our findings and the current direction of human genetics research, we suggest specific study designs for future studies to identify the remaining heritability of Alzheimer’s disease.
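The abstract's variance bookkeeping, made explicit:

```python
# Of the 33% of phenotypic variance tagged by all common SNPs, APOE explains
# 6% and the other known markers 2%, leaving >25% tagged but unexplained.
total_common_snp_variance = 0.33
apoe = 0.06
other_known_markers = 0.02
unexplained = total_common_snp_variance - apoe - other_known_markers
print(f"tagged but unexplained: {unexplained:.0%}")  # 25%
```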

Journal ArticleDOI
TL;DR: An adaptive isogeometric collocation method is explored that is based on local hierarchical refinement of NURBS basis functions and collocation points derived from the corresponding multi-level Greville abscissae, and introduces the concept of weighted collocation that can be consistently developed from the weighted residual form and the two-scale relation of B-splines.
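Greville abscissae, the source of the collocation points mentioned above, are averages of p consecutive knots of the B-spline basis. A small sketch of the standard definition; this is not the paper's hierarchical, multi-level weighted construction:

```python
# Greville abscissae: xi_i = (t_{i+1} + ... + t_{i+p}) / p for each of the
# n = len(knots) - p - 1 basis functions of degree p.
import numpy as np

def greville_abscissae(knots, degree):
    knots = np.asarray(knots, float)
    n_basis = len(knots) - degree - 1
    return np.array([knots[i + 1 : i + degree + 1].mean() for i in range(n_basis)])

# Cubic B-spline basis on an open (clamped) knot vector over [0, 1].
knots = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]
print(greville_abscissae(knots, degree=3))  # [0.0, ..., 0.5, ..., 1.0]
```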