
Showing papers by "University of Maryland, College Park" published in 2009


Journal ArticleDOI
TL;DR: Bowtie extends previous Burrows-Wheeler techniques with a novel quality-aware backtracking algorithm that permits mismatches; multiple processor cores can be used simultaneously to achieve even greater alignment speeds.
Abstract: Bowtie is an ultrafast, memory-efficient alignment program for aligning short DNA sequence reads to large genomes. For the human genome, Burrows-Wheeler indexing allows Bowtie to align more than 25 million reads per CPU hour with a memory footprint of approximately 1.3 gigabytes. Bowtie extends previous Burrows-Wheeler techniques with a novel quality-aware backtracking algorithm that permits mismatches. Multiple processor cores can be used simultaneously to achieve even greater alignment speeds. Bowtie is open source http://bowtie.cbcb.umd.edu.
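
The Burrows-Wheeler indexing Bowtie relies on supports read alignment through a backward search over an FM-index. Below is a minimal Python sketch of that exact-matching backward search on a toy sequence; it only illustrates the indexing idea and is not Bowtie's implementation, which adds the quality-aware backtracking described above to tolerate mismatches.

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations (fine for toy inputs only)."""
    text += "$"                                   # unique end-of-string sentinel
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def fm_index(bw):
    """C[c]: count of characters smaller than c; occ[c][i]: occurrences of c in bw[:i]."""
    C, total = {}, 0
    for c in sorted(set(bw)):
        C[c] = total
        total += bw.count(c)
    occ = {c: [0] * (len(bw) + 1) for c in C}
    for i, ch in enumerate(bw):
        for c in C:
            occ[c][i + 1] = occ[c][i] + (ch == c)
    return C, occ

def count_matches(read, bw, C, occ):
    """Backward search: number of exact occurrences of `read` in the indexed text."""
    lo, hi = 0, len(bw)
    for c in reversed(read):
        if c not in C:
            return 0
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo

genome = "ACGTACGTTACG"
bw = bwt(genome)
C, occ = fm_index(bw)
print(count_matches("ACG", bw, C, occ))           # -> 3
```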

20,335 citations


Journal ArticleDOI
TL;DR: The TopHat pipeline is much faster than previous systems, mapping nearly 2.2 million reads per CPU hour, which is sufficient to process an entire RNA-Seq experiment in less than a day on a standard desktop computer.
Abstract: Motivation: A new protocol for sequencing the messenger RNA in a cell, known as RNA-Seq, generates millions of short sequence fragments in a single run. These fragments, or ‘reads’, can be used to measure levels of gene expression and to identify novel splice variants of genes. However, current software for aligning RNA-Seq data to a genome relies on known splice junctions and cannot identify novel ones. TopHat is an efficient read-mapping algorithm designed to align reads from an RNA-Seq experiment to a reference genome without relying on known splice sites. Results: We mapped the RNA-Seq reads from a recent mammalian RNA-Seq experiment and recovered more than 72% of the splice junctions reported by the annotation-based software from that study, along with nearly 20 000 previously unreported junctions. The TopHat pipeline is much faster than previous systems, mapping nearly 2.2 million reads per CPU hour, which is sufficient to process an entire RNA-Seq experiment in less than a day on a standard desktop computer. We describe several challenges unique to ab initio splice site discovery from RNA-Seq reads that will require further algorithm development. Availability: TopHat is free, open-source software available from http://tophat.cbcb.umd.edu Contact: ude.dmu.sc@eloc Supplementary information: Supplementary data are available at Bioinformatics online.

11,473 citations


Journal ArticleDOI
TL;DR: A series of improvements to the spectroscopic reductions are described, including better flat fielding and improved wavelength calibration at the blue end, better processing of objects with extremely strong narrow emission lines, and an improved determination of stellar metallicities.
Abstract: This paper describes the Seventh Data Release of the Sloan Digital Sky Survey (SDSS), marking the completion of the original goals of the SDSS and the end of the phase known as SDSS-II. It includes 11,663 deg^2 of imaging data, with most of the ~2000 deg^2 increment over the previous data release lying in regions of low Galactic latitude. The catalog contains five-band photometry for 357 million distinct objects. The survey also includes repeat photometry on a 120° long, 2°.5 wide stripe along the celestial equator in the Southern Galactic Cap, with some regions covered by as many as 90 individual imaging runs. We include a co-addition of the best of these data, going roughly 2 mag fainter than the main survey over 250 deg^2. The survey has completed spectroscopy over 9380 deg^2; the spectroscopy is now complete over a large contiguous area of the Northern Galactic Cap, closing the gap that was present in previous data releases. There are over 1.6 million spectra in total, including 930,000 galaxies, 120,000 quasars, and 460,000 stars. The data release includes improved stellar photometry at low Galactic latitude. The astrometry has all been recalibrated with the second version of the USNO CCD Astrograph Catalog, reducing the rms statistical errors at the bright end to 45 milliarcseconds per coordinate. We further quantify a systematic error in bright galaxy photometry due to poor sky determination; this problem is less severe than previously reported for the majority of galaxies. Finally, we describe a series of improvements to the spectroscopic reductions, including better flat fielding and improved wavelength calibration at the blue end, better processing of objects with extremely strong narrow emission lines, and an improved determination of stellar metallicities.

5,665 citations


Journal ArticleDOI
W. B. Atwood1, A. A. Abdo2, A. A. Abdo3, Markus Ackermann4  +289 moreInstitutions (37)
TL;DR: The Large Area Telescope (Fermi/LAT), the primary instrument on the Fermi Gamma-ray Space Telescope, is an imaging, wide field-of-view, high-energy gamma-ray telescope covering the energy range from below 20 MeV to more than 300 GeV.
Abstract: (Abridged) The Large Area Telescope (Fermi/LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy gamma-ray telescope, covering the energy range from below 20 MeV to more than 300 GeV. This paper describes the LAT, its pre-flight expected performance, and summarizes the key science objectives that will be addressed. On-orbit performance will be presented in detail in a subsequent paper. The LAT is a pair-conversion telescope with a precision tracker and calorimeter, each consisting of a 4x4 array of 16 modules, a segmented anticoincidence detector that covers the tracker array, and a programmable trigger and data acquisition system. Each tracker module has a vertical stack of 18 x,y tracking planes, including two layers (x and y) of single-sided silicon strip detectors and high-Z converter material (tungsten) per tray. Every calorimeter module has 96 CsI(Tl) crystals, arranged in an 8 layer hodoscopic configuration with a total depth of 8.6 radiation lengths. The aspect ratio of the tracker (height/width) is 0.4 allowing a large field-of-view (2.4 sr). Data obtained with the LAT are intended to (i) permit rapid notification of high-energy gamma-ray bursts (GRBs) and transients and facilitate monitoring of variable sources, (ii) yield an extensive catalog of several thousand high-energy sources obtained from an all-sky survey, (iii) measure spectra from 20 MeV to more than 50 GeV for several hundred sources, (iv) localize point sources to 0.3 - 2 arc minutes, (v) map and obtain spectra of extended sources such as SNRs, molecular clouds, and nearby galaxies, (vi) measure the diffuse isotropic gamma-ray background up to TeV energies, and (vii) explore the discovery space for dark matter.

3,666 citations


Journal ArticleDOI
TL;DR: This comprehensive global assessment of 215 studies found that seagrasses have been disappearing at a rate of 110 km² yr⁻¹ since 1980 and that 29% of the known areal extent has disappeared since seagrass areas were initially recorded in 1879.
Abstract: Coastal ecosystems and the services they provide are adversely affected by a wide variety of human activities. In particular, seagrass meadows are negatively affected by impacts accruing from the billion or more people who live within 50 km of them. Seagrass meadows provide important ecosystem services, including an estimated $1.9 trillion per year in the form of nutrient cycling; an order of magnitude enhancement of coral reef fish productivity; a habitat for thousands of fish, bird, and invertebrate species; and a major food source for endangered dugong, manatee, and green turtle. Although individual impacts from coastal development, degraded water quality, and climate change have been documented, there has been no quantitative global assessment of seagrass loss until now. Our comprehensive global assessment of 215 studies found that seagrasses have been disappearing at a rate of 110 km² yr⁻¹ since 1980 and that 29% of the known areal extent has disappeared since seagrass areas were initially recorded in 1879. Furthermore, rates of decline have accelerated from a median of 0.9% yr⁻¹ before 1940 to 7% yr⁻¹ since 1990. Seagrass loss rates are comparable to those reported for mangroves, coral reefs, and tropical rainforests and place seagrass meadows among the most threatened ecosystems on earth.

3,088 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site, compare it with traditional marketing vehicles, and employ a vector autoregressive (VAR) modeling approach to account for endogeneity among WOM, sign-ups, and traditional marketing.
Abstract: The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations from existing members, outbound WOM can be precisely tracked. Along with traditional marketing, WOM can then be linked to the number of new members subsequently joining the site (sign-ups). Because of the endogeneity among WOM, new sign-ups, and traditional marketing activity, the authors employ a vector autoregressive (VAR) modeling approach. Estimates from the VAR model show that WOM referrals have substantially longer carryover effects than traditional marketing actions and produce substantially higher response elasticities. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper-bound estimate for the financial incentives the firm might offer to stimulate WOM.
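
For readers unfamiliar with the modeling approach, the sketch below estimates a small vector autoregression on synthetic daily data and inspects impulse responses, the general mechanism by which carryover effects of WOM versus traditional marketing can be compared. The variable names, lag order, and data-generating process are placeholders, not the authors' specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
wom = rng.poisson(50, n).astype(float)        # daily WOM referrals (synthetic)
media = rng.poisson(20, n).astype(float)      # traditional marketing events (synthetic)
signups = 5 + 0.8 * np.roll(wom, 1) + 0.3 * np.roll(media, 1) + rng.normal(0, 5, n)

data = pd.DataFrame({"signups": signups, "wom": wom, "media": media}).iloc[1:]

model = VAR(data)
results = model.fit(maxlags=7, ic="aic")      # choose lag length by AIC, up to a week
irf = results.irf(14)                         # impulse responses over 14 periods

# Carryover of a one-unit WOM shock on sign-ups vs. a traditional-marketing shock:
i_signups = data.columns.get_loc("signups")
print(irf.irfs[:, i_signups, data.columns.get_loc("wom")])
print(irf.irfs[:, i_signups, data.columns.get_loc("media")])
```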

2,322 citations


Book
01 Jan 2009
TL;DR: This book develops a conceptual framework of natural states and open access orders, explains the transition from limited to open access orders, and proposes a new research agenda for the social sciences.
Abstract: Preface 1. The conceptual framework 2. The natural state 3. The natural state applied: English land law 4. Open access orders 5. Explaining the transition from limited to open access orders: the doorstep conditions 6. The transition proper 7. A new research agenda for the social sciences Afterword.

1,950 citations


Proceedings Article
07 Dec 2009
TL;DR: New quantitative methods for measuring semantic meaning in inferred topics are presented, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood.
Abstract: Probabilistic topic models are a popular tool for the unsupervised analysis of text, providing both a predictive model of future text and a latent topic representation of the corpus. Practitioners typically assume that the latent space is semantically meaningful. It is used to check models, summarize the corpus, and guide exploration of its contents. However, whether the latent space is interpretable is in need of quantitative evaluation. In this paper, we present new quantitative methods for measuring semantic meaning in inferred topics. We back these measures with large-scale user studies, showing that they capture aspects of the model that are undetected by previous measures of model quality based on held-out likelihood. Surprisingly, topic models which perform better on held-out likelihood may infer less semantically meaningful topics.

1,878 citations


Journal ArticleDOI
TL;DR: In this article, the authors combined information drawn from studies of individual clouds into a combined and updated statistical analysis of star-formation rates and efficiencies, numbers and lifetimes for spectral energy distribution (SED) classes, and clustering properties.
Abstract: The c2d Spitzer Legacy project obtained images and photometry with both IRAC and MIPS instruments for five large, nearby molecular clouds. Three of the clouds were also mapped in dust continuum emission at 1.1 mm, and optical spectroscopy has been obtained for some clouds. This paper combines information drawn from studies of individual clouds into a combined and updated statistical analysis of star-formation rates and efficiencies, numbers and lifetimes for spectral energy distribution (SED) classes, and clustering properties. Current star-formation efficiencies range from 3% to 6%; if star formation continues at current rates for 10 Myr, efficiencies could reach 15-30%. Star-formation rates and rates per unit area vary from cloud to cloud; taken together, the five clouds are producing about 260 M☉ of stars per Myr. The star-formation surface density is more than an order of magnitude larger than would be predicted from the Kennicutt relation used in extragalactic studies, reflecting the fact that those relations apply to larger scales, where more diffuse matter is included in the gas surface density. Measured against the dense gas probed by the maps of dust continuum emission, the efficiencies are much higher, with stellar masses similar to masses of dense gas, and the current stock of dense cores would be exhausted in 1.8 Myr on average. Nonetheless, star formation is still slow compared to that expected in a free-fall time, even in the dense cores. The derived lifetime for the Class I phase is 0.54 Myr, considerably longer than some estimates. Similarly, the lifetime for the Class 0 SED class, 0.16 Myr, with the notable exception of the Ophiuchus cloud, is longer than early estimates. If photometry is corrected for estimated extinction before calculating class indicators, the lifetimes drop to 0.44 Myr for Class I and to 0.10 for Class 0. These lifetimes assume a continuous flow through the Class II phase and should be considered median lifetimes or half-lives. Star formation is highly concentrated to regions of high extinction, and the youngest objects are very strongly associated with dense cores. The great majority (90%) of young stars lie within loose clusters with at least 35 members and a stellar density of 1 M☉ pc⁻³. Accretion at the sound speed from an isothermal sphere over the lifetime derived for the Class I phase could build a star of about 0.25 M☉, given an efficiency of 0.3. Building larger mass stars by using higher mass accretion rates could be problematic, as our data confirm and aggravate the "luminosity problem" for protostars. At a given T_bol, the values for L_bol are mostly less than predicted by standard infall models and scatter over several orders of magnitude. These results strongly suggest that accretion is time variable, with prolonged periods of very low accretion. Based on a very simple model and this sample of sources, half the mass of a star would be accreted during only 7% of the Class I lifetime, as represented by the eight most luminous objects.
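
As a back-of-envelope restatement of the efficiency arithmetic quoted above: the 3-6% current and 15-30% extrapolated efficiencies follow from the definition SFE = M_stars / (M_stars + M_gas). In the sketch below the single-cloud masses and rate are hypothetical placeholders, used only to show how the 10 Myr extrapolation works.

```python
def efficiency(m_stars, m_gas):
    """Star-formation efficiency from current stellar and gas mass."""
    return m_stars / (m_stars + m_gas)

m_stars, m_gas = 250.0, 4750.0   # hypothetical cloud: 5% current efficiency
sfr = 100.0                      # hypothetical star-formation rate, Msun per Myr

print(efficiency(m_stars, m_gas))                          # 0.05
# Continuing at this rate for 10 Myr, converting gas into stars at fixed total mass:
print(efficiency(m_stars + sfr * 10, m_gas - sfr * 10))    # 0.25, i.e. the quoted 15-30% regime
```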

1,752 citations


Journal ArticleDOI
22 May 2009-Science
TL;DR: A detailed genetic analysis of most major groups of African populations is provided, suggesting that Africans represent 14 ancestral populations that correlate with self-described ethnicity and shared cultural and/or linguistic properties.
Abstract: Africa is the source of all modern humans, but characterization of genetic variation and of relationships among populations across the continent has been enigmatic. We studied 121 African populations, four African American populations, and 60 non-African populations for patterns of variation at 1327 nuclear microsatellite and insertion/deletion markers. We identified 14 ancestral population clusters in Africa that correlate with self-described ethnicity and shared cultural and/or linguistic properties. We observed high levels of mixed ancestry in most populations, reflecting historical migration events across the continent. Our data also provide evidence for shared ancestry among geographically diverse hunter-gatherer populations (Khoesan speakers and Pygmies). The ancestry of African Americans is predominantly from Niger-Kordofanian (approximately 71%), European (approximately 13%), and other African (approximately 8%) populations, although admixture levels varied considerably among individuals. This study helps tease apart the complex evolutionary history of Africans and African Americans, aiding both anthropological and genetic epidemiologic studies.

1,376 citations


Journal ArticleDOI
TL;DR: The methods described in this paper are the first to address clinical metagenomic datasets comprising samples from multiple subjects and are robust across datasets of varied complexity and sampling level.
Abstract: Numerous studies are currently underway to characterize the microbial communities inhabiting our world. These studies aim to dramatically expand our understanding of the microbial biosphere and, more importantly, hope to reveal the secrets of the complex symbiotic relationship between us and our commensal bacterial microflora. An important prerequisite for such discoveries are computational tools that are able to rapidly and accurately compare large datasets generated from complex bacterial communities to identify features that distinguish them. We present a statistical method for comparing clinical metagenomic samples from two treatment populations on the basis of count data (e.g. as obtained through sequencing) to detect differentially abundant features. Our method, Metastats, employs the false discovery rate to improve specificity in high-complexity environments, and separately handles sparsely-sampled features using Fisher's exact test. Under a variety of simulations, we show that Metastats performs well compared to previously used methods, and significantly outperforms other methods for features with sparse counts. We demonstrate the utility of our method on several datasets including a 16S rRNA survey of obese and lean human gut microbiomes, COG functional profiles of infant and mature gut microbiomes, and bacterial and viral metabolic subsystem data inferred from random sequencing of 85 metagenomes. The application of our method to the obesity dataset reveals differences between obese and lean subjects not reported in the original study. For the COG and subsystem datasets, we provide the first statistically rigorous assessment of the differences between these populations. The methods described in this paper are the first to address clinical metagenomic datasets comprising samples from multiple subjects. Our methods are robust across datasets of varied complexity and sampling level. While designed for metagenomic applications, our software can also be applied to digital gene expression studies (e.g. SAGE). A web server implementation of our methods and freely available source code can be found at http://metastats.cbcb.umd.edu/.
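
The two ingredients described above, an exact test for sparsely sampled features and false-discovery-rate control across all features, can be sketched in a few lines. This is only an illustration of the idea, not the Metastats implementation: a Welch t-test on relative abundances stands in for the paper's permutation-based test, and the threshold and synthetic counts are placeholders.

```python
import numpy as np
from scipy.stats import fisher_exact, ttest_ind

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    q = np.empty(n)
    prev = 1.0
    for rank, idx in enumerate(order[::-1]):      # walk from the largest p-value down
        prev = min(prev, p[idx] * n / (n - rank))
        q[idx] = prev
    return q

def compare_features(counts_a, counts_b, sparse_threshold=10):
    """counts_a, counts_b: feature-by-sample count matrices for the two groups.
    Returns one BH-adjusted p-value per feature."""
    depth_a = counts_a.sum(axis=0)                # total reads per sample
    depth_b = counts_b.sum(axis=0)
    pvals = []
    for fa, fb in zip(counts_a, counts_b):
        if fa.sum() + fb.sum() < sparse_threshold:
            # Sparse feature: pool counts into a 2x2 table and use Fisher's exact test
            table = [[fa.sum(), depth_a.sum() - fa.sum()],
                     [fb.sum(), depth_b.sum() - fb.sum()]]
            _, pval = fisher_exact(table)
        else:
            # Abundant feature: compare per-sample relative abundances
            pval = ttest_ind(fa / depth_a, fb / depth_b, equal_var=False).pvalue
        pvals.append(pval)
    return benjamini_hochberg(pvals)

rng = np.random.default_rng(1)
obese = rng.poisson(lam=[[20], [0.2], [50]], size=(3, 8))   # 3 features x 8 subjects
lean = rng.poisson(lam=[[20], [1.0], [10]], size=(3, 6))    # 3 features x 6 subjects
print(compare_features(obese, lean))
```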

Journal ArticleDOI
TL;DR: The CHARMM-GUI Membrane Builder was expanded to automate the building of heterogeneous lipid bilayers, with or without a protein and with support for up to 32 different lipid types, and the efficacy of these new features was tested.

Journal ArticleDOI
24 Apr 2009-Science
TL;DR: To understand the biology and evolution of ruminants, the cattle genome was sequenced to about sevenfold coverage and provides a resource for understanding mammalian evolution and accelerating livestock genetic improvement for milk and meat production.
Abstract: To understand the biology and evolution of ruminants, the cattle genome was sequenced to about sevenfold coverage. The cattle genome contains a minimum of 22,000 genes, with a core set of 14,345 orthologs shared among seven mammalian species of which 1217 are absent or undetected in noneutherian (marsupial or monotreme) genomes. Cattle-specific evolutionary breakpoint regions in chromosomes have a higher density of segmental duplications, enrichment of repetitive elements, and species-specific variations in genes associated with lactation and immune responsiveness. Genes involved in metabolism are generally highly conserved, although five metabolic genes are deleted or extensively diverged from their human orthologs. The cattle genome sequence thus provides a resource for understanding mammalian evolution and accelerating livestock genetic improvement for milk and meat production.

Journal ArticleDOI
TL;DR: The goals of the current review are to provide some definitional, theoretical, and methodological clarity to the complex array of terms and constructs previously employed in the study of social withdrawal, and present a developmental framework describing pathways to and from social withdrawal in childhood.
Abstract: Socially withdrawn children frequently refrain from social activities in the presence of peers. The lack of social interaction in childhood may result from a variety of causes, including social fear and anxiety or a preference for solitude. From early childhood through to adolescence, socially withdrawn children are concurrently and predictively at risk for a wide range of negative adjustment outcomes, including socio-emotional difficulties (e.g., anxiety, low self-esteem, depressive symptoms, and internalizing problems), peer difficulties (e.g., rejection, victimization, poor friendship quality), and school difficulties (e.g., poor-quality teacher-child relationships, academic difficulties, school avoidance). The goals of the current review are to (a) provide some definitional, theoretical, and methodological clarity to the complex array of terms and constructs previously employed in the study of social withdrawal; (b) examine the predictors, correlates, and consequences of child and early-adolescent soc...

Journal ArticleDOI
TL;DR: By using independent mapping data and conserved synteny between the cow and human genomes, this work was able to construct an assembly with excellent large-scale contiguity in which a large majority (approximately 91%) of the genome has been placed onto the 30 B. taurus chromosomes.
Abstract: Background: The genome of the domestic cow, Bos taurus, was sequenced using a mixture of hierarchical and whole-genome shotgun sequencing methods. Results: We have assembled the 35 million sequence reads and applied a variety of assembly improvement techniques, creating an assembly of 2.86 billion base pairs that has multiple improvements over previous assemblies: it is more complete, covering more of the genome; thousands of gaps have been closed; many erroneous inversions, deletions, and translocations have been corrected; and thousands of single-nucleotide errors have been corrected. Our evaluation using independent metrics demonstrates that the resulting assembly is substantially more accurate and complete than alternative versions. Conclusions: By using independent mapping data and conserved synteny between the cow and human genomes, we were able to construct an assembly with excellent large-scale contiguity in which a large majority (approximately 91%) of the genome has been placed onto the 30 B. taurus chromosomes. We constructed a new cow-human synteny map that expands upon previous maps. We also identified for the first time a portion of the B. taurus Y chromosome.

Journal ArticleDOI
TL;DR: The Antibiotic Resistance Genes Database (ARDB) is a manually curated database unifying most of the publicly available information on antibiotic resistance and can be used as compendium of antibiotic resistance factors as well as to identify the resistance genes of newly sequenced genes, genomes, or metagenomes.
Abstract: The treatment of infections is increasingly compromised by the ability of bacteria to develop resistance to antibiotics through mutations or through the acquisition of resistance genes. Antibiotic resistance genes also have the potential to be used for bio-terror purposes through genetically modified organisms. In order to facilitate the identification and characterization of these genes, we have created a manually curated database—the Antibiotic Resistance Genes Database (ARDB)—unifying most of the publicly available information on antibiotic resistance. Each gene and resistance type is annotated with rich information, including resistance profile, mechanism of action, ontology, COG and CDD annotations, as well as external links to sequence and protein databases. Our database also supports sequence similarity searches and implements an initial version of a tool for characterizing common mutations that confer antibiotic resistance. The information we provide can be used as compendium of antibiotic resistance factors as well as to identify the resistance genes of newly sequenced genes, genomes, or metagenomes. Currently, ARDB contains resistance information for 13 293 genes, 377 types, 257 antibiotics, 632 genomes, 933 species and 124 genera. ARDB is available at http://ardb.cbcb.umd.edu/.

Journal ArticleDOI
16 Jul 2009-Nature
TL;DR: Analysis of the 363 megabase nuclear genome of the blood fluke, the first sequenced flatworm and a representative of the Lophotrochozoa, offers insights into early events in the evolution of the animals, including the development of a body pattern with bilateral symmetry and the development of tissues into organs.
Abstract: Schistosoma mansoni is responsible for the neglected tropical disease schistosomiasis that affects 210 million people in 76 countries. Here we present analysis of the 363 megabase nuclear genome of the blood fluke. It encodes at least 11,809 genes, with an unusual intron size distribution, and new families of micro-exon genes that undergo frequent alternative splicing. As the first sequenced flatworm, and a representative of the Lophotrochozoa, it offers insights into early events in the evolution of the animals, including the development of a body pattern with bilateral symmetry, and the development of tissues into organs. Our analysis has been informed by the need to find new drug targets. The deficits in lipid metabolism that make schistosomes dependent on the host are revealed, and the identification of membrane receptors, ion channels and more than 300 proteases provide new insights into the biology of the life cycle and new targets. Bioinformatics approaches have identified metabolic chokepoints, and a chemogenomic screen has pinpointed schistosome proteins for which existing drugs may be active. The information generated provides an invaluable resource for the research community to develop much needed new control tools for the treatment and eradication of this important and neglected disease.

Journal ArticleDOI
TL;DR: In this paper, the authors add intangible capital to the standard sources-of-growth framework used by the BLS and find that the inclusion of their list of intangible assets makes a significant difference in the observed patterns of U.S. economic growth.
Abstract: Published macroeconomic data traditionally exclude most intangible investment from measured GDP. This situation is beginning to change, but our estimates suggest that as much as $800 billion is still excluded from U.S. published data (as of 2003), and that this leads to the exclusion of more than $3 trillion of business intangible capital stock. To assess the importance of this omission, we add intangible capital to the standard sources-of-growth framework used by the BLS, and find that the inclusion of our list of intangible assets makes a significant difference in the observed patterns of U.S. economic growth. The rate of change of output per worker increases more rapidly when intangibles are counted as capital, and capital deepening becomes the unambiguously dominant source of growth in labor productivity. The role of multifactor productivity is correspondingly diminished, and labor's income share is found to have decreased significantly over the last 50 years.
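
The framework referred to above is standard growth accounting with intangible capital added as a separate input. In generic notation (not necessarily the authors' exact specification), labor-productivity growth decomposes into tangible capital deepening, intangible capital deepening, and multifactor productivity, with income shares summing to one:

```latex
\[
  \Delta \ln\!\left(\frac{Y}{L}\right)
    = s_{K}\,\Delta \ln\!\left(\frac{K}{L}\right)
    + s_{N}\,\Delta \ln\!\left(\frac{N}{L}\right)
    + \Delta \ln \mathrm{MFP},
  \qquad s_{K} + s_{N} + s_{L} = 1.
\]
```

Here K is tangible capital, N is intangible capital, and the s terms are income shares. Dropping the s_N term folds intangible deepening into measured MFP, which is why counting intangibles as capital raises the contribution of capital deepening and lowers that of MFP, consistent with the conclusions summarized above.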

Journal ArticleDOI
TL;DR: The GPCP has developed Version 2.1 of its long-term (1979-present) global Satellite-Gauge (SG) data sets to take advantage of the improved GPCC gauge analysis, which is one key input.
Abstract: The GPCP has developed Version 2.1 of its long-term (1979-present) global Satellite-Gauge (SG) data sets to take advantage of the improved GPCC gauge analysis, which is one key input. As well, the OPI estimates used in the pre-SSM/I era have been rescaled to 20 years of the SSM/I-era SG. The monthly, pentad, and daily GPCP products have been entirely reprocessed, continuing to enforce consistency of the submonthly estimates to the monthly. Version 2.1 is close to Version 2, with the global ocean, land, and total values about 0%, 6%, and 2% higher, respectively. The revised long-term global precipitation rate is 2.68 mm/d. The corresponding tropical (25°N-S) increases are 0%, 7%, and 3%. Long-term linear changes in the data tend to be smaller in Version 2.1, but the statistics are sensitive to the threshold for land/ocean separation and use of the pre-SSM/I part of the record.

Journal ArticleDOI
TL;DR: It is found that an individual's CFIP interacts with argument framing and issue involvement to affect attitudes toward the use of EHRs, and results suggest that attitude toward EHR use and CFIP directly influence opt-in behavioral intentions.
Abstract: Within the emerging context of the digitization of health care, electronic health records (EHRs) constitute a significant technological advance in the way medical information is stored, communicated, and processed by the multiple parties involved in health care delivery. However, in spite of the anticipated value potential of this technology, there is widespread concern that consumer privacy issues may impede its diffusion. In this study, we pose the question: Can individuals be persuaded to change their attitudes and opt-in behavioral intentions toward EHRs, and allow their medical information to be digitized even in the presence of significant privacy concerns? To investigate this question, we integrate an individual's concern for information privacy (CFIP) with the elaboration likelihood model (ELM) to examine attitude change and likelihood of opting-in to an EHR system. We theorize that issue involvement and argument framing interact to influence attitude change, and that concern for information privacy further moderates the effects of these variables. We also propose that likelihood of adoption is driven by concern for information privacy and attitude. We test our predictions using an experiment with 366 subjects where we manipulate the framing of the arguments supporting EHRs. We find that an individual's CFIP interacts with argument framing and issue involvement to affect attitudes toward the use of EHRs. In addition, results suggest that attitude toward EHR use and CFIP directly influence opt-in behavioral intentions. An important finding for both theory and practice is that even when people have high concerns for privacy, their attitudes can be positively altered with appropriate message framing. These results as well as other theoretical and practical implications are discussed.

Journal ArticleDOI
A. A. Abdo1, Markus Ackermann2, Marco Ajello2, Magnus Axelsson3  +198 moreInstitutions (28)
TL;DR: In this article, the Fermi Large Area Telescope (Fermi LAT) was used to measure the cosmic-ray electron spectrum up to 1 TeV; interpretations in terms of a conventional diffusive model and a potential local extra component are briefly discussed.
Abstract: Designed as a high-sensitivity gamma-ray observatory, the Fermi Large Area Telescope is also an electron detector with a large acceptance exceeding 2 m² sr at 300 GeV. Building on the gamma-ray analysis, we have developed an efficient electron detection strategy which provides sufficient background rejection for measurement of the steeply falling electron spectrum up to 1 TeV. Our high precision data show that the electron spectrum falls with energy as E^-3.0 and does not exhibit prominent spectral features. Interpretations in terms of a conventional diffusive model as well as a potential local extra component are briefly discussed.
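
A falling power-law spectrum of this kind is typically summarized by the slope of a straight-line fit in log-log space. The snippet below recovers an index of about -3.0 from synthetic flux values; the energies, fluxes, and binning are invented for illustration and are not the LAT measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
energy = np.logspace(1.3, 3.0, 20)                          # ~20 GeV to 1 TeV (synthetic bins)
flux = 1e2 * energy ** -3.0 * rng.lognormal(0.0, 0.05, 20)  # synthetic E^-3 spectrum + scatter

index, _ = np.polyfit(np.log10(energy), np.log10(flux), 1)  # slope of the log-log fit
print(f"fitted spectral index: {index:.2f}")                # close to -3.0
```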

Journal ArticleDOI
TL;DR: In this paper, a theoretical framework for understanding plasma turbulence in astrophysical plasmas is presented, motivated by observations of electromagnetic and density fluctuations in the solar wind, interstellar medium and galaxy clusters, as well as by models of particle heating in accretion disks.
Abstract: This paper presents a theoretical framework for understanding plasma turbulence in astrophysical plasmas. It is motivated by observations of electromagnetic and density fluctuations in the solar wind, interstellar medium and galaxy clusters, as well as by models of particle heating in accretion disks. All of these plasmas and many others have turbulent motions at weakly collisional and collisionless scales. The paper focuses on turbulence in a strong mean magnetic field. The key assumptions are that the turbulent fluctuations are small compared to the mean field, spatially anisotropic with respect to it and that their frequency is low compared to the ion cyclotron frequency. The turbulence is assumed to be forced at some system-specific outer scale. The energy injected at this scale has to be dissipated into heat, which ultimately cannot be accomplished without collisions. A kinetic cascade develops that brings the energy to collisional scales both in space and velocity. The nature of the kinetic cascade in various scale ranges depends on the physics of plasma fluctuations that exist there. There are four special scales that separate physically distinct regimes: the electron and ion gyroscales, the mean free path and the electron diffusion scale. In each of the scale ranges separated by these scales, the fully kinetic problem is systematically reduced to a more physically transparent and computationally tractable system of equations, which are derived in a rigorous way. In the inertial range above the ion gyroscale, the kinetic cascade separates into two parts: a cascade of Alfvenic fluctuations and a passive cascade of density and magnetic-field-strength fluctuations. The former are governed by the reduced magnetohydrodynamic (RMHD) equations at both the collisional and collisionless scales; the latter obey a linear kinetic equation along the (moving) field lines associated with the Alfvenic component (in the collisional limit, these compressive fluctuations become the slow and entropy modes of the conventional MHD). In the dissipation range below ion gyroscale, there are again two cascades: the kinetic-Alfven-wave (KAW) cascade governed by two fluid-like electron reduced magnetohydrodynamic (ERMHD) equations and a passive cascade of ion entropy fluctuations both in space and velocity. The latter cascade brings the energy of the inertial-range fluctuations that was Landau-damped at the ion gyroscale to collisional scales in the phase space and leads to ion heating. The KAW energy is similarly damped at the electron gyroscale and converted into electron heat. Kolmogorov-style scaling relations are derived for all of these cascades. The relationship between the theoretical models proposed in this paper and astrophysical applications and observations is discussed in detail.
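
For reference, the reduced MHD equations mentioned above for the Alfvenic cascade are commonly written in terms of a stream function and a flux function; the form below is the standard textbook (Strauss) form in generic notation, not copied from the paper.

```latex
\[
  \partial_t \nabla_{\perp}^{2}\Phi + \{\Phi,\,\nabla_{\perp}^{2}\Phi\}
    = v_{A}\,\partial_z \nabla_{\perp}^{2}\Psi + \{\Psi,\,\nabla_{\perp}^{2}\Psi\},
  \qquad
  \partial_t \Psi + \{\Phi,\,\Psi\} = v_{A}\,\partial_z \Phi.
\]
```

Here {A, B} = ∂xA ∂yB - ∂yA ∂xB is the perpendicular Poisson bracket, Φ and Ψ are the stream and flux functions of the perpendicular velocity and magnetic-field perturbations, z is the direction of the mean field, and v_A is the Alfvén speed.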

Journal ArticleDOI
TL;DR: This paper found that individual follower's "power distance" orientation and their group's shared perceptions of tra... using 560 followers and 174 leaders in the People's Republic of China and United States.
Abstract: Using 560 followers and 174 leaders in the People's Republic of China and United States, we found that individual follower's “power distance” orientation and their group's shared perceptions of tra...

Journal ArticleDOI
01 Mar 2009
TL;DR: Data collected in three phases from multiple sources revealed significant differences between management and employee perspectives of HPWSs and how the two perspectives relate to employee individual performance in the service context.
Abstract: Extant research on high-performance work systems (HPWSs) has primarily examined the effects of HPWSs on establishment or firm-level performance from a management perspective in manufacturing settings. The current study extends this literature by differentiating management and employee perspectives of HPWSs and examining how the two perspectives relate to employee individual performance in the service context. Data collected in three phases from multiple sources involving 292 managers, 830 employees, and 1,772 customers of 91 bank branches revealed significant differences between management and employee perspectives of HPWSs. There were also significant differences in employee perspectives of HPWSs among employees of different employment statuses and among employees of the same status. Further, employee perspective of HPWSs was positively related to individual general service performance through the mediation of employee human capital and perceived organizational support and was positively related to individual knowledge-intensive service performance through the mediation of employee human capital and psychological empowerment. At the same time, management perspective of HPWSs was related to employee human capital and both types of service performance. Finally, a branch's overall knowledge-intensive service performance was positively associated with customer overall satisfaction with the branch's service.

Journal ArticleDOI
TL;DR: In this paper, the authors assess 10 start-of-spring (SOS) methods for North America between 1982 and 2006 and find that SOS estimates were more related to the first leaf and first flowers expanding phenological stages.
Abstract: Shifts in the timing of spring phenology are a central feature of global change research. Long-term observations of plant phenology have been used to track vegetation responses to climate variability but are often limited to particular species and locations and may not represent synoptic patterns. Satellite remote sensing is instead used for continental to global monitoring. Although numerous methods exist to extract phenological timing, in particular start-of-spring (SOS), from time series of reflectance data, a comprehensive intercomparison and interpretation of SOS methods has not been conducted. Here, we assess 10 SOS methods for North America between 1982 and 2006. The techniques include consistent inputs from the 8 km Global Inventory Modeling and Mapping Studies Advanced Very High Resolution Radiometer NDVIg dataset, independent data for snow cover, soil thaw, lake ice dynamics, spring streamflow timing, over 16 000 individual measurements of ground-based phenology, and two temperature-driven models of spring phenology. Compared with an ensemble of the 10 SOS methods, we found that individual methods differed in average day-of-year estimates by ±60 days and in standard deviation by ±20 days. The ability of the satellite methods to retrieve SOS estimates was highest in northern latitudes and lowest in arid, tropical, and Mediterranean ecoregions. The ordinal rank of SOS methods varied geographically, as did the relationships between SOS estimates and the cryospheric/hydrologic metrics. Compared with ground observations, SOS estimates were more related to the first leaf and first flowers expanding phenological stages. We found no evidence for time trends in spring arrival from ground- or model-based data; using an ensemble estimate from two methods that were more closely related to ground observations than other methods, SOS
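
As an illustration of what an SOS retrieval does, the sketch below applies a simple half-maximum threshold to a smoothed annual NDVI curve. This is one common style of retrieval, shown with synthetic data; it is not necessarily one of the ten methods compared in the paper.

```python
import numpy as np

def sos_half_max(doy, ndvi, window=3):
    """First day-of-year at which smoothed NDVI crosses halfway between its min and max."""
    kernel = np.ones(window) / window
    smooth = np.convolve(ndvi, kernel, mode="same")        # simple moving-average smoothing
    threshold = smooth.min() + 0.5 * (smooth.max() - smooth.min())
    above = np.where(smooth >= threshold)[0]
    return int(doy[above[0]]) if above.size else None

doy = np.arange(1, 366, 8)                                 # 8-day composites (synthetic)
ndvi = 0.2 + 0.5 / (1.0 + np.exp(-(doy - 120) / 10.0))     # synthetic green-up near DOY 120
print(sos_half_max(doy, ndvi))                             # roughly DOY 120
```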

Journal ArticleDOI
TL;DR: The performance of lasso penalized logistic regression in case-control disease gene mapping with a large number of SNP (single nucleotide polymorphism) predictors is evaluated; the coeliac disease results replicate previous SNP findings and shed light on possible interactions among the SNPs.
Abstract: Motivation: In ordinary regression, imposition of a lasso penalty makes continuous model selection straightforward. Lasso penalized regression is particularly advantageous when the number of predictors far exceeds the number of observations. Method: The present article evaluates the performance of lasso penalized logistic regression in case–control disease gene mapping with a large number of SNPs (single nucleotide polymorphisms) predictors. The strength of the lasso penalty can be tuned to select a predetermined number of the most relevant SNPs and other predictors. For a given value of the tuning constant, the penalized likelihood is quickly maximized by cyclic coordinate ascent. Once the most potent marginal predictors are identified, their two-way and higher order interactions can also be examined by lasso penalized logistic regression. Results: This strategy is tested on both simulated and real data. Our findings on coeliac disease replicate the previous SNP results and shed light on possible interactions among the SNPs. Availability: The software discussed is available in Mendel 9.0 at the UCLA Human Genetics web site. Contact: klange@ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online.
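
The selection behavior described above can be illustrated with an off-the-shelf L1-penalized logistic regression on synthetic genotypes: shrinking the penalty's tuning constant drives more SNP coefficients to exactly zero. The paper's own implementation is the cyclic coordinate-ascent solver in Mendel, not scikit-learn, and all data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_snps = 400, 2000
X = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)   # genotypes coded 0/1/2
logits = X[:, :5] @ np.full(5, 0.8) - 4.0                          # 5 truly associated SNPs
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))                 # case/control status

# A stronger penalty (smaller C) keeps fewer SNPs; tune C to retain a target number.
for C in (0.005, 0.02, 0.1):
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C, max_iter=5000)
    model.fit(X, y)
    print(f"C={C}: {np.count_nonzero(model.coef_)} SNPs with nonzero coefficients")
```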

Proceedings ArticleDOI
01 Sep 2009
TL;DR: In this paper, a robust visual tracking method was proposed by casting tracking as a sparse approximation problem in a particle filter framework, where each target candidate is sparsely represented in the space spanned by target templates and trivial templates.
Abstract: In this paper we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, corruption and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target at a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an ℓ1-regularized least squares problem. Then the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework in which a particle filter is used for propagating sample distributions over time. Two additional components further improve the robustness of our approach: 1) the nonnegativity constraints that help filter out clutter that is similar to tracked targets in reversed intensity patterns, and 2) a dynamic template update scheme that keeps track of the most representative templates throughout the tracking procedure. We test the proposed approach on five challenging sequences involving heavy occlusions, drastic illumination changes, and large pose variations. The proposed approach shows excellent performance in comparison with three previously proposed trackers.
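
The core scoring step described above can be sketched with an off-the-shelf solver: each candidate patch is represented as a nonnegative, sparse combination of target templates plus positive and negative trivial (identity) templates, and candidates are ranked by their reconstruction error on the target templates alone. The solver, template sizes, and data below are illustrative placeholders, not the authors' optimizer or full particle-filter tracker.

```python
import numpy as np
from sklearn.linear_model import Lasso

def candidate_score(y, target_templates, alpha=0.01):
    """Score one candidate patch y (flattened) against a (d x k) template matrix."""
    d, k = target_templates.shape
    trivial = np.eye(d)                                    # one trivial template per pixel
    D = np.hstack([target_templates, trivial, -trivial])   # +/- trivial templates for occlusion
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    lasso.fit(D, y)
    recon = target_templates @ lasso.coef_[:k]             # reconstruction from targets only
    return -np.linalg.norm(y - recon)                      # higher (less negative) is better

rng = np.random.default_rng(0)
templates = rng.random((100, 10))                  # 10 target templates of 100 pixels each
good = templates @ rng.dirichlet(np.ones(10))      # candidate that resembles the target
clutter = rng.random(100)                          # background clutter
print(candidate_score(good, templates), candidate_score(clutter, templates))
```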

Journal ArticleDOI
TL;DR: In this article, the authors established the constructs of calling and vocation within counseling psychology, with an eye toward stimulating research and providing useful practice applications, and explained how the constructs apply to the domain of human work, review empirical and theoretical work related to calling, and differentiate the terms from each other and related constructs.
Abstract: The purpose of this article is to initiate an effort to establish the constructs calling and vocation within counseling psychology. First, updated definitions of calling and vocation, developed with an eye toward stimulating research and providing useful practice applications, are proposed. Next, the authors explain how the constructs apply to the domain of human work, review empirical and theoretical work related to calling and vocation and their role in human functioning, and differentiate the terms from each other and related constructs. Finally, directions for basic and applied research on calling and vocation are suggested, and implications for career counseling practice are outlined.

Proceedings ArticleDOI
04 Nov 2009
TL;DR: This work investigates the use of Twitter to build a news processing system, called TwitterStand, from Twitter tweets, to capture tweets that correspond to late breaking news, analogous to a distributed news wire service.
Abstract: Twitter is an electronic medium that allows a large user populace to communicate with each other simultaneously. Inherent to Twitter is an asymmetrical relationship between friends and followers that provides an interesting social network like structure among the users of Twitter. Twitter messages, called tweets, are restricted to 140 characters and thus are usually very focused. We investigate the use of Twitter to build a news processing system, called TwitterStand, from Twitter tweets. The idea is to capture tweets that correspond to late breaking news. The result is analogous to a distributed news wire service. The difference is that the identities of the contributors/reporters are not known in advance and there may be many of them. Furthermore, tweets are not sent according to a schedule: they occur as news is happening, and tend to be noisy while usually arriving at a high throughput rate. Some of the issues addressed include removing the noise, determining tweet clusters of interest bearing in mind that the methods must be online, and determining the relevant locations associated with the tweets.
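
The clustering requirement described above (online operation at high throughput) is often met with a single-pass, leader-follower scheme over term vectors. The sketch below groups tweets by cosine similarity of hashed term vectors; the vectorizer, threshold, and example tweets are placeholders, not TwitterStand's actual pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer

vectorizer = HashingVectorizer(n_features=2 ** 12, norm="l2", stop_words="english")
clusters = []            # each cluster: {"centroid": mean term vector, "tweets": [...]}
THRESHOLD = 0.4          # minimum cosine similarity to join an existing cluster (placeholder)

def add_tweet(text):
    """Assign a tweet to the closest cluster, or start a new one (single pass, online)."""
    v = vectorizer.transform([text]).toarray().ravel()     # unit-length term vector
    best, best_sim = None, 0.0
    for c in clusters:
        sim = float(v @ c["centroid"]) / (np.linalg.norm(c["centroid"]) + 1e-12)
        if sim > best_sim:
            best, best_sim = c, sim
    if best is not None and best_sim >= THRESHOLD:
        best["tweets"].append(text)
        n = len(best["tweets"])
        best["centroid"] = (best["centroid"] * (n - 1) + v) / n   # running mean centroid
    else:
        clusters.append({"centroid": v, "tweets": [text]})

for tweet in ["Strong earthquake hits the coastal city",
              "Major earthquake shakes coastal city today",
              "New phone model launches today",
              "Long lines as the new phone model launches"]:
    add_tweet(tweet)
print([len(c["tweets"]) for c in clusters])        # expect two clusters of two tweets each
```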