
Showing papers by "University of Maryland, College Park" published in 2010


Journal ArticleDOI
TL;DR: The results suggest that Cufflinks can illuminate the substantial regulatory flexibility and complexity in even this well-studied model of muscle development and that it can improve transcriptome-based genome annotation.
Abstract: High-throughput mRNA sequencing (RNA-Seq) promises simultaneous transcript discovery and abundance estimation. However, this would require algorithms that are not restricted by prior gene annotations and that account for alternative transcription and splicing. Here we introduce such algorithms in an open-source software program called Cufflinks. To test Cufflinks, we sequenced and analyzed >430 million paired 75-bp RNA-Seq reads from a mouse myoblast cell line over a differentiation time series. We detected 13,692 known transcripts and 3,724 previously unannotated ones, 62% of which are supported by independent expression data or by homologous genes in other species. Over the time series, 330 genes showed complete switches in the dominant transcription start site (TSS) or splice isoform, and we observed more subtle shifts in 1,304 other genes. These results suggest that Cufflinks can illuminate the substantial regulatory flexibility and complexity in even this well-studied model of muscle development and that it can improve transcriptome-based genome annotation.
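Cufflinks reports transcript abundances in FPKM (fragments per kilobase of transcript per million mapped fragments). As background, a minimal sketch of that normalization alone (not of Cufflinks' actual likelihood-based assignment of ambiguous fragments to isoforms, which is considerably more involved):

```python
def fpkm(fragments: int, transcript_len_bp: int, total_mapped: int) -> float:
    """FPKM = fragments * 1e9 / (transcript length in bp * total mapped fragments).
    Normalizes a raw fragment count by transcript length and library depth."""
    return fragments * 1e9 / (transcript_len_bp * total_mapped)

# Toy numbers (hypothetical): 500 fragments on a 2 kb transcript,
# in a library of 40 million mapped fragments.
print(fpkm(500, 2000, 40_000_000))  # 6.25
```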

13,337 citations


Book ChapterDOI
TL;DR: This paper provides a concise overview of time series analysis in the time and frequency domains with lots of references for further reading.
Abstract: Any series of observations ordered along a single dimension, such as time, may be thought of as a time series. The emphasis in time series analysis is on studying the dependence among observations at different points in time. What distinguishes time series analysis from general multivariate analysis is precisely the temporal order imposed on the observations. Many economic variables, such as GNP and its components, price indices, sales, and stock returns are observed over time. In addition to being interested in the contemporaneous relationships among such variables, we are often concerned with relationships between their current and past values, that is, relationships over time.
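The dependence among observations at different points in time is most often summarized by the sample autocorrelation function; a self-contained sketch (the series below is purely illustrative):

```python
def autocorr(x, lag):
    """Sample autocorrelation at a given lag: covariance of (x_t, x_{t+lag})
    divided by the sample variance, both taken around the overall mean."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var

# A strictly alternating series is strongly negatively correlated at lag 1.
series = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
print(autocorr(series, 1))  # -0.875
```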

9,919 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the most important aspects of modified gravity theories that include higher-order curvature invariants, covering all known formalisms: metric, Palatini, and metric-affine.
Abstract: Modified gravity theories have received increased attention lately due to combined motivation coming from high-energy physics, cosmology, and astrophysics. Among numerous alternatives to Einstein's theory of gravity, theories that include higher-order curvature invariants, and specifically the particular class of $f(R)$ theories, have a long history. In the last five years there has been a new stimulus for their study, leading to a number of interesting results. Here $f(R)$ theories of gravity are reviewed in an attempt to comprehensively present their most important aspects and cover the largest possible portion of the relevant literature. All known formalisms are presented---metric, Palatini, and metric affine---and the following topics are discussed: motivation; actions, field equations, and theoretical aspects; equivalence with other theories; cosmological aspects and constraints; viability criteria; and astrophysical applications.
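For orientation, the action and metric-formalism field equations the review surveys take a compact standard form (standard textbook expressions, not quoted from this review):

```latex
% f(R) action: Einstein-Hilbert with R replaced by a general function f(R)
S = \frac{1}{2\kappa} \int \mathrm{d}^4x \, \sqrt{-g}\, f(R) + S_{\mathrm{m}}

% Metric-formalism field equations (fourth order in the metric);
% setting f(R) = R recovers Einstein's equations.
f'(R) R_{\mu\nu} - \frac{1}{2} f(R) g_{\mu\nu}
  - \left( \nabla_\mu \nabla_\nu - g_{\mu\nu} \Box \right) f'(R)
  = \kappa T_{\mu\nu}
```

The Palatini and metric-affine formalisms, discussed in the review, treat the connection as independent of the metric and yield different field equations.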

4,027 citations


Journal ArticleDOI
14 Jan 2010-Nature
TL;DR: An accurate soybean genome sequence will facilitate the identification of the genetic basis of many soybean traits, and accelerate the creation of improved soybean varieties.
Abstract: Soybean (Glycine max) is one of the most important crop plants for seed protein and oil content, and for its capacity to fix atmospheric nitrogen through symbioses with soil-borne microorganisms. We sequenced the 1.1-gigabase genome by a whole-genome shotgun approach and integrated it with physical and high-density genetic maps to create a chromosome-scale draft sequence assembly. We predict 46,430 protein-coding genes, 70% more than Arabidopsis and similar to the poplar genome which, like soybean, is an ancient polyploid (palaeopolyploid). About 78% of the predicted genes occur in chromosome ends, which comprise less than one-half of the genome but account for nearly all of the genetic recombination. Genome duplications occurred at approximately 59 and 13 million years ago, resulting in a highly duplicated genome with nearly 75% of the genes present in multiple copies. The two duplication events were followed by gene diversification and loss, and numerous chromosome rearrangements. An accurate soybean genome sequence will facilitate the identification of the genetic basis of many soybean traits, and accelerate the creation of improved soybean varieties.

3,743 citations


Journal ArticleDOI
TL;DR: The presented lipid FF is developed and applied to phospholipid bilayers with both choline and ethanolamine containing head groups and with both saturated and unsaturated aliphatic chains and is anticipated to be of utility for simulations of pure lipid systems as well as heterogeneous systems including membrane proteins.
Abstract: A significant modification to the additive all-atom CHARMM lipid force field (FF) is developed and applied to phospholipid bilayers with both choline and ethanolamine containing head groups and with both saturated and unsaturated aliphatic chains. Motivated by the current CHARMM lipid FF (C27 and C27r) systematically yielding values of the surface area per lipid that are smaller than experimental estimates and gel-like structures of bilayers well above the gel transition temperature, selected torsional, Lennard-Jones and partial atomic charge parameters were modified by targeting both quantum mechanical (QM) and experimental data. QM calculations ranging from high-level ab initio calculations on small molecules to semiempirical QM studies on a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) bilayer in combination with experimental thermodynamic data were used as target data for parameter optimization. These changes were tested with simulations of pure bilayers at high hydration of the following six lipids: ...
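Among the parameters modified are Lennard-Jones terms; as background, a sketch of the standard 12-6 Lennard-Jones form in the CHARMM convention (the well depth and minimum-distance values below are illustrative, not the published force-field parameters):

```python
def lj_energy(r, epsilon, r_min):
    """CHARMM-convention 12-6 Lennard-Jones energy: well depth `epsilon`
    (kcal/mol) at separation `r_min` (Angstrom); repulsive r^-12 wall,
    attractive r^-6 dispersion tail."""
    x = r_min / r
    return epsilon * (x**12 - 2 * x**6)

# At r = r_min the energy is exactly -epsilon (the bottom of the well).
print(lj_energy(3.5, 0.1, 3.5))  # -0.1
```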

3,489 citations


Journal ArticleDOI
TL;DR: The measurement of the supercurrent through the junction allows one to discern topologically distinct phases and to observe a topological phase transition by simply changing the in-plane magnetic field or the gate voltage; observing this transition would be a direct demonstration of the existence of Majorana particles.
Abstract: We propose and analyze theoretically an experimental setup for detecting the elusive Majorana particle in semiconductor-superconductor heterostructures. The experimental system consists of a one-dimensional semiconductor wire with strong spin-orbit Rashba interaction embedded into a superconducting quantum interference device. We show that the energy spectra of the Andreev bound states at the junction are qualitatively different in topologically trivial (i.e., not containing any Majorana) and nontrivial phases, having an even and odd number of crossings at zero energy, respectively. The measurement of the supercurrent through the junction allows one to discern topologically distinct phases and observe a topological phase transition by simply changing the in-plane magnetic field or the gate voltage. The observation of this phase transition will be a direct demonstration of the existence of Majorana particles.

2,702 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used a revised version of the Carnegie-Ames-Stanford-Approach (CASA) biogeochemical model and improved satellite-derived estimates of area burned, fire activity, and plant productivity to calculate fire emissions for the 1997-2009 period on a 0.5° spatial resolution with a monthly time step.
Abstract: New burned area datasets and top-down constraints from atmospheric concentration measurements of pyrogenic gases have decreased the large uncertainty in fire emissions estimates. However, significant gaps remain in our understanding of the contribution of deforestation, savanna, forest, agricultural waste, and peat fires to total global fire emissions. Here we used a revised version of the Carnegie-Ames-Stanford Approach (CASA) biogeochemical model and improved satellite-derived estimates of area burned, fire activity, and plant productivity to calculate fire emissions for the 1997–2009 period at 0.5° spatial resolution with a monthly time step. For November 2000 onwards, estimates were based on burned area, active fire detections, and plant productivity from the MODerate resolution Imaging Spectroradiometer (MODIS) sensor. For the partitioning we focused on the MODIS era. We used maps of burned area derived from the Tropical Rainfall Measuring Mission (TRMM) Visible and Infrared Scanner (VIRS) and Along-Track Scanning Radiometer (ATSR) active fire data prior to MODIS (1997–2000) and estimates of plant productivity derived from Advanced Very High Resolution Radiometer (AVHRR) observations during the same period. Average global fire carbon emissions according to this version 3 of the Global Fire Emissions Database (GFED3) were 2.0 Pg C year^-1 with significant interannual variability during 1997–2001 (2.8 Pg C year^-1 in 1998 and 1.6 Pg C year^-1 in 2001). Globally, emissions during 2002–2007 were relatively constant (around 2.1 Pg C year^-1) before declining in 2008 (1.7 Pg C year^-1) and 2009 (1.5 Pg C year^-1), partly due to lower deforestation fire emissions in South America and tropical Asia. On a regional basis, emissions were highly variable during 2002–2007 (e.g., boreal Asia, South America, and Indonesia), but these regional differences canceled out at a global level.
During the MODIS era (2001–2009), most carbon emissions were from fires in grasslands and savannas (44%), with smaller contributions from tropical deforestation and degradation fires (20%), woodland fires (mostly confined to the tropics, 16%), forest fires (mostly in the extratropics, 15%), agricultural waste burning (3%), and tropical peat fires (3%). The contribution from agricultural waste fires was likely a lower bound because our approach for measuring burned area could not detect all of these relatively small fires. Total carbon emissions were on average 13% lower than in our previous (GFED2) work. For reduced trace gases such as CO and CH4, deforestation, degradation, and peat fires were more important contributors because of higher emissions of reduced trace gases per unit carbon combusted compared to savanna fires. Carbon emissions from tropical deforestation, degradation, and peatland fires were on average 0.5 Pg C year^-1. The carbon emissions from these fires may not be balanced by regrowth following fire. Our results provide the first global assessment of the contribution of different sources to total global fire emissions for the past decade, and supply the community with an improved 13-year fire emissions time series.

2,494 citations


Journal ArticleDOI
TL;DR: In this article, the authors built and tested a theoretical model linking empowering leadership with creativity via several intervening variables. They found that, as anticipated, empowering leadership positively affected psychological empowerment, which in turn influenced both intrinsic motivation and creative process engagement.
Abstract: Synthesizing theories of leadership, empowerment, and creativity, this research built and tested a theoretical model linking empowering leadership with creativity via several intervening variables. Using survey data from professional employees and their supervisors in a large information technology company in China, we found that, as anticipated, empowering leadership positively affected psychological empowerment, which in turn influenced both intrinsic motivation and creative process engagement. These latter two variables then had a positive influence on creativity. Empowerment role identity moderated the link between empowering leadership and psychological empowerment, whereas leader encouragement of creativity moderated the connection between psychological empowerment and creative process engagement.

2,123 citations


Journal ArticleDOI
TL;DR: The potential impacts of information and ICTs – especially e-government and social media – on cultural attitudes about transparency are explored.

1,850 citations


Journal ArticleDOI
TL;DR: It is argued that batch effects (as well as other technical and biological artefacts) are widespread and critical to address and experimental and computational approaches for doing so are reviewed.
Abstract: High-throughput technologies are widely used, for example to assay genetic variants, gene and protein expression, and epigenetic modifications. One often overlooked complication with such studies is batch effects, which occur because measurements are affected by laboratory conditions, reagent lots and personnel differences. This becomes a major problem when batch effects are correlated with an outcome of interest and lead to incorrect conclusions. Using both published studies and our own analyses, we argue that batch effects (as well as other technical and biological artefacts) are widespread and critical to address. We review experimental and computational approaches for doing so.
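The simplest of the computational corrections reviewed in this area is a per-batch location adjustment; a minimal sketch (real methods such as ComBat also adjust scale and apply empirical Bayes shrinkage, which this toy version omits):

```python
from collections import defaultdict

def center_by_batch(values, batches):
    """Subtract each batch's mean from its measurements, so that batch-wide
    location shifts (reagent lots, personnel, lab conditions) no longer
    masquerade as biological signal."""
    sums, counts = defaultdict(float), defaultdict(int)
    for v, b in zip(values, batches):
        sums[b] += v
        counts[b] += 1
    means = {b: sums[b] / counts[b] for b in sums}
    return [v - means[b] for v, b in zip(values, batches)]

# Hypothetical example: batch B was processed with a lot that inflates
# every measurement by ~5 units.
vals = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
batch = ["A", "A", "A", "B", "B", "B"]
print(center_by_batch(vals, batch))  # [-1.0, 0.0, 1.0, -1.0, 0.0, 1.0]
```

Note the failure mode the abstract warns about: if batch is perfectly confounded with the outcome of interest, this adjustment removes the signal along with the artefact.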

1,768 citations


Journal ArticleDOI
TL;DR: This paper clarifies the distinction between explanatory and predictive modeling, discusses its sources, and reveals the practical implications of the distinction for each step in the modeling process.
Abstract: Statistical modeling is a powerful tool for developing and testing theories by way of causal explanation, prediction, and description. In many disciplines there is near-exclusive use of statistical modeling for causal explanation and the assumption that models with high explanatory power are inherently of high predictive power. Conflation between explanation and prediction is common, yet the distinction must be understood for progressing scientific knowledge. While this distinction has been recognized in the philosophy of science, the statistical literature lacks a thorough discussion of the many differences that arise in the process of modeling for an explanatory versus a predictive goal. The purpose of this article is to clarify the distinction between explanatory and predictive modeling, to discuss its sources, and to reveal the practical implications of the distinction to each step in the modeling process.

Journal ArticleDOI
10 Dec 2010-Science
TL;DR: Scenarios consistently indicate that biodiversity will continue to decline over the 21st century; however, the range of projected changes is much broader than most studies suggest, partly because there are major opportunities to intervene through better policies, but also because of large uncertainties in projections.
Abstract: Quantitative scenarios are coming of age as a tool for evaluating the impact of future socioeconomic development pathways on biodiversity and ecosystem services. We analyze global terrestrial, freshwater, and marine biodiversity scenarios using a range of measures including extinctions, changes in species abundance, habitat loss, and distribution shifts, as well as comparing model projections to observations. Scenarios consistently indicate that biodiversity will continue to decline over the 21st century. However, the range of projected changes is much broader than most studies suggest, partly because there are major opportunities to intervene through better policies, but also because of large uncertainties in projections.

Journal ArticleDOI
TL;DR: Prosumption involves both production and consumption rather than focusing on either one (production) or the other (consumption); the authors maintain that earlier forms of capitalism (producer and consumer capitalism) were themselves characterized by prosumption.
Abstract: This article deals with the rise of prosumer capitalism. Prosumption involves both production and consumption rather than focusing on either one (production) or the other (consumption). It is maintained that earlier forms of capitalism (producer and consumer capitalism) were themselves characterized by prosumption. Given the recent explosion of user-generated content online, we have reason to see prosumption as increasingly central. In prosumer capitalism, control and exploitation take on a different character than in the other forms of capitalism: there is a trend toward unpaid rather than paid labor and toward offering products at no cost, and the system is marked by a new abundance where scarcity once predominated. These trends suggest the possibility of a new, prosumer, capitalism.

Journal ArticleDOI
TL;DR: The heterostructure proposed is a semiconducting thin film sandwiched between an s-wave superconductor and a magnetic insulator which can be used as the platform for topological quantum computation by virtue of the existence of non-Abelian Majorana fermions.
Abstract: We show that a film of a semiconductor in which $s$-wave superconductivity and Zeeman splitting are induced by the proximity effect supports zero-energy Majorana fermion modes in the ordinary vortex excitations. Since time-reversal symmetry is explicitly broken, the edge of the film constitutes a chiral Majorana wire. The heterostructure we propose---a semiconducting thin film sandwiched between an $s$-wave superconductor and a magnetic insulator---is a generic system which can be used as the platform for topological quantum computation by virtue of the existence of non-Abelian Majorana fermions.

Journal ArticleDOI
A. A. Abdo, Markus Ackermann, Marco Ajello, and 285 more (39 institutions)
TL;DR: The first Fermi-LAT catalog (1FGL) contains 1451 sources detected and characterized in the 100 MeV to 100 GeV range; the threshold likelihood Test Statistic is 25, corresponding to a significance of just over 4 sigma.
Abstract: We present a catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), during the first 11 months of the science phase of the mission, which began on 2008 August 4. The First Fermi-LAT catalog (1FGL) contains 1451 sources detected and characterized in the 100 MeV to 100 GeV range. Source detection was based on the average flux over the 11 month period, and the threshold likelihood Test Statistic is 25, corresponding to a significance of just over 4 sigma. The 1FGL catalog includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and power-law spectral fits as well as flux measurements in five energy bands for each source. In addition, monthly light curves are provided. Using a protocol defined before launch we have tested for several populations of gamma-ray sources among the sources in the catalog. For individual LAT-detected sources we provide firm identifications or plausible associations with sources in other astronomical catalogs. Identifications are based on correlated variability with counterparts at other wavelengths, or on spin or orbital periodicity. For the catalogs and association criteria that we have selected, 630 of the sources are unassociated. Care was taken to characterize the sensitivity of the results to the model of interstellar diffuse gamma-ray emission used to model the bright foreground, with the result that 161 sources at low Galactic latitudes and toward bright local interstellar clouds are flagged as having properties that are strongly dependent on the model or as potentially being due to incorrectly modeled structure in the Galactic diffuse emission.
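The quoted "just over 4 sigma" for TS = 25 reflects the several free parameters fitted per source; a back-of-envelope sketch of the conversion, assuming the TS behaves as chi-squared with 4 degrees of freedom (only a crude approximation to the catalog's actual procedure):

```python
import math
from statistics import NormalDist

def chi2_sf_even_df(x, df):
    """Survival function of chi-squared for even df, via the exact
    closed form exp(-x/2) * sum_{k < df/2} (x/2)^k / k!."""
    half = x / 2.0
    return math.exp(-half) * sum(half**k / math.factorial(k) for k in range(df // 2))

ts = 25.0
p = chi2_sf_even_df(ts, 4)              # tail probability for TS = 25, 4 dof
sigma = NormalDist().inv_cdf(1.0 - p)   # equivalent one-sided Gaussian significance
print(round(sigma, 2))                  # about 3.9 under this crude 4-dof assumption
                                        # (the naive 1-dof value would be sqrt(25) = 5)
```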


Journal ArticleDOI
17 Jun 2010-Nature
TL;DR: It is shown that mTOR signalling in rat kidney cells is inhibited during initiation of autophagy, but reactivated by prolonged starvation, and this generates proto-lysosomal tubules and vesicles that extrude from autolysosomes and ultimately mature into functional lysosomes, thereby restoring the full complement of lysosomes in the cell.
Abstract: Autophagy is an evolutionarily conserved process by which cytoplasmic proteins and organelles are catabolized. During starvation, the protein TOR (target of rapamycin), a nutrient-responsive kinase, is inhibited, and this induces autophagy. In autophagy, double-membrane autophagosomes envelop and sequester intracellular components and then fuse with lysosomes to form autolysosomes, which degrade their contents to regenerate nutrients. Current models of autophagy terminate with the degradation of the autophagosome cargo in autolysosomes, but the regulation of autophagy in response to nutrients and the subsequent fate of the autolysosome are poorly understood. Here we show that mTOR signalling in rat kidney cells is inhibited during initiation of autophagy, but reactivated by prolonged starvation. Reactivation of mTOR is autophagy-dependent and requires the degradation of autolysosomal products. Increased mTOR activity attenuates autophagy and generates proto-lysosomal tubules and vesicles that extrude from autolysosomes and ultimately mature into functional lysosomes, thereby restoring the full complement of lysosomes in the cell, a process we identify in multiple animal species. Thus, an evolutionarily conserved cycle in autophagy governs nutrient sensing and lysosome homeostasis during starvation.

Journal ArticleDOI
TL;DR: The surprising discovery of high-temperature superconductivity in a material containing a strong magnet (iron) has led to thousands of publications; by placing the data in context, the authors clarify what is known and where the field is headed.
Abstract: The surprising discovery of high-temperature superconductivity in a material containing a strong magnet—iron—has led to thousands of publications. By placing all the data in context, it becomes clear what we know and where we are headed.

Journal ArticleDOI
J. Abadie, B. P. Abbott, R. Abbott, M. R. Abernathy, and 719 more (79 institutions)
TL;DR: In this paper, the authors present an up-to-date summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave detectors LIGO and Virgo.
Abstract: We present an up-to-date, comprehensive summary of the rates for all types of compact binary coalescence sources detectable by the initial and advanced versions of the ground-based gravitational-wave detectors LIGO and Virgo. Astrophysical estimates for compact-binary coalescence rates depend on a number of assumptions and unknown model parameters and are still uncertain. The most confident among these estimates are the rate predictions for coalescing binary neutron stars, which are based on extrapolations from observed binary pulsars in our galaxy. These yield a likely coalescence rate of 100 Myr^-1 per Milky Way Equivalent Galaxy (MWEG), although the rate could plausibly range from 1 Myr^-1 MWEG^-1 to 1000 Myr^-1 MWEG^-1 (Kalogera et al 2004 Astrophys. J. 601 L179; Kalogera et al 2004 Astrophys. J. 614 L137 (erratum)). We convert coalescence rates into detection rates based on data from the LIGO S5 and Virgo VSR2 science runs and projected sensitivities for our advanced detectors. Using the detector sensitivities derived from these data, we find a likely detection rate of 0.02 per year for Initial LIGO-Virgo interferometers, with a plausible range between 2 × 10^-4 and 0.2 per year. The likely binary neutron-star detection rate for the Advanced LIGO-Virgo network increases to 40 events per year, with a range between 0.4 and 400 per year.
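The conversion from a coalescence rate per galaxy to a detection rate amounts to multiplying by the number of Milky-Way-equivalent galaxies within the detector's reach; a rough sketch with round numbers (the ~200 Mpc advanced-detector range and the ~0.0116 MWEG per Mpc^3 galaxy density are assumptions of this sketch, not figures quoted in the abstract):

```python
import math

def detections_per_year(rate_per_myr_per_mweg, range_mpc, mweg_density_per_mpc3=0.0116):
    """Detection rate = (coalescences per MWEG per year) x (number of MWEGs
    inside the sensitive volume), treating the range as a spherical radius."""
    volume = 4.0 / 3.0 * math.pi * range_mpc**3      # sensitive volume, Mpc^3
    n_galaxies = volume * mweg_density_per_mpc3      # MWEGs within range
    return rate_per_myr_per_mweg * 1e-6 * n_galaxies  # convert Myr^-1 to yr^-1

# Likely BNS rate of 100 Myr^-1 MWEG^-1 at an assumed ~200 Mpc range:
print(round(detections_per_year(100, 200)))  # about 40 per year, consistent
                                             # with the advanced-network estimate
```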

Journal ArticleDOI
TL;DR: In this paper, the authors used the first systematic data sets of CO molecular line emission in z∼ 1 − 3 normal star-forming galaxies (SFGs) for a comparison of the dependence of galaxy-averaged star formation rates on molecular gas masses at low and high redshifts, and in different galactic environments.
Abstract: We use the first systematic data sets of CO molecular line emission in z∼ 1–3 normal star-forming galaxies (SFGs) for a comparison of the dependence of galaxy-averaged star formation rates on molecular gas masses at low and high redshifts, and in different galactic environments. Although the current high-z samples are still small and biased towards the luminous and massive tail of the actively star-forming ‘main-sequence’, a fairly clear picture is emerging. Independent of whether galaxy-integrated quantities or surface densities are considered, low- and high-z SFG populations appear to follow similar molecular gas–star formation relations with slopes 1.1 to 1.2, over three orders of magnitude in gas mass or surface density. The gas-depletion time-scale in these SFGs grows from 0.5 Gyr at z∼ 2 to 1.5 Gyr at z∼ 0. The average corresponds to a fairly low star formation efficiency of 2 per cent per dynamical time. Because star formation depletion times are significantly smaller than the Hubble time at all redshifts sampled, star formation rates and gas fractions are set by the balance between gas accretion from the halo and stellar feedback. In contrast, very luminous and ultraluminous, gas-rich major mergers at both low and high z produce on average four to 10 times more far-infrared luminosity per unit gas mass. We show that only some fraction of this difference can be explained by uncertainties in gas mass or luminosity estimators; much of it must be intrinsic. A possible explanation is a top-heavy stellar mass function in the merging systems but the most likely interpretation is that the star formation relation is driven by global dynamical effects. For a given mass, the more compact merger systems produce stars more rapidly because their gas clouds are more compressed with shorter dynamical times, so that they churn more quickly through the available gas reservoir than the typical normal disc galaxies. 
When the dependence on galactic dynamical time-scale is explicitly included, disc galaxies and mergers appear to follow similar gas-to-star formation relations. The mergers may be forming stars at slightly higher efficiencies than the discs.
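The depletion time-scale and efficiency quoted above follow from two simple ratios, t_depl = M_gas / SFR and efficiency per dynamical time = t_dyn / t_depl; a sketch with illustrative numbers (not measurements from the paper):

```python
def depletion_time_gyr(gas_mass_msun, sfr_msun_per_yr):
    """Gas-depletion time-scale t_depl = M_gas / SFR, in Gyr."""
    return gas_mass_msun / sfr_msun_per_yr / 1e9

def efficiency_per_tdyn(t_dyn_gyr, t_depl_gyr):
    """Fraction of the gas reservoir turned into stars per dynamical time."""
    return t_dyn_gyr / t_depl_gyr

# A hypothetical z~2 disc: 5e10 Msun of molecular gas forming stars at 100 Msun/yr
# gives the abstract's ~0.5 Gyr depletion time; a 10 Myr dynamical time then
# corresponds to the quoted ~2 per cent efficiency per dynamical time.
t_depl = depletion_time_gyr(5e10, 100)
print(t_depl, efficiency_per_tdyn(0.01, t_depl))  # 0.5 0.02
```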

Journal ArticleDOI
Markus Ackermann, Marco Ajello, Alice Allafort, Elisa Antolini, and 211 more (40 institutions)
TL;DR: The second catalog of active galactic nuclei (AGNs) detected by the Fermi Large Area Telescope (LAT) in two years of scientific operation is presented in this article, which includes 1017 γ-ray sources located at high Galactic latitudes (|b| > 10°) that are detected with a test statistic (TS) greater than 25 and associated statistically with AGNs.
Abstract: The second catalog of active galactic nuclei (AGNs) detected by the Fermi Large Area Telescope (LAT) in two years of scientific operation is presented. The second LAT AGN catalog (2LAC) includes 1017 γ-ray sources located at high Galactic latitudes (|b| > 10°) that are detected with a test statistic (TS) greater than 25 and associated statistically with AGNs. However, some of these are affected by analysis issues and some are associated with multiple AGNs. Consequently, we define a Clean Sample which includes 886 AGNs, comprising 395 BL Lacertae objects (BL Lac objects), 310 flat-spectrum radio quasars (FSRQs), 157 candidate blazars of unknown type (i.e., with broadband blazar characteristics but with no optical spectral measurement yet), 8 misaligned AGNs, 4 narrow-line Seyfert 1 galaxies (NLS1s), 10 AGNs of other types, and 2 starburst galaxies. Where possible, the blazars have been further classified based on their spectral energy distributions (SEDs) as archival radio, optical, and X-ray data permit. Almost all FSRQs have a synchrotron-peak frequency below 10^15 Hz. The 2LAC represents a significant improvement relative to the first LAT AGN catalog (1LAC), with 52% more associated sources. The full characterization of the newly detected sources will require more broadband data. Various properties, such as γ-ray fluxes and photon power-law spectral indices, redshifts, γ-ray luminosities, variability, and archival radio luminosities, and their correlations are presented and discussed for the different blazar classes. The general trends observed in 1LAC are confirmed.

Book ChapterDOI
01 Jan 2010
TL;DR: In this article, the authors review the conceptual basis for the TMPA, summarize the processing sequence, and focus on two new activities: real-time and post-real-time TMPA.
Abstract: The Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) is intended to provide a “best” estimate of quasi-global precipitation from the wide variety of modern satellite-borne precipitation-related sensors. Estimates are provided at relatively fine scales (0.25° × 0.25°, 3-h) in both real and post-real time to accommodate a wide range of researchers. However, the errors inherent in the finest scale estimates are large. The most successful use of the TMPA data is when the analysis takes advantage of the fine-scale data to create time/space averages appropriate to the user’s application. We review the conceptual basis for the TMPA, summarize the processing sequence, and focus on two new activities. First, a recent upgrade for the real-time version incorporates several additional satellite data sources and employs monthly climatological adjustments to approximate the bias characteristics of the research quality post-real-time product. Second, an upgrade for the research quality post-real-time TMPA from Versions 6 to 7 (in beta test at press time) is designed to provide a variety of improvements that increase the list of input data sets and correct several issues. Future enhancements for the TMPA will include improved error estimation, extension to higher latitudes, and a shift to a Lagrangian time interpolation scheme.
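The recommended use of the TMPA data, averaging the fine-scale 3-hourly 0.25° estimates up to coarser scales, is a simple block mean; a sketch over a toy grid (shapes and values here are illustrative, not the actual TMPA file layout):

```python
def block_average(grid, factor):
    """Average a 2-D list of rain rates over non-overlapping factor x factor
    blocks, e.g. factor=4 turns 0.25-degree cells into 1-degree cells.
    Assumes grid dimensions are divisible by `factor`."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for i in range(0, rows, factor):
        out.append([
            sum(grid[i + di][j + dj] for di in range(factor) for dj in range(factor))
            / factor**2
            for j in range(0, cols, factor)
        ])
    return out

rain = [[0.0, 1.0], [2.0, 3.0]]   # four hypothetical 0.25-degree cells (mm/h)
print(block_average(rain, 2))     # [[1.5]]: one 0.5-degree cell
```

Averaging in time (3-hourly to daily or monthly) works the same way, and as the abstract notes, it is these aggregates, not the raw fine-scale cells, that carry acceptable error levels.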

Journal ArticleDOI
TL;DR: Scholarship on work and family topics expanded in scope and coverage during the 2000-2010 decade, spurred by an increased diversity of workplaces and of families, by methodological innovations, and by the growth of communities of scholars focused on the work-family nexus.
Abstract: Scholarship on work and family topics expanded in scope and coverage during the 2000 – 2010 decade, spurred by an increased diversity of workplaces and of families, by methodological innovations, and by the growth of communities of scholars focused on the work-family nexus. We discuss these developments as the backdrop for emergent work-family research on six central topics: (a) gender, time, and the division of labor in the home; (b) paid work: too much or too little; (c) maternal employment and child outcomes; (d) work-family conflict; (e) work, family, stress, and health; and (f) work-family policy. We conclude with a discussion of trends important for research and suggestions about future directions in the work-family arena.

Journal ArticleDOI
26 Jan 2010-ACS Nano
TL;DR: It is found that the gold NPs strongly associate with these essential blood proteins where the binding constant, K, as well as the degree of cooperativity of particle--protein binding (Hill constant, n), depends on particle size and the native protein structure.
Abstract: In order to better understand the physical basis of the biological activity of nanoparticles (NPs) in nanomedicine applications and under conditions of environmental exposure, we performed an array of photophysical measurements to quantify the interaction of model gold NPs having a wide range of NP diameters with common blood proteins. In particular, absorbance, fluorescence quenching, circular dichroism, dynamic light scattering, and electron microscopy measurements were performed on surface-functionalized water-soluble gold NPs having a diameter range from 5 to 100 nm in the presence of common human blood proteins: albumin, fibrinogen, γ-globulin, histone, and insulin. We find that the gold NPs strongly associate with these essential blood proteins where the binding constant, K, as well as the degree of cooperativity of particle−protein binding (Hill constant, n), depends on particle size and the native protein structure. We also find tentative evidence that the model proteins undergo conformational cha...
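The binding constant K and Hill coefficient n quoted above parameterize the Hill equation for fractional occupancy of binding sites; a sketch of the standard form (the specific numbers below are illustrative, not the paper's fitted values):

```python
def hill_occupancy(conc, k_half, n):
    """Fraction of binding sites occupied at ligand concentration `conc`,
    given half-saturation constant `k_half` and Hill coefficient `n`
    (n > 1 indicates cooperative binding, n < 1 anti-cooperative)."""
    return conc**n / (k_half**n + conc**n)

# At the half-saturation concentration, occupancy is 0.5 regardless of n;
# larger n steepens the binding curve around that point.
print(hill_occupancy(1.0, 1.0, 2.0))   # 0.5
print(hill_occupancy(2.0, 1.0, 2.0))   # 0.8
```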

Journal ArticleDOI
A. A. Abdo1, A. A. Abdo2, Markus Ackermann3, Ivan Agudo4  +270 moreInstitutions (51)
Abstract: We have conducted a detailed investigation of the broadband spectral properties of the gamma-ray selected blazars of the Fermi LAT Bright AGN Sample (LBAS). By combining our accurately estimated Fermi gamma-ray spectra with Swift, radio, infrared, optical, and other hard X-ray/gamma-ray data, collected within 3 months of the LBAS data taking period, we were able to assemble high-quality and quasi-simultaneous spectral energy distributions (SEDs) for 48 LBAS blazars. The SEDs of these gamma-ray sources are similar to those of blazars discovered at other wavelengths, clearly showing, in the usual log nu-log nu F-nu representation, the typical broadband spectral signatures normally attributed to a combination of low-energy synchrotron radiation followed by inverse Compton emission of one or more components. We have used these SEDs to characterize the peak intensity of both the low- and high-energy components. The results have been used to derive empirical relationships that estimate the position of the two peaks from the broadband colors (i.e., the radio to optical, alpha(ro), and optical to X-ray, alpha(ox), spectral slopes) and from the gamma-ray spectral index. Our data show that the synchrotron peak frequency (nu(S)(peak)) is positioned between 10(12.5) and 10(14.5) Hz in broad-lined flat spectrum radio quasars (FSRQs) and between 10(13) and 10(17) Hz in featureless BL Lacertae objects. We find that the gamma-ray spectral slope is strongly correlated with the synchrotron peak energy and with the X-ray spectral index, as expected to first order in synchrotron-inverse Compton scenarios. However, simple homogeneous, one-zone, synchrotron self-Compton (SSC) models cannot explain most of our SEDs, especially in the case of FSRQs and low-energy-peaked (LBL) BL Lacs. More complex models involving external Compton radiation or multiple SSC components are required to reproduce the overall SEDs and the observed spectral variability.
While more than 50% of known radio-bright high-energy-peaked (HBL) BL Lacs are detected in the LBAS sample, less than 13% of known bright FSRQs and LBL BL Lacs are included. This suggests that the latter sources, as a class, may be much fainter gamma-ray emitters than LBAS blazars, and could in fact radiate close to the expectations of simple SSC models. We categorized all our sources according to a new physical classification scheme based on the generally accepted paradigm for Active Galactic Nuclei and on the results of this SED study. Since the LAT detector is more sensitive to flat-spectrum gamma-ray sources, the correlation between nu(S)(peak) and gamma-ray spectral index strongly favors the detection of high-energy-peaked blazars, thus explaining the Fermi overabundance of sources of this type compared to radio and EGRET samples. This selection effect is similar to that experienced in the soft X-ray band, where HBL BL Lacs are the dominant type of blazars.
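The broadband colors used above are two-point spectral slopes. A minimal sketch, assuming the usual power-law convention F_nu ∝ nu^(-alpha) between two measured points; the frequencies and the synthetic spectrum below are hypothetical, not values from the LBAS sample:

```python
import math

def two_point_slope(nu1, f1, nu2, f2):
    """Two-point spectral index alpha between (nu1, f1) and (nu2, f2),
    assuming a power law F_nu proportional to nu**(-alpha)."""
    return -math.log10(f2 / f1) / math.log10(nu2 / nu1)

# Synthetic power-law spectrum with alpha = 0.5, for illustration:
f = lambda nu: nu ** -0.5

# e.g. a radio-to-optical color between ~5 GHz and the optical band:
alpha_ro = two_point_slope(5e9, f(5e9), 6e14, f(6e14))
print(round(alpha_ro, 6))  # recovers 0.5
```

The paper's empirical peak estimators combine such alpha(ro) and alpha(ox) colors; the calibration coefficients come from the SED fits themselves and are not reproduced here.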

Journal ArticleDOI
25 Feb 2010-Nature
TL;DR: This work presents strongly supported results from likelihood, Bayesian and parsimony analyses of over 41 kilobases of aligned DNA sequence from 62 single-copy nuclear protein-coding genes from 75 arthropod species, providing a statistically well-supported phylogenetic framework for the largest animal phylum.
Abstract: The evolutionary interrelationship of arthropods (jointed-legged animals) has long been a matter of dispute. A new phylogeny based on an analysis of over 41,000 base pairs of DNA from 75 species, including representatives of every major arthropod lineage, should ease the way towards a consensus on the matter. The data support the idea that insects are land-living crustaceans, that crustaceans comprise a diverse assemblage of at least three distinct arthropod types, and that myriapods (millipedes and centipedes) are the closest relatives of this great 'pancrustacean' group.

Journal ArticleDOI
TL;DR: The authors use text-based analysis of 10-K product descriptions to examine whether firms exploit product market synergies through asset complementarities in mergers and acquisitions, and find that transactions are more likely between firms that use similar product market language.
Abstract: We use text-based analysis of 10-K product descriptions to examine whether firms exploit product market synergies through asset complementarities in mergers and acquisitions. Transactions are more likely between firms that use similar product market language. Transaction stock returns, ex post cash flows, and growth in product descriptions all increase for transactions with similar product market language, especially in competitive product markets. These gains are larger when targets are less similar to acquirer rivals and when targets have unique products. Our findings are consistent with firms merging and buying assets to exploit synergies to create new products that increase product differentiation. (JEL G14, G34, L22, L25)
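The text-based similarity underlying this analysis can be illustrated with a bag-of-words cosine similarity. This is a simplified sketch, not the authors' exact measure (their pairwise similarity is built from normalized word vectors over 10-K product-description vocabularies), and the example phrases are invented:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between word-count vectors of two documents."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Firms describing similar products score high; unrelated ones score near 0.
print(cosine_similarity("wireless network hardware", "wireless network software"))
print(cosine_similarity("wireless network hardware", "retail grocery stores"))
```

Ranking candidate merger pairs by such a score is the intuition behind "transactions are more likely between firms that use similar product market language."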

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the reach-scale response of invertebrate species richness to restoration actions that increased channel complexity and found no evidence that physical heterogeneity was the primary factor controlling stream biodiversity, particularly in a restoration context.
Abstract: Summary
1. Stream ecosystems are increasingly impacted by multiple stressors that lead to a loss of sensitive species and an overall reduction in diversity. A dominant paradigm in ecological restoration is that increasing habitat heterogeneity (HH) promotes restoration of biodiversity. This paradigm is reflected in stream restoration projects through the common practice of re-configuring channels to add meanders and adding physical structures such as boulders and artificial riffles to restore biodiversity by enhancing structural heterogeneity.
2. To evaluate the validity of this paradigm, we completed an extensive evaluation of published studies that have quantitatively examined the reach-scale response of invertebrate species richness to restoration actions that increased channel complexity/HH. We also evaluated studies that used manipulative or correlative approaches to test for a relationship between physical heterogeneity and invertebrate diversity in streams that were not in need of restoration.
3. We found habitat and macroinvertebrate data for 78 independent stream or river restoration projects described by 18 different author groups in which invertebrate taxa richness data in response to the restoration treatment were available. Most projects were successful in enhancing physical HH; however, only two showed statistically significant increases in biodiversity, rendering them more similar to reference reaches or sites.
4. Studies manipulating structural complexity in otherwise healthy streams were generally small in scale and less than half showed a significant positive relationship with invertebrate diversity. Only one-third of the studies that attempted to correlate biodiversity to existing levels of in-stream heterogeneity found a positive relationship.
5. Across all the studies we evaluated, there is no evidence that HH was the primary factor controlling stream invertebrate diversity, particularly in a restoration context. The findings indicate that physical heterogeneity should not be the driving force in selecting restoration approaches for most degraded waterways. Evidence suggests that much more must be done to restore streams impacted by multiple stressors than simply re-configuring channels and enhancing structural complexity with meanders, boulders, wood, or other structures.
6. Thematic implications: as integrators of all activities on the land, streams are sensitive to a host of stressors including impacts from urbanisation, agriculture, deforestation, invasive species, flow regulation, water extractions and mining. The impacts of these individually or in combination typically lead to a decrease in biodiversity because of reduced water quality, biologically unsuitable flow regimes, dispersal barriers, altered inputs of organic matter or sunlight, degraded habitat, etc. Despite the complexity of these stressors, a large number of stream restoration projects focus primarily on physical channel characteristics. We show that this is not a wise investment if ecological recovery is the goal. Managers should critically diagnose the stressors impacting an impaired stream and invest resources first in repairing those problems most likely to limit restoration.

Journal ArticleDOI
TL;DR: Attention Bias Modification Treatment shows promise as a novel treatment for anxiety, and the precise role for ABMT in the broader anxiety-disorder therapeutic armamentarium should be considered.

Journal ArticleDOI
TL;DR: A consensus conference on cardio-renal syndromes (CRS) was held in Venice, Italy, in September 2008 under the auspices of the Acute Dialysis Quality Initiative (ADQI).
Abstract: A consensus conference on cardio-renal syndromes (CRS) was held in Venice, Italy, in September 2008 under the auspices of the Acute Dialysis Quality Initiative (ADQI). The following topics were discussed after a systematic literature review and the appraisal of the best available evidence: definition/classification system; epidemiology; diagnostic criteria and biomarkers; prevention/protection strategies; management and therapy. The umbrella term CRS was used to identify a disorder of the heart and kidneys whereby acute or chronic dysfunction in one organ may induce acute or chronic dysfunction in the other organ. Different syndromes were identified and classified into five subtypes. Acute CRS (type 1): acute worsening of heart function (AHF–ACS) leading to kidney injury and/or dysfunction. Chronic cardio-renal syndrome (type 2): chronic abnormalities in heart function (CHF-CHD) leading to kidney injury and/or dysfunction. Acute reno-cardiac syndrome (type 3): acute worsening of kidney function (AKI) leading to heart injury and/or dysfunction. Chronic reno-cardiac syndrome (type 4): chronic kidney disease leading to heart injury, disease, and/or dysfunction. Secondary CRS (type 5): systemic conditions leading to simultaneous injury and/or dysfunction of heart and kidney. Consensus statements concerning epidemiology, diagnosis, prevention, and management strategies are discussed in the paper for each of the syndromes.