
Showing papers by "University of Arizona" published in 2005


Journal ArticleDOI
TL;DR: In this paper, the authors address conceptual difficulties and highlight areas in need of additional research in social exchange theory, focusing on four issues: the roots of the conceptual ambiguities, norms and rules of exchange, nature of the resources being exchanged, and social exchange relationships.

6,571 citations


Journal ArticleDOI
TL;DR: In this paper, a large-scale correlation function measured from a spectroscopic sample of 46,748 luminous red galaxies from the Sloan Digital Sky Survey is presented, which demonstrates the linear growth of structure by gravitational instability between z ≈ 1000 and the present and confirms a firm prediction of the standard cosmological theory.
Abstract: We present the large-scale correlation function measured from a spectroscopic sample of 46,748 luminous red galaxies from the Sloan Digital Sky Survey. The survey region covers 0.72 h⁻³ Gpc³ over 3816 square degrees and 0.16 < z < 0.47, making it the best sample yet for the study of large-scale structure. We find a well-detected peak in the correlation function at 100 h⁻¹ Mpc separation that is an excellent match to the predicted shape and location of the imprint of the recombination-epoch acoustic oscillations on the low-redshift clustering of matter. This detection demonstrates the linear growth of structure by gravitational instability between z ≈ 1000 and the present and confirms a firm prediction of the standard cosmological theory. The acoustic peak provides a standard ruler by which we can measure the ratio of the distances to z = 0.35 and z = 1089 to 4% fractional accuracy and the absolute distance to z = 0.35 to 5% accuracy. From the overall shape of the correlation function, we measure the matter density Ω_m h² to 8% and find agreement with the value from cosmic microwave background (CMB) anisotropies. Independent of the constraints provided by the CMB acoustic scale, we find Ω_m = 0.273 ± 0.025 + 0.123(1 + w₀) + 0.137 Ω_K. Including the CMB acoustic scale, we find that the spatial curvature is Ω_K = −0.010 ± 0.009 if the dark energy is a cosmological constant. More generally, our results provide a measurement of cosmological distance, and hence an argument for dark energy, based on a geometric method with the same simple physics as the microwave background anisotropies. The standard cosmological model convincingly passes these new and robust tests of its fundamental properties. Subject headings: cosmology: observations — large-scale structure of the universe — distance scale — cosmological parameters — cosmic microwave background — galaxies: elliptical and lenticular, cD
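As context for how an acoustic-scale "standard ruler" becomes a distance measurement, the spherically averaged correlation function constrains an effective dilation distance that mixes transverse and line-of-sight scales; the definition below is the standard BAO convention, stated here for orientation rather than quoted from the paper.

```latex
% Effective "dilation" distance probed by the spherically averaged
% correlation function at redshift z; D_M is the comoving angular-diameter
% distance and H(z) the Hubble parameter (standard convention).
D_V(z) \equiv \left[ D_M^{2}(z)\,\frac{c\,z}{H(z)} \right]^{1/3}
```

Comparing the acoustic scale measured in the galaxy sample at z = 0.35 with the scale imprinted in the CMB at z = 1089 is what yields the distance ratio quoted in the abstract.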

4,428 citations


Journal ArticleDOI
Takashi Matsumoto, Jianzhong Wu, Hiroyuki Kanamori, Yuichi Katayose, +262 more (25 institutions)
11 Aug 2005-Nature
TL;DR: A map-based, finished quality sequence that covers 95% of the 389 Mb rice genome, including virtually all of the euchromatin and two complete centromeres, and finds evidence for widespread and recurrent gene transfer from the organelles to the nuclear chromosomes.
Abstract: Rice, one of the world's most important food plants, has important syntenic relationships with the other cereal species and is a model plant for the grasses. Here we present a map-based, finished quality sequence that covers 95% of the 389 Mb genome, including virtually all of the euchromatin and two complete centromeres. A total of 37,544 non-transposable-element-related protein-coding genes were identified, of which 71% had a putative homologue in Arabidopsis. In a reciprocal analysis, 90% of the Arabidopsis proteins had a putative homologue in the predicted rice proteome. Twenty-nine per cent of the 37,544 predicted genes appear in clustered gene families. The number and classes of transposable elements found in the rice genome are consistent with the expansion of syntenic regions in the maize and sorghum genomes. We find evidence for widespread and recurrent gene transfer from the organelles to the nuclear chromosomes. The map-based sequence has proven useful for the identification of genes underlying agronomic traits. The additional single-nucleotide polymorphisms and simple sequence repeats identified in our study should accelerate improvements in rice production.

3,423 citations


Journal ArticleDOI
TL;DR: Understanding how BBB TJ might be affected by various factors holds significant promise for the prevention and treatment of neurological diseases.
Abstract: The blood-brain barrier (BBB) is the regulated interface between the peripheral circulation and the central nervous system (CNS). Although originally observed by Paul Ehrlich in 1885, the nature of the BBB was debated well into the 20th century. The anatomical substrate of the BBB is the cerebral microvascular endothelium, which, together with astrocytes, pericytes, neurons, and the extracellular matrix, constitute a "neurovascular unit" that is essential for the health and function of the CNS. Tight junctions (TJ) between endothelial cells of the BBB restrict paracellular diffusion of water-soluble substances from blood to brain. The TJ is an intricate complex of transmembrane (junctional adhesion molecule-1, occludin, and claudins) and cytoplasmic (zonula occludens-1 and -2, cingulin, AF-6, and 7H6) proteins linked to the actin cytoskeleton. The expression and subcellular localization of TJ proteins are modulated by several intrinsic signaling pathways, including those involving calcium, phosphorylation, and G-proteins. Disruption of BBB TJ by disease or drugs can lead to impaired BBB function and thus compromise the CNS. Therefore, understanding how BBB TJ might be affected by various factors holds significant promise for the prevention and treatment of neurological diseases.

2,374 citations


Journal ArticleDOI
TL;DR: Pharmacologically 'bribing' the essential guard duty of the chaperone HSP90 (heat-shock protein of 90 kDa) seems to offer a unique anticancer strategy of considerable promise.
Abstract: Standing watch over the proteome, molecular chaperones are an ancient and evolutionarily conserved class of proteins that guide the normal folding, intracellular disposition and proteolytic turnover of many of the key regulators of cell growth, differentiation and survival. This essential guardian function is subverted during oncogenesis to allow malignant transformation and to facilitate rapid somatic evolution. Pharmacologically 'bribing' the essential guard duty of the chaperone HSP90 (heat-shock protein of 90 kDa) seems to offer a unique anticancer strategy of considerable promise.

2,273 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the decline of the cosmic star formation rate at z < 1 is driven largely by geometry, as the typical cross section of filaments begins to exceed that of the galaxies at their intersections.
Abstract: Not the way one might have thought. In hydrodynamic simulations of galaxy formation, some gas follows the traditionally envisioned route, shock heating to the halo virial temperature before cooling to the much lower temperature of the neutral ISM. But most gas enters galaxies without ever heating close to the virial temperature, gaining thermal energy from weak shocks and adiabatic compression, and radiating it just as quickly. This “cold mode” accretion is channeled along filaments, while the conventional, “hot mode” accretion is quasi-spherical. Cold mode accretion dominates high redshift growth by a substantial factor, while at z < 1 the overall accretion rate declines and hot mode accretion has greater relative importance. The decline of the cosmic star formation rate at low z is driven largely by geometry, as the typical cross section of filaments begins to exceed that of the galaxies at their intersections.
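For reference, the "virial temperature" threshold invoked above is conventionally tied to the halo circular velocity V_c; the expression below is a generic textbook definition, not a quantity taken from these simulations.

```latex
% Halo virial temperature for circular velocity V_c, mean molecular weight mu,
% proton mass m_p, and Boltzmann constant k_B (generic definition, for context).
T_{\mathrm{vir}} \simeq \frac{\mu\,m_p\,V_c^{2}}{2\,k_B}
```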

2,155 citations


Journal ArticleDOI
TL;DR: The results quantify a trigger leading to rapid, drought-induced die-off of overstory woody plants at subcontinental scale and highlight the potential for such die-offs to be more severe and extensive for future global-change-type drought under warmer conditions.
Abstract: Future drought is projected to occur under warmer temperature conditions as climate change progresses, referred to here as global-change-type drought, yet quantitative assessments of the triggers and potential extent of drought-induced vegetation die-off remain pivotal uncertainties in assessing climate-change impacts. Of particular concern is regional-scale mortality of overstory trees, which rapidly alters ecosystem type, associated ecosystem properties, and land surface conditions for decades. Here, we quantify regional-scale vegetation die-off across southwestern North American woodlands in 2002-2003 in response to drought and associated bark beetle infestations. At an intensively studied site within the region, we quantified that after 15 months of depleted soil water content, >90% of the dominant, overstory tree species (Pinus edulis, a pinon) died. The die-off was reflected in changes in a remotely sensed index of vegetation greenness (Normalized Difference Vegetation Index), not only at the intensively studied site but also across the region, extending over 12,000 km2 or more; aerial and field surveys confirmed the general extent of the die-off. Notably, the recent drought was warmer than the previous subcontinental drought of the 1950s. The limited, available observations suggest that die-off from the recent drought was more extensive than that from the previous drought, extending into wetter sites within the tree species' distribution. Our results quantify a trigger leading to rapid, drought-induced die-off of overstory woody plants at subcontinental scale and highlight the potential for such die-off to be more severe and extensive for future global-change-type drought under warmer conditions.

1,992 citations


Journal ArticleDOI
TL;DR: NVS, the Newest Vital Sign, is suitable for use as a quick screening test for limited literacy in primary health care settings and correlates with the Test of Functional Health Literacy in Adults.
Abstract: PURPOSE Current health literacy screening instruments for health care settings are either too long for routine use or available only in English. Our objective was to develop a quick and accurate screening test for limited literacy available in English and Spanish. METHODS We administered candidate items for the new instrument and also the Test of Functional Health Literacy in Adults (TOFHLA) to English-speaking and Spanish-speaking primary care patients. We measured internal consistency with Cronbach's α and assessed criterion validity by measuring correlations with TOFHLA scores. Using TOFHLA scores as the reference standard, we plotted receiver operating characteristic (ROC) curves. RESULTS The new instrument, the Newest Vital Sign (NVS), is reliable (Cronbach α: 0.76 in English and 0.69 in Spanish) and correlates with the TOFHLA. Area under the ROC curve is 0.88 for English and 0.72 for Spanish versions. Patients with more than 4 correct responses are unlikely to have low literacy, whereas fewer than 4 correct answers indicate the possibility of limited literacy. CONCLUSION The NVS is suitable for use as a quick screening test for limited literacy in primary health care settings.

1,941 citations


Journal ArticleDOI
TL;DR: Theoretical perspectives generate a novel hypothesis: that there is a curvilinear, U-shaped relation between early exposures to adversity and the development of stress-reactive profiles, with high reactivity phenotypes disproportionately emerging within both highly stressful and highly protected early social environments.
Abstract: Biological reactivity to psychological stressors comprises a complex, integrated, and highly conserved repertoire of central neural and peripheral neuroendocrine responses designed to prepare the organism for challenge or threat. Developmental experience plays a role, along with heritable, polygenic variation, in calibrating the response dynamics of these systems, with early adversity biasing their combined effects toward a profile of heightened or prolonged reactivity. Conventional views of such high reactivity suggest that it is an atavistic and pathogenic legacy of an evolutionary past in which threats to survival were more prevalent and severe. Recent evidence, however, indicates that (a) stress reactivity is not a unitary process, but rather incorporates counterregulatory circuits serving to modify or temper physiological arousal, and (b) the effects of high reactivity phenotypes on psychiatric and biomedical outcomes are bivalent, rather than univalent, in character, exerting both risk-augmenting and risk-protective effects in a context-dependent manner. These observations suggest that heightened stress reactivity may reflect, not simply exaggerated arousal under challenge, but rather an increased biological sensitivity to context, with potential for negative health effects under conditions of adversity and positive effects under conditions of support and protection. From an evolutionary perspective, the developmental plasticity of the stress response systems, along with their structured, context-dependent effects, suggests that these systems may constitute conditional adaptations: evolved psychobiological mechanisms that monitor specific features of childhood environments as a basis for calibrating the development of stress response systems to adaptively match those environments. Taken together, these theoretical perspectives generate a novel hypothesis: that there is a curvilinear, U-shaped relation between early exposures to adversity and the development of stress-reactive profiles, with high reactivity phenotypes disproportionately emerging within both highly stressful and highly protected early social environments.

1,725 citations


Journal ArticleDOI
TL;DR: In this paper, a double-blind placebo-controlled phase II study was done to assess the efficacy of a prophylactic quadrivalent vaccine targeting the human papillomavirus (HPV) types associated with 70% of cervical cancers (types 16 and 18) and with 90% of genital warts (types 6 and 11).
Abstract: Summary Background A randomised double-blind placebo-controlled phase II study was done to assess the efficacy of a prophylactic quadrivalent vaccine targeting the human papillomavirus (HPV) types associated with 70% of cervical cancers (types 16 and 18) and with 90% of genital warts (types 6 and 11). Methods 277 young women (mean age 20·2 years [SD 1·7]) were randomly assigned to quadrivalent HPV (20 μg type 6, 40 μg type 11, 40 μg type 16, and 20 μg type 18) L1 virus-like-particle (VLP) vaccine and 275 (mean age 20·0 years [1·7]) to one of two placebo preparations at day 1, month 2, and month 6. For 36 months, participants underwent regular gynaecological examinations, cervicovaginal sampling for HPV DNA, testing for serum antibodies to HPV, and Pap testing. The primary endpoint was the combined incidence of infection with HPV 6, 11, 16, or 18, or cervical or external genital disease (ie, persistent HPV infection, HPV detection at the last recorded visit, cervical intraepithelial neoplasia, cervical cancer, or external genital lesions caused by the HPV types in the vaccine). Main analyses were done per protocol. Findings Combined incidence of persistent infection or disease with HPV 6, 11, 16, or 18 fell by 90% (95% CI 71–97) in the vaccine group compared with the placebo group. Interpretation A vaccine targeting HPV types 6, 11, 16, 18 could substantially reduce the acquisition of infection and clinical disease caused by common HPV types. Published online April 7, 2005 DOI 10.1016/S1470-2045(05)70101-7

1,627 citations


Journal ArticleDOI
TL;DR: It is shown that up to 80% energy savings is achievable over nonoptimized systems and that the benefit of coding varies with the transmission distance and the underlying modulation schemes.
Abstract: Wireless systems where the nodes operate on batteries so that energy consumption must be minimized while satisfying given throughput and delay requirements are considered. In this context, the best modulation strategy to minimize the total energy consumption required to send a given number of bits is analyzed. The total energy consumption includes both the transmission energy and the circuit energy consumption. For uncoded systems, by optimizing the transmission time and the modulation parameters, it is shown that up to 80% energy savings is achievable over nonoptimized systems. For coded systems, it is shown that the benefit of coding varies with the transmission distance and the underlying modulation schemes.
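To make the trade-off described above concrete, the sketch below numerically minimizes total energy (transmission plus circuit) over the M-QAM constellation size for a fixed number of bits. The energy model, BER approximation, constants, and helper names are illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch (not the paper's exact model): total energy to send a
# fixed number of bits over an AWGN link, trading transmission energy against
# circuit energy by choosing the M-QAM constellation size (equivalently, the
# on-time). All constants are assumed placeholder values.
import numpy as np

L_BITS = 1e4          # bits to deliver
BANDWIDTH = 10e3      # Hz (symbol rate assumed equal to bandwidth)
P_CIRCUIT = 50e-3     # W, combined TX/RX circuit power (assumed)
N0 = 4e-21            # W/Hz, noise power spectral density (assumed)
D = 30.0              # m, link distance
PATH_LOSS_EXP = 3.5   # assumed path-loss exponent
G0 = 1e-4             # reference path gain at 1 m (assumed)
TARGET_BER = 1e-5

def required_snr(bits_per_symbol, ber=TARGET_BER):
    """Approximate SNR needed by square M-QAM at the target BER
    (a standard Q-function bound inverted with Q(x) ~ exp(-x^2/2))."""
    M = 2 ** bits_per_symbol
    return (2.0 / 3.0) * (M - 1) * np.log(4.0 / (bits_per_symbol * ber))

def total_energy(bits_per_symbol):
    """Transmission energy + circuit energy for delivering L_BITS."""
    t_on = L_BITS / (bits_per_symbol * BANDWIDTH)
    path_gain = G0 / (D ** PATH_LOSS_EXP)
    p_tx = required_snr(bits_per_symbol) * N0 * BANDWIDTH / path_gain
    return (p_tx + P_CIRCUIT) * t_on

best_b = min(range(1, 11), key=total_energy)
print(f"best bits/symbol: {best_b}, energy: {total_energy(best_b):.3e} J")
```

The qualitative point mirrors the abstract: at short distances the circuit term dominates, favoring larger constellations and shorter on-times, while at long distances the transmission term pushes the optimum back toward small constellations.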

Journal ArticleDOI
21 Apr 2005-Nature
TL;DR: The draft sequence of the M. grisea genome is reported; analysis of the gene set provides an insight into the adaptations required by a fungus to cause disease, and the genome shows invasion and proliferation of active transposable elements, reflecting the clonal nature of this fungus imposed by widespread rice cultivation.
Abstract: Magnaporthe grisea is the most destructive pathogen of rice worldwide and the principal model organism for elucidating the molecular basis of fungal disease of plants. Here, we report the draft sequence of the M. grisea genome. Analysis of the gene set provides an insight into the adaptations required by a fungus to cause disease. The genome encodes a large and diverse set of secreted proteins, including those defined by unusual carbohydrate-binding domains. This fungus also possesses an expanded family of G-protein-coupled receptors, several new virulence-associated genes and large suites of enzymes involved in secondary metabolism. Consistent with a role in fungal pathogenesis, the expression of several of these genes is upregulated during the early stages of infection-related development. The M. grisea genome has been subject to invasion and proliferation of active transposable elements, reflecting the clonal nature of this fungus imposed by widespread rice cultivation.

Journal ArticleDOI
TL;DR: The concept of scale in human geography has been profoundly transformed over the past 20 years, yet despite the insights that both empirical and theoretical research on scale have generated, there is today no consensus on what is meant by the term or how it should be operationalized.
Abstract: The concept of scale in human geography has been profoundly transformed over the past 20 years. And yet, despite the insights that both empirical and theoretical research on scale have generated, there is today no consensus on what is meant by the term or how it should be operationalized. In this paper we critique the dominant – hierarchical – conception of scale, arguing it presents a number of problems that cannot be overcome simply by adding on to or integrating with network theorizing. We thereby propose to eliminate scale as a concept in human geography. In its place we offer a different ontology, one that so flattens scale as to render the concept unnecessary. We conclude by addressing some of the political implications of a human geography without scale.

Journal ArticleDOI
TL;DR: The New York University Value-Added Galaxy Catalog (NYU-VAGC) as mentioned in this paper is a catalog of local galaxies (mostly below z ≈ 0.3) based on a set of publicly released surveys matched to the SDSS Data Release 2.
Abstract: Here we present the New York University Value-Added Galaxy Catalog (NYU-VAGC), a catalog of local galaxies (mostly below z ≈ 0.3) based on a set of publicly released surveys matched to the Sloan Digital Sky Survey (SDSS) Data Release 2. The photometric catalog consists of 693,319 galaxies, QSOs, and stars; 343,568 of these have redshift determinations, mostly from the SDSS. Excluding areas masked by bright stars, the photometric sample covers 3514 deg2, and the spectroscopic sample covers 2627 deg2 (with about 85% completeness). Earlier, proprietary versions of this catalog have formed the basis of many SDSS investigations of the power spectrum, correlation function, and luminosity function of galaxies. Future releases will follow future public releases of the SDSS. The catalog includes matches to the Two Micron All Sky Survey Point Source Catalog and Extended Source Catalog, the IRAS Point Source Catalog Redshift Survey, the Two-Degree Field Galaxy Redshift Survey, the Third Reference Catalogue of Bright Galaxies, and the Faint Images of the Radio Sky at Twenty cm survey. We calculate and compile derived quantities from the images and spectra of the galaxies in the catalogs (for example, K-corrections and structural parameters for the galaxies). The SDSS catalog presented here is photometrically calibrated in a more consistent way than that distributed by the SDSS Data Release 2 Archive Servers and is thus more appropriate for large-scale structure statistics, reducing systematic calibration errors across the sky from ~2% to ~1%. We include an explicit description of the geometry of the catalog, including all imaging and targeting information as a function of sky position. Finally, we have performed eyeball quality checks on a large number of objects in the catalog in order to flag errors (such as errors in deblending). This catalog is complementary to the SDSS Archive Servers in that NYU-VAGC's calibration, geometric description, and conveniently small size are specifically designed for studying galaxy properties and large-scale structure statistics using the SDSS spectroscopic catalog.

Journal ArticleDOI
TL;DR: The first quantitative test of the recently developed model of adoption of technology in households (MATH) showed that the integrated model, including MATH constructs and life cycle characteristics, explained 74 percent of the variance in intention to adopt a PC for home use.
Abstract: Individual adoption of technology has been studied extensively in the workplace. Far less attention has been paid to adoption of technology in the household. In this paper, we performed the first quantitative test of the recently developed model of adoption of technology in households (MATH). Further, we proposed and tested a theoretical extension of MATH by arguing that key demographic characteristics that vary across different life cycle stages would play moderating roles. Survey responses were collected from 746 U.S. households that had not yet adopted a personal computer. The results showed that the integrated model, including MATH constructs and life cycle characteristics, explained 74 percent of the variance in intention to adopt a PC for home use, a significant increase over baseline MATH that explained 50 percent of the variance. Finally, we compared the importance of various factors across household life cycle stages and gained a more refined understanding of the moderating role of household life cycle stage.
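The moderation analysis described above amounts to comparing a baseline regression of adoption intention on the model's belief constructs with one that adds interactions with household life cycle stage, and examining the gain in explained variance. The sketch below illustrates that comparison; the column names, file, and data are hypothetical, not the study's instrument.

```python
# Illustrative moderation test: baseline model (beliefs -> intention) versus a
# model adding life-cycle-stage interactions; compare explained variance.
# Column names and the survey file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("household_survey.csv")  # hypothetical survey data

base = smf.ols("intention ~ utilitarian + hedonic + social + barriers", data=df).fit()
moderated = smf.ols(
    "intention ~ (utilitarian + hedonic + social + barriers) * C(life_cycle_stage)",
    data=df,
).fit()

print(f"baseline R^2:  {base.rsquared:.2f}")
print(f"moderated R^2: {moderated.rsquared:.2f}")
```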

Journal ArticleDOI
TL;DR: The D1 model best predicts the values for observed health states using the time trade-off method, and represents a significant enhancement of the EQ-5D's utility for health status assessment and economic analysis in the US.
Abstract: Purpose: The EQ-5D is a brief, multiattribute, preference-based health status measure. This article describes the development of a statistical model for generating US population-based EQ-5D preference weights. Methods: A multistage probability sample was selected from the US adult civilian noninstitutional population. Respondents valued 13 of 243 EQ-5D health states using the time trade-off (TTO) method. Data for 12 states were used in econometric modeling. The TTO valuations were linearly transformed to lie on the interval [−1, 1]. Methods were investigated to account for interaction effects caused by having problems in multiple EQ-5D dimensions. Several alternative model specifications (e.g., pooled least squares, random effects) also were considered. A modified split-sample approach was used to evaluate the predictive accuracy of the models. All statistical analyses took into account the clustering and disproportionate selection probabilities inherent in our sampling design. Results: Our D1 model for the EQ-5D included ordinal terms to capture the effect of departures from perfect health as well as interaction effects. A random effects specification of the D1 model yielded a good fit for the observed TTO data, with an overall R² of 0.38, a mean absolute error of 0.025, and 7 prediction errors exceeding 0.05 in absolute magnitude. Conclusions: The D1 model best predicts the values for observed health states. The resulting preference weight estimates represent a significant enhancement of the EQ-5D's utility for health status assessment and economic analysis in the US.
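The modeling idea, stripped to essentials, is a regression of transformed TTO valuations on indicators for departures from full health in each EQ-5D dimension, with a respondent-level random effect. The sketch below shows that skeleton only; the variable names, data file, and the crude stand-in for the interaction/ordinal terms are assumptions, not the published D1 specification.

```python
# Minimal sketch of the valuation-model idea: mixed-effects regression of TTO
# disutility on dimension indicators, random intercept per respondent.
# Column names and the data file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

tto = pd.read_csv("tto_valuations.csv")  # one row per respondent x health state

model = smf.mixedlm(
    "one_minus_tto ~ mobility + self_care + usual_activities + pain + anxiety"
    " + n_dims_at_level3",           # crude stand-in for interaction/ordinal terms
    data=tto,
    groups=tto["respondent_id"],     # random intercept per respondent
).fit()
print(model.summary())
```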

Journal ArticleDOI
04 Aug 2005-Nature
TL;DR: It is demonstrated that NK cells acquire functional competence through ‘licensing’ by self-MHC molecules, which results in two types of self-tolerant NK cells—licensed or unlicensed—and may provide new insights for exploiting NK cells in immunotherapy.
Abstract: Self versus non-self discrimination is a central theme in biology from plants to vertebrates, and is particularly relevant for lymphocytes that express receptors capable of recognizing self-tissues and foreign invaders. Comprising the third largest lymphocyte population, natural killer (NK) cells recognize and kill cellular targets and produce pro-inflammatory cytokines. These potentially self-destructive effector functions can be controlled by inhibitory receptors for the polymorphic major histocompatibility complex (MHC) class I molecules that are ubiquitously expressed on target cells. However, inhibitory receptors are not uniformly expressed on NK cells, and are germline-encoded by a set of polymorphic genes that segregate independently from MHC genes. Therefore, how NK-cell self-tolerance arises in vivo is poorly understood. Here we demonstrate that NK cells acquire functional competence through 'licensing' by self-MHC molecules. Licensing involves a positive role for MHC-specific inhibitory receptors and requires the cytoplasmic inhibitory motif originally identified in effector responses. This process results in two types of self-tolerant NK cells--licensed or unlicensed--and may provide new insights for exploiting NK cells in immunotherapy. This self-tolerance mechanism may be more broadly applicable within the vertebrate immune system because related germline-encoded inhibitory receptors are widely expressed on other immune cells.

Journal ArticleDOI
22 Dec 2005-Neuron
TL;DR: It is demonstrated that Aβ levels in the brain interstitial fluid are dynamically and directly influenced by synaptic activity on a timescale of minutes to hours, and it is suggested that synaptic activity may modulate a neurodegenerative disease process, in this case by influencing Aβ metabolism and ultimately region-specific Aβ deposition.

Journal ArticleDOI
TL;DR: In this paper, the authors derived stellar metallicities, light-weighted ages and stellar masses for a magnitude-limited sample of 175 128 galaxies drawn from the Sloan Digital Sky Survey Data Release Two (SDSS DR2).
Abstract: We derive stellar metallicities, light-weighted ages and stellar masses for a magnitude-limited sample of 175 128 galaxies drawn from the Sloan Digital Sky Survey Data Release Two (SDSS DR2). We compute the median-likelihood estimates of these parameters using a large library of model spectra at medium-high resolution, covering a comprehensive range of star formation histories. The constraints we derive are set by the simultaneous fit of five spectral absorption features, which are well reproduced by our population synthesis models. By design, these constraints depend only weakly on the α/Fe element abundance ratio. Our sample includes galaxies of all types spanning the full range in star formation activity, from dormant early-type to actively star-forming galaxies. By analysing a subsample of 44 254 high-quality spectra, we show that, in the mean, galaxies follow a sequence of increasing stellar metallicity, age and stellar mass at increasing 4000-Å break strength. For galaxies of intermediate mass, stronger Balmer absorption at fixed 4000-Å break strength is associated with higher metallicity and younger age. We investigate how stellar metallicity and age depend on total galaxy stellar mass. Low-mass galaxies are typically young and metal-poor, massive galaxies old and metal-rich, with a rapid transition between these regimes over the stellar mass range 3 × 10⁹ M⊙ ≲ M* ≲ 3 × 10¹⁰ M⊙. Both high- and low-concentration galaxies follow these relations, but there is a large dispersion in stellar metallicity at fixed stellar mass, especially for low-concentration galaxies of intermediate mass. Despite the large scatter, the relation between stellar metallicity and stellar mass is similar to the correlation between gas-phase oxygen abundance and stellar mass for star-forming galaxies. This is confirmed by the good correlation between stellar metallicity and gas-phase oxygen abundance for galaxies with both measures. The substantial range in stellar metallicity at fixed gas-phase oxygen abundance suggests that gas ejection and/or accretion are important factors in galactic chemical evolution. Keywords: galaxies: evolution – galaxies: formation – galaxies: stellar content.
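The "median-likelihood estimate" mentioned above boils down to comparing observed absorption indices against a large model library, weighting each model by its likelihood, and reporting the likelihood-weighted median of the parameter of interest. The sketch below shows that mechanic with placeholder arrays; it is a generic illustration, not the authors' pipeline.

```python
# Sketch of a median-likelihood estimate over a model library: chi-square each
# library model against the observed indices, weight by likelihood, and take
# the weighted median of a physical parameter (e.g. stellar metallicity).
import numpy as np

def median_likelihood_estimate(obs, obs_err, model_indices, model_param):
    """obs, obs_err: (n_index,) observed indices and their errors.
    model_indices: (n_models, n_index) library predictions.
    model_param:   (n_models,) parameter value of each library model."""
    chi2 = np.sum(((model_indices - obs) / obs_err) ** 2, axis=1)
    weights = np.exp(-0.5 * (chi2 - chi2.min()))      # likelihood up to a constant
    order = np.argsort(model_param)
    cdf = np.cumsum(weights[order]) / weights.sum()   # weighted CDF over the parameter
    return np.interp(0.5, cdf, model_param[order])    # weighted median

# toy usage with random placeholder data
rng = np.random.default_rng(0)
lib = rng.normal(size=(1000, 5))                      # fake index predictions
param = rng.uniform(-2, 0.5, size=1000)               # e.g. log Z/Z_sun
print(median_likelihood_estimate(lib[0], np.full(5, 0.1), lib, param))
```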

Journal ArticleDOI
20 May 2005-Science
TL;DR: Tsunami and geodetic observations indicate that additional slow slip occurred in the north over a time scale of 50 minutes or longer, and fault slip of up to 15 meters occurred near Banda Aceh, Sumatra, but to the north, along the Nicobar and Andaman Islands, rapid slip was much smaller.
Abstract: The two largest earthquakes of the past 40 years ruptured a 1600-kilometer-long portion of the fault boundary between the Indo-Australian and southeastern Eurasian plates on 26 December 2004 [seismic moment magnitude (Mw) = 9.1 to 9.3] and 28 March 2005 (Mw = 8.6). The first event generated a tsunami that caused more than 283,000 deaths. Fault slip of up to 15 meters occurred near Banda Aceh, Sumatra, but to the north, along the Nicobar and Andaman Islands, rapid slip was much smaller. Tsunami and geodetic observations indicate that additional slow slip occurred in the north over a time scale of 50 minutes or longer.
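For readers unfamiliar with the Mw values quoted above, moment magnitude is defined from the seismic moment M₀ by the standard Hanks–Kanamori relation; this is a general definition included for context, not a result of this study.

```latex
% Moment magnitude in terms of seismic moment M_0 expressed in N.m
% (standard Hanks & Kanamori definition).
M_w = \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right)
```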

Journal ArticleDOI
TL;DR: In this article, the Spitzer Space Telescope mid-infrared colors, derived from the IRAC Shallow Survey, of nearly 10,000 spectroscopically identified sources from the AGN and Galaxy Evolution Survey, were used to identify active galaxies.
Abstract: Mid-infrared photometry provides a robust technique for identifying active galaxies. While the ultraviolet to mid-infrared (λ ≲ 5 μm) continuum of stellar populations is dominated by the composite blackbody curve and peaks at approximately 1.6 μm, the ultraviolet to mid-infrared continuum of active galactic nuclei (AGNs) is dominated by a power law. Consequently, with a sufficient wavelength baseline, one can easily distinguish AGNs from stellar populations. Mirroring the tendency of AGNs to be bluer than galaxies in the ultraviolet, where galaxies (and stars) sample the blue, rising portion of stellar spectra, AGNs tend to be redder than galaxies in the mid-infrared, where galaxies sample the red, falling portion of the stellar spectra. We report on Spitzer Space Telescope mid-infrared colors, derived from the IRAC Shallow Survey, of nearly 10,000 spectroscopically identified sources from the AGN and Galaxy Evolution Survey. On the basis of this spectroscopic sample, we find that simple mid-infrared color criteria provide remarkably robust separation of active galaxies from normal galaxies and Galactic stars, with over 80% completeness and less than 20% contamination. Considering only broad-lined AGNs, these mid-infrared color criteria identify over 90% of spectroscopically identified quasars and Seyfert 1 galaxies. Applying these color criteria to the full imaging data set, we discuss the implied surface density of AGNs and find evidence for a large population of optically obscured active galaxies.
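In practice, "simple mid-infrared color criteria" means a cut in IRAC color-color space. The sketch below shows the mechanics of such a wedge-style selection; the numerical boundaries are assumptions chosen for illustration and should not be read as the paper's published cuts.

```python
# Illustrative sketch of mid-infrared color selection of AGN candidates from
# IRAC photometry. The wedge boundaries below are assumed placeholder values
# demonstrating the mechanics of a color-color cut, not the published criteria.
def agn_candidate(m36, m45, m58, m80):
    """Vega magnitudes in the four IRAC bands (3.6, 4.5, 5.8, 8.0 microns)."""
    c1 = m58 - m80   # [5.8] - [8.0] color
    c2 = m36 - m45   # [3.6] - [4.5] color
    # Placeholder wedge: red in both colors, bounded by two sloping lines.
    return (c1 > 0.6) and (c2 > 0.2 * c1 + 0.18) and (c2 > 2.5 * c1 - 3.5)

# toy usage
print(agn_candidate(m36=15.1, m45=14.6, m58=13.9, m80=13.0))
```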

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed a sample of ~2600 Spitzer MIPS 24 μm sources and located in the Chandra Deep Field-South to characterize the evolution of the comoving infrared (IR) energy density of the universe up to z ~ 1.
Abstract: We analyze a sample of ~2600 Spitzer MIPS 24 μm sources brighter than ~80 μJy and located in the Chandra Deep Field-South to characterize the evolution of the comoving infrared (IR) energy density of the universe up to z ~ 1. Using published ancillary optical data, we first obtain a nearly complete redshift determination for the 24 μm objects associated with R ≲ 24 mag counterparts at z ≲ 1. These sources represent ~55%-60% of the total MIPS 24 μm population with f_24μm ≳ 80 μJy, the rest of the sample likely lying at higher redshifts. We then determine an estimate of their total IR luminosities using various libraries of IR spectral energy distributions. We find that the 24 μm population at 0.5 ≲ z ≲ 1 is dominated by "luminous infrared galaxies" (i.e., 10¹¹ L☉ ≤ L_IR ≤ 10¹² L☉), the counterparts of which appear to be also luminous at optical wavelengths and tend to be more massive than the majority of optically selected galaxies. A significant number of fainter sources (5 × 10¹⁰ L☉ ≲ L_IR ≤ 10¹¹ L☉) are also detected at similar distances. We finally derive 15 μm and total IR luminosity functions (LFs) up to z ~ 1. In agreement with the previous results from the Infrared Space Observatory (ISO) and SCUBA and as expected from the MIPS source number counts, we find very strong evolution of the contribution of the IR-selected population with look-back time. Pure evolution in density is firmly excluded by the data, but we find considerable degeneracy between strict evolution in luminosity and a combination of increases in both density and luminosity [L ∝ (1 + z)^…, density ∝ (1 + z)^…]. A significant steepening of the faint-end slope of the IR luminosity function is also unlikely, as it would overproduce the faint 24 μm source number counts. Our results imply that the comoving IR energy density of the universe evolves as (1 + z)^(3.9±0.4) up to z ~ 1 and that galaxies luminous in the infrared (i.e., L_IR ≥ 10¹¹ L☉) are responsible for 70% ± 15% of this energy density at z ~ 1. Taking into account the contribution of the UV luminosity evolving as (1 + z)^~2.5, we infer that these IR-luminous sources dominate the star-forming activity beyond z ~ 0.7. The uncertainties affecting these conclusions are largely dominated by the errors in the k-corrections used to convert 24 μm fluxes into luminosities.
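Since the abstract stresses that its error budget is dominated by the k-corrections used to turn 24 μm fluxes into luminosities, it may help to see that conversion schematically. The relation below is a generic template-based estimate, written here for context; the bracketed SED factor depends on the assumed library and is not a quantity quoted by the paper.

```latex
% Schematic conversion from observed 24 um flux density to total IR luminosity:
% d_L is the luminosity distance, nu_obs the observed frequency, and the
% bracketed factor is the SED-template-dependent bolometric/k-correction
% evaluated at the rest wavelength 24 um / (1 + z).
L_{\mathrm{IR}} \simeq 4\pi d_L^{2}(z)\; \nu_{\mathrm{obs}} S_{\nu}(24\,\mu\mathrm{m})
\left[\frac{L_{\mathrm{IR}}}{\nu L_{\nu}\!\left(24\,\mu\mathrm{m}/(1+z)\right)}\right]_{\mathrm{SED}}
```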

Journal ArticleDOI
TL;DR: In this article, the authors investigated the relationship between the characteristic broad-line region size (R_BLR) and the Balmer emission-line, X-ray, UV, and optical continuum luminosities.
Abstract: We reinvestigate the relationship between the characteristic broad-line region size (R_BLR) and the Balmer emission-line, X-ray, UV, and optical continuum luminosities. Our study makes use of the best available determinations of R_BLR for a large number of active galactic nuclei (AGNs) from Peterson et al. Using their determinations of R_BLR for a large sample of AGNs and two different regression methods, we investigate the robustness of our correlation results as a function of data subsample and regression technique. Although small systematic differences were found depending on the method of analysis, our results are generally consistent. Assuming a power-law relation R_BLR ∝ L^α, we find that the mean best-fitting α is about 0.67 ± 0.05 for the optical continuum and the broad Hβ luminosity, about 0.56 ± 0.05 for the UV continuum luminosity, and about 0.70 ± 0.14 for the X-ray luminosity. We also find an intrinsic scatter of ~40% in these relations. The disagreement of our results with the theoretical expected slope of 0.5 indicates that the simple assumption of all AGNs having on average the same ionization parameter, BLR density, column density, and ionizing spectral energy distribution is not valid and there is likely some evolution of a few of these characteristics along the luminosity scale.
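For clarity, the power-law relation being fitted is equivalent to a straight line in log-log space, which is the form in which such regressions are typically carried out; β below is simply the fitted intercept, introduced here for illustration.

```latex
% Power-law radius-luminosity relation and its equivalent log-log form
% (alpha is the fitted slope, beta the fitted intercept).
R_{\mathrm{BLR}} \propto L^{\alpha}
\quad\Longleftrightarrow\quad
\log R_{\mathrm{BLR}} = \alpha \log L + \beta
```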

Journal ArticleDOI
08 Dec 2005-Nature
TL;DR: Direct atmospheric measurements from the Gas Chromatograph Mass Spectrometer (GCMS), including altitude profiles of the constituents, isotopic ratios and trace species (including organic compounds), are reported, confirming that the primary constituents are nitrogen and methane.
Abstract: Saturn's largest moon, Titan, remains an enigma, explored only by remote sensing from Earth, and by the Voyager and Cassini spacecraft. The most puzzling aspects include the origin of the molecular nitrogen and methane in its atmosphere, and the mechanism(s) by which methane is maintained in the face of rapid destruction by photolysis. The Huygens probe, launched from the Cassini spacecraft, has made the first direct observations of the satellite's surface and lower atmosphere. Here we report direct atmospheric measurements from the Gas Chromatograph Mass Spectrometer (GCMS), including altitude profiles of the constituents, isotopic ratios and trace species (including organic compounds). The primary constituents were confirmed to be nitrogen and methane. Noble gases other than argon were not detected. The argon includes primordial ³⁶Ar, and the radiogenic isotope ⁴⁰Ar, providing an important constraint on the outgassing history of Titan. Trace organic species, including cyanogen and ethane, were found in surface measurements.

Book
25 Dec 2005
TL;DR: This book presents a framework for constructing and evaluating social science concepts, covering the structuring and theorizing of concepts, concept intension and extension, concept-measure consistency, case selection, and the role of concepts in two-level theories.
Abstract: Contents: List of Tables; List of Figures; Acknowledgments; Chapter One: Introduction. Part One, Theoretical, Structural, and Empirical Analysis of Concepts: Chapter Two, Structuring and Theorizing Concepts; Chapter Three, Concept Intension and Extension; Chapter Four, Increasing Concept-Measure Consistency; Chapter Five, Substitutability and Weakest-Link Measures (with William F. Dixon). Part Two, Concepts and Case Selection: Chapter Six, Concepts and Selecting (on) the Dependent Variable (with J. Joseph Hewitt); Chapter Seven, Negative Case Selection: The Possibility Principle (with James Mahoney); Chapter Eight, Concepts and Choosing Populations (with J. Joseph Hewitt). Part Three, Concepts in Theories: Chapter Nine, Concepts in Theories: Two-Level Theories (with James Mahoney). References; Exercises and Web Site; Index.

Journal ArticleDOI
TL;DR: A dual state–parameter estimation approach is presented based on the Ensemble Kalman Filter (EnKF) for sequential estimation of both parameters and state variables of a hydrologic model.
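To illustrate the dual state-parameter filtering idea on something concrete, the sketch below runs two interacting ensemble Kalman updates per time step (parameters first, then states) on a toy linear-reservoir rainfall-runoff model. The model, noise magnitudes, and variable names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of dual state-parameter estimation with an ensemble Kalman
# filter on a toy linear-reservoir model. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 200                       # ensemble size, time steps
rain = rng.gamma(2.0, 2.0, T)         # synthetic forcing
k_true = 0.15                         # "true" recession parameter to recover

def model_step(storage, k, precip):
    """One step of a linear reservoir: runoff = k * storage."""
    runoff = k * storage
    return storage + precip - runoff, runoff

# generate synthetic "observed" runoff
s, obs = 10.0, np.empty(T)
for t in range(T):
    s, q = model_step(s, k_true, rain[t])
    obs[t] = q + rng.normal(0.0, 0.2)             # observation noise

def enkf_update(ensemble, predictions, y, r):
    """Standard EnKF update of a scalar ensemble given predicted observations."""
    cov_xy = np.cov(ensemble, predictions)[0, 1]
    gain = cov_xy / (np.var(predictions) + r)
    perturbed = y + rng.normal(0.0, np.sqrt(r), ensemble.size)
    return ensemble + gain * (perturbed - predictions)

states = np.full(N, 10.0) + rng.normal(0, 1, N)   # state ensemble
params = rng.uniform(0.05, 0.5, N)                # prior parameter ensemble

for t in range(T):
    # 1) parameter filter: predict runoff with current states, update parameters
    pred_q = params * states
    params = np.clip(enkf_update(params, pred_q, obs[t], r=0.04), 1e-3, 1.0)
    # 2) state filter: propagate with updated parameters, update states
    states, pred_q = model_step(states, params, rain[t])
    states = np.maximum(enkf_update(states, pred_q, obs[t], r=0.04), 0.0)

print(f"estimated k = {params.mean():.3f} (true {k_true})")
```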

Journal ArticleDOI
TL;DR: This work builds a framework of functional hypotheses of complex signal evolution based on content-driven (ultimate) and efficacy-driven selection pressures (sensu Guilford and Dawkins 1991), and point out key predictions for various hypotheses and discuss different approaches to uncovering complex signal function.
Abstract: The basic building blocks of communication are signals, assembled in various sequences and combinations, and used in virtually all inter- and intra-specific interactions. While signal evolution has long been a focus of study, there has been a recent resurgence of interest and research in the complexity of animal displays. Much past research on signal evolution has focused on sensory specialists, or on single signals in isolation, but many animal displays involve complex signaling, or the combination of more than one signal or related component, often serially and overlapping, frequently across multiple sensory modalities. Here, we build a framework of functional hypotheses of complex signal evolution based on content-driven (ultimate) and efficacy-driven (proximate) selection pressures (sensu Guilford and Dawkins 1991). We point out key predictions for various hypotheses and discuss different approaches to uncovering complex signal function. We also differentiate a category of hypotheses based on inter-signal interactions. Throughout our review, we hope to make three points: (1) a complex signal is a functional unit upon which selection can act, (2) both content and efficacy-driven selection pressures must be considered when studying the evolution of complex signaling, and (3) individual signals or components do not necessarily contribute to complex signal function independently, but may interact in a functional way.

Journal ArticleDOI
TL;DR: The evidence suggests strongly that the function of the hippocampus (and possibly that of related limbic structures) is to help encode, retain, and retrieve experiences, no matter how long ago the events comprising the experience occurred, and no matter whether the memories are episodic or spatial.
Abstract: We review lesion and neuroimaging evidence on the role of the hippocampus, and other structures, in retention and retrieval of recent and remote memories. We examine episodic, semantic and spatial memory, and show that important distinctions exist among different types of these memories and the structures that mediate them. We argue that retention and retrieval of detailed, vivid autobiographical memories depend on the hippocampal system no matter how long ago they were acquired. Semantic memories, on the other hand, benefit from hippocampal contribution for some time before they can be retrieved independently of the hippocampus. Even semantic memories, however, can have episodic elements associated with them that continue to depend on the hippocampus. Likewise, we distinguish between experientially detailed spatial memories (akin to episodic memory) and more schematic memories (akin to semantic memory) that are sufficient for navigation but not for re-experiencing the environment in which they were acquired. Like their episodic and semantic counterparts, the former type of spatial memory is dependent on the hippocampus no matter how long ago it was acquired, whereas the latter can survive independently of the hippocampus and is represented in extra-hippocampal structures. In short, the evidence reviewed suggests strongly that the function of the hippocampus (and possibly that of related limbic structures) is to help encode, retain, and retrieve experiences, no matter how long ago the events comprising the experience occurred, and no matter whether the memories are episodic or spatial. We conclude that the evidence favours a multiple trace theory (MTT) of memory over two other models: (1) traditional consolidation models which posit that the hippocampus is a time-limited memory structure for all forms of memory; and (2) versions of cognitive map theory which posit that the hippocampus is needed for representing all forms of allocentric space in memory.

Journal ArticleDOI
TL;DR: In this article, the authors investigated temporal and spatial patterns of vegetation greenness and rainfall variability in the African Sahel and their interrelationships based on analyses of Normalized Difference Vegetation Index (NDVI) time series for the period 1982-2003 and gridded satellite rainfall estimates.
Abstract: Contrary to assertions of widespread irreversible desertification in the African Sahel, a recent increase in seasonal greenness over large areas of the Sahel has been observed, which has been interpreted as a recovery from the great Sahelian droughts. This research investigates temporal and spatial patterns of vegetation greenness and rainfall variability in the African Sahel and their interrelationships based on analyses of Normalized Difference Vegetation Index (NDVI) time series for the period 1982–2003 and gridded satellite rainfall estimates. While rainfall emerges as the dominant causative factor for the increase in vegetation greenness, there is evidence of another causative factor, hypothetically a human-induced change superimposed on the climate trend.
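One common way to look for a greening signal beyond what rainfall explains is a residual-trend style analysis: regress NDVI on rainfall per pixel, then test the residuals for a temporal trend. The sketch below illustrates that generic approach with toy arrays; it is not the authors' exact method, and the data, grid size, and names are placeholders.

```python
# Sketch of an NDVI-rainfall residual-trend analysis: per pixel, regress annual
# NDVI on rainfall, then fit a temporal trend to the residuals. Arrays below
# are toy placeholders standing in for real gridded data.
import numpy as np

years = np.arange(1982, 2004)                    # 1982-2003, matching the study period
n_years = years.size
ndvi = np.random.default_rng(2).normal(0.3, 0.05, (n_years, 50, 50))   # toy NDVI cube
rain = np.random.default_rng(3).gamma(4.0, 100.0, (n_years, 50, 50))   # toy rainfall cube

def residual_trend(ndvi_px, rain_px, yrs):
    """Slope (per year) of the residuals from an NDVI ~ rainfall regression."""
    a, b = np.polyfit(rain_px, ndvi_px, 1)       # NDVI explained by rainfall
    resid = ndvi_px - (a * rain_px + b)
    slope, _ = np.polyfit(yrs, resid, 1)         # remaining temporal trend
    return slope

trend_map = np.empty((50, 50))
for i in range(50):
    for j in range(50):
        trend_map[i, j] = residual_trend(ndvi[:, i, j], rain[:, i, j], years)
print("mean residual NDVI trend per year:", trend_map.mean())
```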

Journal ArticleDOI
TL;DR: Lenalidomide has hematologic activity in patients with low-risk myelodysplastic syndromes who have no response to erythropoietin or who are unlikely to benefit from conventional therapy.
Abstract: Background Ineffective erythropoiesis is the hallmark of myelodysplastic syndromes. Management of the anemia caused by ineffective erythropoiesis is difficult. In patients with myelodysplastic syndromes and symptomatic anemia, we evaluated the safety and hematologic activity of lenalidomide, a novel analogue of thalidomide. Methods Forty-three patients with transfusion-dependent or symptomatic anemia received lenalidomide at doses of 25 or 10 mg per day or of 10 mg per day for 21 days of every 28-day cycle. All patients either had had no response to recombinant erythropoietin or had a high endogenous erythropoietin level with a low probability of benefit from such therapy. The response to treatment was assessed after 16 weeks. Results Neutropenia and thrombocytopenia, the most common adverse events, with respective frequencies of 65 percent and 74 percent, necessitated the interruption of treatment or a dose reduction in 25 patients (58 percent). Other adverse events were mild and infrequent. Twenty-four patients had a response (56 percent): 20 had sustained independence from transfusion, 1 had an increase in the hemoglobin level of more than 2 g per deciliter, and 3 had more than a 50 percent reduction in the need for transfusions. The response rate was highest among patients with a clonal interstitial deletion involving chromosome 5q31.1 (83 percent, as compared with 57 percent among those with a normal karyotype and 12 percent among those with other karyotypic abnormalities; P=0.007) and patients with lower prognostic risk. Of 20 patients with karyotypic abnormalities, 11 had at least a 50 percent reduction in abnormal cells in metaphase, including 10 (50 percent) with a complete cytogenetic remission. After a median follow-up of 81 weeks, the median duration of transfusion independence had not been reached and the median hemoglobin level was 13.2 g per deciliter (range, 11.5 to 15.8). Conclusions Lenalidomide has hematologic activity in patients with low-risk myelodysplastic syndromes who have no response to erythropoietin or who are unlikely to benefit from conventional therapy.