
Showing papers by "University of Colorado Boulder" published in 2000


Journal ArticleDOI
TL;DR: The results suggest that it is important to recognize both the unity and diversity of executive functions and that latent variable analysis is a useful approach to studying the organization and roles of executive functions.

12,182 citations


Journal ArticleDOI
25 May 2000-Nature
TL;DR: Direct electrical stimulation of the peripheral vagus nerve in vivo during lethal endotoxaemia in rats inhibited TNF synthesis in liver, attenuated peak serum TNF amounts, and prevented the development of shock.
Abstract: Vertebrates achieve internal homeostasis during infection or injury by balancing the activities of proinflammatory and anti-inflammatory pathways. Endotoxin (lipopolysaccharide), produced by all gram-negative bacteria, activates macrophages to release cytokines that are potentially lethal. The central nervous system regulates systemic inflammatory responses to endotoxin through humoral mechanisms. Activation of afferent vagus nerve fibres by endotoxin or cytokines stimulates hypothalamic-pituitary-adrenal anti-inflammatory responses. However, comparatively little is known about the role of efferent vagus nerve signalling in modulating inflammation. Here, we describe a previously unrecognized, parasympathetic anti-inflammatory pathway by which the brain modulates systemic inflammatory responses to endotoxin. Acetylcholine, the principal vagal neurotransmitter, significantly attenuated the release of cytokines (tumour necrosis factor (TNF), interleukin (IL)-1beta, IL-6 and IL-18), but not the anti-inflammatory cytokine IL-10, in lipopolysaccharide-stimulated human macrophage cultures. Direct electrical stimulation of the peripheral vagus nerve in vivo during lethal endotoxaemia in rats inhibited TNF synthesis in liver, attenuated peak serum TNF amounts, and prevented the development of shock.

3,404 citations


Journal ArticleDOI
TL;DR: The authors argue that the shifts in world view that these discussions represent are even more fundamental than the now-historical shift from behaviorist to cognitive views of learning (Shuell, 1986).
Abstract: The education and research communities are abuzz with new (or at least re-discovered) ideas about the nature of cognition and learning. Terms like "situated cognition," "distributed cognition," and "communities of practice" fill the air. Recent dialogue in Educational Researcher (Anderson, Reder, & Simon, 1996, 1997; Greeno, 1997) typifies this discussion. Some have argued that the shifts in world view that these discussions represent are even more fundamental than the now-historical shift from behaviorist to cognitive views of learning (Shuell, 1986). These new ideas about the nature of knowledge, thinking, and learning—which are becoming known as the "situative perspective" (Greeno, 1997; Greeno, Collins, & Resnick, 1996)—are interacting with, and sometimes fueling, current reform movements in education. Most discussions of these ideas and their implications for educational practice have been cast primarily in terms of students. Scholars and policymakers have considered, for example, how to help students develop deep understandings of subject matter, situate students' learning in meaningful contexts, and create learning communities in which teachers and students engage in rich discourse about important ideas (e.g., National Council of Teachers of Mathematics, 1989; National Education Goals Panel, 1991; National Research Council, 1993).

3,353 citations


Journal ArticleDOI
TL;DR: In this paper, an improved model for the absorption of X-rays in the interstellar medium (ISM) is presented for use with data from future X-ray missions with larger effective areas and increased energy resolution such as Chandra and the X-Ray Multiple Mirror mission.
Abstract: We present an improved model for the absorption of X-rays in the interstellar medium (ISM) intended for use with data from future X-ray missions with larger effective areas and increased energy resolution such as Chandra and the X-Ray Multiple Mirror mission, in the energy range above 100 eV. Compared with previous work, our formalism includes recent updates to the photoionization cross section and revised abundances of the interstellar medium, as well as a treatment of interstellar grains and the H2 molecule. We review the theoretical and observational motivations behind these updates and provide a subroutine for the X-ray spectral analysis program XSPEC that incorporates our model.
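As a brief sketch of how such an absorption model enters spectral analysis (the decomposition below is assumed from the abstract's description of gas, molecular, and grain contributions, not quoted from the paper), the observed spectrum is the intrinsic one attenuated by the total ISM cross section per hydrogen atom times the column density:

```latex
% Hedged sketch of ISM absorption applied to a source spectrum
F_{\mathrm{obs}}(E) = F_{\mathrm{int}}(E)\, e^{-N_{\mathrm{H}}\,\sigma_{\mathrm{ISM}}(E)},
\qquad
\sigma_{\mathrm{ISM}}(E) = \sigma_{\mathrm{gas}}(E) + \sigma_{\mathrm{molecules}}(E) + \sigma_{\mathrm{grains}}(E)
```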

3,239 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an historical framework highlighting the key tenets of social efficiency curricula, behaviorist learning theories, and scientific measurement, and offer a contrasting social constructivist conceptual framework that blends key ideas from cognitive, constructivist, and sociocultural theories.
Abstract: of assessments used to give grades or to satisfy the accountability demands of an external authority, but rather the kind of assessment that can be used as a part of instruction to support and enhance learning. On this topic, I am especially interested in engaging the very large number of educational researchers who participate, in one way or another, in teacher education. The transformation of assessment practices cannot be accomplished in separate tests and measurement courses, but rather should be a central concern in teaching methods courses. The article is organized in three parts. I present, first, an historical framework highlighting the key tenets of social efficiency curricula, behaviorist learning theories, and "scientific measurement." Next, I offer a contrasting social constructivist conceptual framework that blends key ideas from cognitive, constructivist, and sociocultural theories. In the third part, I elaborate on the ways that assessment practices should change to be consistent with and support social constructivist pedagogy. The impetus for my development of an historical framework was the observation by Beth Graue (1993) that "assessment and instruction are often conceived as curiously separate in both time and purpose" (p. 291, emphasis added). As Graue notes, the measurement approach to classroom assessment, "exemplified by standardized tests and teacher-made emulations of those tests," presents a barrier to the implementation of more constructivist approaches to instruction. To understand the origins of Graue's picture of separation and to help explain its continuing power over present-day practice, I drew the chronology in Figure 1. A longer-term span of history helps us see that those measurement perspectives, now felt to be incompatible with instruction, came from an earlier, highly consistent theoretical framework (on the left) in which conceptions of "scientific measurement" were closely aligned with traditional curricula and beliefs about learning. To the right is an emergent, constructivist paradigm in which teachers' close assessment of students' understandings, feedback from peers, and student self-assessments would be a central part of the social processes that mediate the development of intellectual abilities, construction of knowledge, and formation of students' identities. The best way to understand dissonant current practices, shown in the middle of the figure, is to realize that instruction (at least in its ideal form) is drawn from the emergent paradigm, while testing is held over from the past.

2,107 citations


Journal ArticleDOI
TL;DR: This paper describes an algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints that applies sequential quadratic programming techniques to a sequence of barrier problems, focusing on the primal version of the new algorithm.
Abstract: An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented.
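As a sketch of the barrier reformulation the abstract refers to (the notation is assumed, not taken from the paper), the inequality-constrained problem is replaced by a sequence of equality-constrained barrier problems with slack variables and a decreasing barrier parameter μ, each solved approximately by a trust-region SQP step:

```latex
\min_{x}\; f(x)\;\; \text{s.t.}\;\; g(x) \ge 0
\qquad\longrightarrow\qquad
\min_{x,\, s > 0}\; f(x) - \mu \sum_{i} \ln s_i\;\; \text{s.t.}\;\; g(x) - s = 0,
\quad \mu \downarrow 0
```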

1,514 citations



Journal ArticleDOI
08 Sep 2000-Science
TL;DR: Interannual variability in both freeze and breakup dates has increased since 1950 and a few longer time series reveal reduced ice cover (a warming trend) beginning as early as the 16th century, with increasing rates of change after about 1850.
Abstract: Freeze and breakup dates of ice on lakes and rivers provide consistent evidence of later freezing and earlier breakup around the Northern Hemisphere from 1846 to 1995. Over these 150 years, changes in freeze dates averaged 5.8 days per 100 years later, and changes in breakup dates averaged 6.5 days per 100 years earlier; these translate to increasing air temperatures of about 1.2°C per 100 years. Interannual variability in both freeze and breakup dates has increased since 1950. A few longer time series reveal reduced ice cover (a warming trend) beginning as early as the 16th century, with increasing rates of change after about 1850.
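A quick arithmetic reading of those numbers (an illustrative back-of-envelope check, not a calculation from the paper): later freezing plus earlier breakup shorten the ice season by roughly 12 days per century, and relating the roughly 6-day-per-century date shifts to the roughly 1.2°C per century warming implies a sensitivity of about 5 days of ice-date change per degree Celsius:

```latex
5.8 + 6.5 \approx 12.3\ \tfrac{\text{days}}{100\ \text{yr}},
\qquad
\frac{(5.8 + 6.5)/2\ \text{days}/100\ \text{yr}}{1.2\,^{\circ}\mathrm{C}/100\ \text{yr}} \approx 5\ \tfrac{\text{days}}{^{\circ}\mathrm{C}}
```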

1,214 citations


Journal ArticleDOI
TL;DR: The data support the view that late positivity to affective pictures is modulated both by their intrinsic motivational significance and the evaluative context of picture presentation.
Abstract: Recent studies have shown that the late positive component of the event-related-potential (ERP) is enhanced for emotional pictures, presented in an oddball paradigm, evaluated as distant from an established affective context. In other research, with context-free, random presentation, affectively intense pictures (pleasant and unpleasant) prompted similar enhanced ERP late positivity (compared with the neutral picture response). In an effort to reconcile interpretations of the late positive potential (LPP), ERPs to randomly ordered pictures were assessed, but using the faster presentation rate, brief exposure (1.5 s), and distinct sequences of six pictures, as in studies using an oddball based on evaluative distance. Again, results showed larger LPPs to pleasant and unpleasant pictures, compared with neutral pictures. Furthermore, affective pictures of high arousal elicited larger LPPs than less affectively intense pictures. The data support the view that late positivity to affective pictures is modulated both by their intrinsic motivational significance and the evaluative context of picture presentation.

1,208 citations


Journal ArticleDOI
12 May 2000-Science
TL;DR: An opposite mechanism through which aerosols can reduce cloud cover and thus significantly offset aerosol-induced radiative cooling at the top of the atmosphere on a regional scale is demonstrated.
Abstract: Measurements and models show that enhanced aerosol concentrations can augment cloud albedo not only by increasing total droplet cross-sectional area, but also by reducing precipitation and thereby increasing cloud water content and cloud coverage. Aerosol pollution is expected to exert a net cooling influence on the global climate through these conventional mechanisms. Here, we demonstrate an opposite mechanism through which aerosols can reduce cloud cover and thus significantly offset aerosol-induced radiative cooling at the top of the atmosphere on a regional scale. In model simulations, the daytime clearing of trade cumulus is hastened and intensified by solar heating in dark haze (as found over much of the northern Indian Ocean during the northeast monsoon).

1,206 citations


Journal ArticleDOI
TL;DR: Regular aerobic-endurance exercise attenuates age-related reductions in central arterial compliance and restores levels in previously sedentary healthy middle-aged and older men.
Abstract: Background—A reduction in compliance of the large-sized cardiothoracic (central) arteries is an independent risk factor for the development of cardiovascular disease with advancing age. Methods and...

Journal ArticleDOI
TL;DR: The authors proposed a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as STATEMENT, QUESTION, BACKCHANNEL, AGREEMENT, DISAGREEMENT and APOLOGY.
Abstract: We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as STATEMENT, QUESTION, BACKCHANNEL, AGREEMENT, DISAGREEMENT, and APOLOGY. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
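The hidden Markov treatment described here lends itself to a compact sketch: hidden states are dialogue acts, transitions come from a dialogue-act n-gram, and each utterance's words give an emission likelihood that is combined with the transition scores by Viterbi decoding. The toy probabilities, act labels, and helper names below are illustrative assumptions, not the models trained on Switchboard in the paper.

```python
# Hedged sketch of the HMM view of dialogue-act tagging: hidden states are
# dialogue acts, transitions come from a dialogue-act bigram ("dialogue
# grammar"), and each utterance's words supply an emission likelihood.
# All probabilities, act labels, and helper names below are illustrative.
import math

ACTS = ["STATEMENT", "QUESTION", "BACKCHANNEL"]
trans = {  # P(act_t | act_{t-1}); "<s>" marks the start of the conversation
    "<s>":         {"STATEMENT": 0.6, "QUESTION": 0.3, "BACKCHANNEL": 0.1},
    "STATEMENT":   {"STATEMENT": 0.5, "QUESTION": 0.2, "BACKCHANNEL": 0.3},
    "QUESTION":    {"STATEMENT": 0.7, "QUESTION": 0.1, "BACKCHANNEL": 0.2},
    "BACKCHANNEL": {"STATEMENT": 0.6, "QUESTION": 0.3, "BACKCHANNEL": 0.1},
}
word_lik = {  # P(word | act); a toy stand-in for the lexical/prosodic models
    "STATEMENT":   {"i": 0.05, "think": 0.04, "so": 0.03, "right": 0.01},
    "QUESTION":    {"what": 0.06, "do": 0.04, "you": 0.05, "think": 0.03},
    "BACKCHANNEL": {"uh-huh": 0.20, "yeah": 0.15, "right": 0.10},
}

def emission_logprob(act, words, floor=1e-4):
    """Log P(words | act) under a smoothed unigram bag-of-words model."""
    return sum(math.log(word_lik[act].get(w, floor)) for w in words)

def viterbi(utterances):
    """Most likely dialogue-act sequence for a list of tokenized utterances."""
    # best[t][a] = (log prob of best act sequence ending in a at time t, backpointer)
    best = [{} for _ in utterances]
    for a in ACTS:
        best[0][a] = (math.log(trans["<s>"][a]) + emission_logprob(a, utterances[0]), None)
    for t in range(1, len(utterances)):
        for a in ACTS:
            emit = emission_logprob(a, utterances[t])
            score, prev = max(
                (best[t - 1][p][0] + math.log(trans[p][a]) + emit, p) for p in ACTS
            )
            best[t][a] = (score, prev)
    act = max(ACTS, key=lambda a: best[-1][a][0])   # best final act
    path = [act]
    for t in range(len(utterances) - 1, 0, -1):     # backtrace
        act = best[t][act][1]
        path.append(act)
    return list(reversed(path))

print(viterbi([["what", "do", "you", "think"], ["i", "think", "so"], ["uh-huh"]]))
# -> ['QUESTION', 'STATEMENT', 'BACKCHANNEL'] with these toy numbers
```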

Journal ArticleDOI
TL;DR: In this paper, the authors compared the performance of gyrokinetic and gyrofluid simulations of ion-temperature-gradient (ITG) instability and turbulence in tokamak plasmas as well as some tokamak plasma thermal transport models.
Abstract: The predictions of gyrokinetic and gyrofluid simulations of ion-temperature-gradient (ITG) instability and turbulence in tokamak plasmas as well as some tokamak plasma thermal transport models, which have been widely used for predicting the performance of the proposed International Thermonuclear Experimental Reactor (ITER) tokamak [Plasma Physics and Controlled Nuclear Fusion Research, 1996 (International Atomic Energy Agency, Vienna, 1997), Vol. 1, p. 3], are compared. These comparisons provide information on effects of differences in the physics content of the various models and on the fusion-relevant figures of merit of plasma performance predicted by the models. Many of the comparisons are undertaken for a simplified plasma model and geometry which is an idealization of the plasma conditions and geometry in a Doublet III-D [Plasma Physics and Controlled Nuclear Fusion Research, 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. 1, p. 159] high confinement (H-mode) experiment. Most of the models show good agreements in their predictions and assumptions for the linear growth rates and frequencies. There are some differences associated with different equilibria. However, there are significant differences in the transport levels between the models. The causes of some of the differences are examined in some detail, with particular attention to numerical convergence in the turbulence simulations (with respect to simulation mesh size, system size and, for particle-based simulations, the particle number). The implications for predictions of fusion plasma performance are also discussed.

Proceedings ArticleDOI
03 Oct 2000
TL;DR: This work presents a system for identifying the semantic relationships, or semantic roles, filled by constituents of a sentence within a semantic frame, derived from parse trees and hand-annotated training data.
Abstract: We present a system for identifying the semantic relationships, or semantic roles, filled by constituents of a sentence within a semantic frame. Various lexical and syntactic features are derived from parse trees and used to derive statistical classifiers from hand-annotated training data.
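A minimal sketch of the pipeline the abstract describes: extract lexical and syntactic features for each candidate constituent and fit a statistical classifier on hand-labeled examples. The feature names, toy training data, and use of scikit-learn are assumptions for illustration; the paper derives its features from full parse trees and FrameNet annotations.

```python
# Hedged sketch of constituent-level semantic role classification.
# Requires scikit-learn; features and examples are invented for illustration.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example: features of one parse-tree constituent + its role label.
train = [
    ({"phrase_type": "NP", "position": "before", "voice": "active",
      "head_word": "judge", "predicate": "blame"}, "AGENT"),
    ({"phrase_type": "NP", "position": "after", "voice": "active",
      "head_word": "driver", "predicate": "blame"}, "THEME"),
    ({"phrase_type": "PP", "position": "after", "voice": "active",
      "head_word": "for", "predicate": "blame"}, "REASON"),
    ({"phrase_type": "NP", "position": "before", "voice": "passive",
      "head_word": "driver", "predicate": "blame"}, "THEME"),
]
X, y = zip(*train)

model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression(max_iter=1000))
model.fit(list(X), list(y))

# Classify a new constituent from an unseen sentence.
test = {"phrase_type": "NP", "position": "before", "voice": "active",
        "head_word": "committee", "predicate": "blame"}
print(model.predict([test])[0])  # expected to come out AGENT-like under these toy data
```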

Journal ArticleDOI
TL;DR: In this paper, an experimental and numerical study of forced convection in high porosity (ε ∼ 0.89-0.97) metal foams was conducted using air as the fluid medium.
Abstract: We report an experimental and numerical study of forced convection in high porosity (ε ∼ 0.89-0.97) metal foams. Experiments have been conducted with aluminum metal foams in a variety of porosities and pore densities using air as the fluid medium. Nusselt number data have been obtained as a function of the pore Reynolds number. In the numerical study, a semi-empirical volume-averaged form of the governing equations is used. The velocity profile is obtained by adapting an exact solution to the momentum equation. The energy transport is modeled without invoking the assumption of local thermal equilibrium. Models for the thermal dispersion conductivity, k_d, and the interstitial heat transfer coefficient, h_sf, are postulated based on physical arguments. The empirical constants in these models are determined by matching the numerical results with the experimental data obtained in this study as well as those in the open literature. Excellent agreement is achieved in the entire range of the parameters studied, indicating that the proposed treatment is sufficient to model forced convection in metal foams for most practical applications.
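The non-equilibrium (two-temperature) treatment mentioned above can be summarized with a standard volume-averaged sketch; the notation is generic and assumed, not the paper's exact model, but it shows where the dispersion conductivity k_d and the interstitial coefficient h_sf enter:

```latex
% Fluid phase (effective conduction augmented by thermal dispersion k_d):
\varepsilon (\rho c_p)_f \left(\frac{\partial T_f}{\partial t} + \mathbf{u}\cdot\nabla T_f\right)
  = \nabla\cdot\big[(k_{fe} + k_d)\,\nabla T_f\big] + h_{sf}\, a_{sf}\,(T_s - T_f)

% Solid (ligament) phase, with no local thermal equilibrium assumed:
(1-\varepsilon)(\rho c)_s \frac{\partial T_s}{\partial t}
  = \nabla\cdot\big(k_{se}\,\nabla T_s\big) - h_{sf}\, a_{sf}\,(T_s - T_f)
```

Here ε is the porosity, k_fe and k_se are effective fluid and solid conductivities, and a_sf is the interfacial area per unit volume; matching solutions of such a model to the measured Nusselt numbers is how the empirical constants in k_d and h_sf are fixed.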

Journal ArticleDOI
TL;DR: It is indicated that regular aerobic exercise can prevent the age-associated loss in endothelium-dependent vasodilation and restore levels in previously sedentary middle aged and older healthy men.
Abstract: Background—In sedentary humans endothelium-dependent vasodilation is impaired with advancing age contributing to their increased cardiovascular risk, whereas endurance-trained adults demonstrate lower age-related risk. We determined the influence of regular aerobic exercise on the age-related decline in endothelium-dependent vasodilation. Methods and Results—In a cross-sectional study, 68 healthy men 22 to 35 or 50 to 76 years of age who were either sedentary or endurance exercise–trained were studied. Forearm blood flow (FBF) responses to intra-arterial infusions of acetylcholine and sodium nitroprusside were measured by strain-gauge plethysmography. Among the sedentary men, the maximum FBF response to acetylcholine was 25% lower in the middle aged and older compared with the young group (P<0.01). In contrast, there was no age-related difference in the vasodilatory response to acetylcholine among the endurance-trained men. FBF at the highest acetylcholine dose was almost identical in the middle aged and ...

Book
01 Jan 2000
TL;DR: This paper discusses the role of content standards, the dual goals of high performance standards and common standards for all students, and the validity of accountability models; it also examines the impact, validity, and generalizability of reported gains and the credibility of results in high-stakes accountability uses, and offers suggestions for dealing with the most severe limitations of accountability.
Abstract: Use of tests and assessments as key elements in five waves of educational reform during the past 50 years are reviewed. These waves include the role of tests in tracking and selection emphasized in the 1950s, the use of tests for program accountability in the 1960s, minimum competency testing programs of the 1970s, school and district accountability of the 1980s, and the standards-based accountability systems of the 1990s. Questions regarding the impact, validity, and generalizability of reported gains, and the credibility of results in high-stakes accountability uses are discussed. Emphasis is given to three issues regarding currently popular accountability systems. These are (a) the role of content standards, (b) the dual goals of high performance standards and common standards for all students, and (c) the validity of accountability models. Some suggestions for dealing with the most severe limitations of accountability are provided.

Journal ArticleDOI
TL;DR: A theoretical model that describes the power of a scattered GPS signal as a function of geometrical and environmental parameters has been developed, suggesting mapping of the wave-slope probability distribution in a synthetic-aperture-radar (SAR) fashion to allow more accurate measurements of wind velocity and wind direction.
Abstract: A theoretical model that describes the power of a scattered Global Positioning System (GPS) signal as a function of geometrical and environmental parameters has been developed. This model is based on a bistatic radar equation derived using the geometric optics limit of the Kirchhoff approximation. The waveform (i.e., the time-delayed power obtained in the delay-mapping technique) depends on a wave-slope probability density function, which in turn depends on wind. Waveforms obtained for aircraft altitudes and velocities indicate that altitudes within the interval 5-15 km are the best for inferring wind speed. In some regimes, an analytical solution for the bistatic radar equation is possible. This solution allows converting trailing edges of waveforms into a set of straight lines, which could be convenient for wind retrieval. A transition to satellite altitudes, together with satellite velocities, makes the peak power reduction and the Doppler spreading effect a significant problem for wind retrieval based on the delay-mapping technique. At the same time, different time delays and different Doppler shifts of the scattered GPS signal could form relatively small spatial cells on sea surface, suggesting mapping of the wave-slope probability distribution in a synthetic-aperture-radar (SAR) fashion. This may allow more accurate measurements of wind velocity and wind direction.
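As a hedged sketch of the geometric-optics Kirchhoff result that such models rest on (symbols follow common bistatic-scattering conventions and are assumed rather than copied from the paper), the surface scattering cross section is controlled directly by the wave-slope probability density, which is how wind enters the waveform:

```latex
% Specular-point (geometric-optics) limit of the Kirchhoff approximation:
\sigma_0 \;\propto\; |\Re|^2 \left(\frac{q}{q_z}\right)^{4} P\!\left(-\frac{\mathbf{q}_\perp}{q_z}\right)
```

Here ℜ is the Fresnel reflection coefficient, q the scattering vector with vertical component q_z and horizontal part q_⊥, and P(·) the wind-dependent slope probability density; the delay waveform follows by integrating this cross section over the surface area selected by each time delay (and, at orbital speeds, each Doppler shift).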

Journal ArticleDOI
TL;DR: In this article, simulations of cosmological reionization are used to quantify the effect of photoionization on the gas fraction in low-mass objects, in particular the characteristic mass scale below which the gas fraction is reduced compared to the universal value.
Abstract: I use simulations of cosmological reionization to quantify the effect of photoionization on the gas fraction in low-mass objects, in particular the characteristic mass scale below which the gas fraction is reduced compared to the universal value. I show that this characteristic scale can be up to an order of magnitude lower than the linear-theory Jeans mass, and that even if one defines the Jeans mass at a higher overdensity, it does not track the evolution of this characteristic suppression mass. Instead, the filtering mass, which corresponds directly to the scale over which baryonic perturbations are smoothed in linear perturbation theory, provides a remarkably good fit to the characteristic mass scale. Thus, it appears that the effect of reionization on structure formation in both the linear and nonlinear regimes is described by a single characteristic scale, the filtering scale of baryonic perturbations. In contrast to the Jeans mass, the filtering mass depends on the full thermal history of the gas instead of the instantaneous value of the sound speed, so it accounts for the finite time required for pressure to influence the gas distribution in the expanding universe. In addition to the characteristic suppression mass, I study the full shape of the probability distribution to find an object with a given gas mass among all the objects with the same total mass, and I show that the numerical results can be described by a simple fitting formula that again depends only on the filtering mass. This simple description of the probability distribution may be useful for semianalytical modeling of structure formation in the early universe.
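For orientation, a commonly used matter-dominated form of the filtering mass (a sketch consistent with the abstract's description; the paper's exact expression may differ in detail) averages the Jeans mass over the expansion history rather than taking its instantaneous value:

```latex
M_F^{2/3}(a) \;=\; \frac{3}{a}\int_0^{a}\mathrm{d}a'\, M_J^{2/3}(a')\left[1-\left(\frac{a'}{a}\right)^{1/2}\right]
```

so M_F responds to the full thermal history through M_J(a'), which is why it can differ from the instantaneous Jeans mass by up to an order of magnitude after reionization.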

Journal ArticleDOI
TL;DR: The Far Ultraviolet Spectroscopic Explorer (FUSE) satellite observes light in the far-ultraviolet spectral region, 905-1187 Angstrom, with high spectral resolution as discussed by the authors.
Abstract: The Far Ultraviolet Spectroscopic Explorer satellite observes light in the far-ultraviolet spectral region, 905-1187 Angstrom, with a high spectral resolution. The instrument consists of four co-aligned prime-focus telescopes and Rowland spectrographs with microchannel plate detectors. Two of the telescope channels use Al:LiF coatings for optimum reflectivity between approximately 1000 and 1187 Angstrom, and the other two channels use SiC coatings for optimized throughput between 905 and 1105 Angstrom. The gratings are holographically ruled to correct largely for astigmatism and to minimize scattered light. The microchannel plate detectors have KBr photocathodes and use photon counting to achieve good quantum efficiency with low background signal. The sensitivity is sufficient to examine reddened lines of sight within the Milky Way and also sufficient to use active galactic nuclei and QSOs for absorption-line studies of both Milky Way and extragalactic gas clouds. This spectral region contains a number of key scientific diagnostics, including O VI, H I, D I, and the strong electronic transitions of H2 and HD.

Journal ArticleDOI
TL;DR: In this article, the authors reviewed the magnitudes, distributions, controlling processes and uncertainties associated with North American natural emissions of oxidant precursors, including non-methane volatile organic compounds (NMVOC), carbon monoxide (CO) and nitric oxide (NO), that determine tropospheric oxidant concentrations.

Journal ArticleDOI
TL;DR: The strong Novikov conjecture on the homotopy invariance of higher signatures is shown to hold for finitely generated groups that admit a uniform embedding into Hilbert space and whose classifying space has the homotopy type of a finite CW complex, i.e., the index map from K_*(BΓ) to K_*(C*_r(Γ)) is injective.
Abstract: Corollary 1.2. Let Γ be a finitely generated group. If Γ, as a metric space with a word-length metric, admits a uniform embedding into Hilbert space, and its classifying space BΓ has the homotopy type of a finite CW complex, then the strong Novikov conjecture holds for Γ, i.e. the index map from K_*(BΓ) to K_*(C*_r(Γ)) is injective. Corollary 1.2 follows from Theorem 1.1 and the descent principle [23]. By index theory, the strong Novikov conjecture implies the Novikov conjecture on the homotopy invariance of higher signatures (cf. [8] for an excellent

Journal ArticleDOI
TL;DR: In this article, a review of in situ and remote-sensing data covering the ice shelves of the Antarctic Peninsula provides a series of characteristics closely associated with rapid shelf retreat: deeply embayed ice fronts, calving of myriad small elongate bergs in punctuated events, increasing flow speed, and the presence of melt ponds on the ice-shelf surface in the vicinity of the breakups.
Abstract: A review of in situ and remote-sensing data covering the ice shelves of the Antarctic Peninsula provides a series of characteristics closely associated with rapid shelf retreat: deeply embayed ice fronts; calving of myriad small elongate bergs in punctuated events; increasing flow speed; and the presence of melt ponds on the ice-shelf surface in the vicinity of the break-ups. As climate has warmed in the Antarctic Peninsula region, melt-season duration and the extent of ponding have increased. Most break-up events have occurred during longer melt seasons, suggesting that meltwater itself, not just warming, is responsible. Regions that show melting without pond formation are relatively unchanged. Melt ponds thus appear to be a robust harbinger of ice-shelf retreat. We use these observations to guide a model of ice-shelf flow and the effects of meltwater. Crevasses present in a region of surface ponding will likely fill to the brim with water. We hypothesize (building on Weertman (1973), Hughes (1983) and Van der Veen (1998)) that crevasse propagation by meltwater is the main mechanism by which ice shelves weaken and retreat. A thermodynamic finite-element model is used to evaluate ice flow and the strain field, and simple extensions of this model are used to investigate crack propagation by meltwater. The model results support the hypothesis.
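The meltwater-fracture argument can be illustrated with the familiar zero-stress crevasse model extended for water filling, in the spirit of Weertman (1973) and Van der Veen (1998); the notation below is a generic sketch, not the paper's finite-element formulation.

```latex
% Net longitudinal stress at depth d in a crevasse filled with water to depth d_w:
\sigma_{\mathrm{net}}(d) \;=\; R_{xx} \;-\; \rho_i g\, d \;+\; \rho_w g\, d_w
```

The crevasse continues to deepen while σ_net > 0 at its tip; because ρ_w > ρ_i, a crevasse kept brim-full of meltwater (d_w ≈ d) can in principle propagate through the entire shelf thickness, which is the weakening mechanism hypothesized in the abstract.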

Journal ArticleDOI
TL;DR: Aptamers are oligonucleotides derived from an in vitro evolution process called SELEX that inhibit physiological functions known to be associated with their target proteins, and a new approach to diagnostics is also described.

Journal ArticleDOI
TL;DR: In this paper, the Tully-Fisher relation, the fundamental plane of elliptical galaxies, Type Ia supernovae, and surface brightness fluctuations are combined with a model of the velocity field to yield the best available estimate of the value of H0 within the range of these secondary distance indicators and its uncertainty.
Abstract: Since the launch of Hubble Space Telescope (HST) 9 yr ago, Cepheid distances to 25 galaxies have been determined for the purpose of calibrating secondary distance indicators. Eighteen of these have been measured by the HST Key Project team, six by the Supernova Calibration Project, and one independently by Tanvir. Collectively, this work sets out an array of survey markers over the region within 25 Mpc of the Milky Way. A variety of secondary distance indicators can now be calibrated, and the accompanying four papers employ the full set of 25 galaxies to consider the Tully-Fisher relation, the fundamental plane of elliptical galaxies, Type Ia supernovae, and surface brightness fluctuations. When calibrated with Cepheid distances, each of these methods yields a measurement of the Hubble constant and a corresponding measurement uncertainty. We combine these measurements in this paper, together with a model of the velocity field, to yield the best available estimate of the value of H0 within the range of these secondary distance indicators and its uncertainty. The uncertainty in the result is modeled in an extensive simulation we call the "virtual Key Project." The velocity-field model includes the influence of the Virgo cluster, the Great Attractor, and the Shapley supercluster, but does not play a significant part in determining the result. The result is H0 = 71 ± 6 km s^-1 Mpc^-1. The largest contributor to the uncertainty of this 67% confidence level result is the distance of the Large Magellanic Cloud, which has been assumed to be 50 ± 3 kpc. This takes up the first 6.5% of our 9% error budget. Other contributors are the photometric calibration of the WFPC2 instrument, which takes up 4.5%, deviations from uniform Hubble flow in the volume sampled (2%), the composition sensitivity of the Cepheid period-luminosity relation (4%), and departures from a universal reddening law (~1%). These are the major components that, when combined in quadrature, make up the 9% total uncertainty. If the LMC distance modulus were systematically smaller by 1σ than that adopted here, the derived value of the Hubble constant would increase by 4 km s^-1 Mpc^-1. Most of the significant systematic errors are capable of amelioration in future work. These include the uncertainty in the photometric calibration of WFPC2, the LMC distance, and the reddening correction. A NICMOS study is in its preliminary reduction phase, addressing the last of these concerns. Various empirical analyses have suggested that Cepheid distance moduli are affected by metallicity differences. If we adopted the composition sensitivity obtained in the Key Project's study of M101, and employed the oxygen abundances measured spectroscopically in each of the Cepheid fields we have studied, the value of the Hubble constant would be reduced by 4% ± 2% to 68 ± 6 km s^-1 Mpc^-1.
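The quoted 9% total follows from combining the listed error terms in quadrature; as a quick check using the percentages given in the abstract:

```latex
\sqrt{6.5^2 + 4.5^2 + 2^2 + 4^2 + 1^2}\,\% \;=\; \sqrt{83.5}\,\% \;\approx\; 9.1\,\%,
\qquad 0.09 \times 71 \approx 6\ \mathrm{km\,s^{-1}\,Mpc^{-1}}
```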

Journal ArticleDOI
TL;DR: In this paper, the authors compared different coefficients of determination for continuous predicted values (R² analogs) in logistic regression for their conceptual and mathematical similarity to the familiar R² statistic from ordinary least squares regression.
Abstract: Coefficients of determination for continuous predicted values (R² analogs) in logistic regression are examined for their conceptual and mathematical similarity to the familiar R² statistic from ordinary least squares regression, and compared to coefficients of determination for discrete predicted values (indexes of predictive efficiency). An example motivated by substantive concerns and using empirical data from a national household probability sample is presented to illustrate the behavior of the different coefficients of determination in the evaluation of models including dependent variables with different base rates—that is, different proportions of cases or observations with "positive" outcomes. One R² analog appears to be preferable to the others both in terms of conceptual similarity to the ordinary least squares coefficient of determination, and in terms of its relative independence from the base rate. In addition, base rate should also be considered when selecting an index of predictive efficiency.
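A small numerical sketch of the comparison the abstract makes: fit logistic regressions to datasets with different base rates, then compute one R² analog for continuous predictions (an OLS-style squared-error analog) and one index of predictive efficiency for discrete predictions (here lambda-p, the proportional reduction in classification error over always guessing the modal class). The simulated data, choice of statistics, and scikit-learn usage are illustrative assumptions, not the paper's analysis.

```python
# Hedged sketch: how base rate can affect different "R^2-like" summaries
# of a logistic regression. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fit_and_score(base_rate, n=5000):
    """Simulate one predictor, shift the intercept toward a target base rate,
    then report an OLS-analog R^2 and a predictive-efficiency index."""
    x = rng.normal(size=(n, 1))
    # Intercept chosen (roughly) so that the mean outcome is near base_rate.
    logits = 1.5 * x[:, 0] + np.log(base_rate / (1 - base_rate))
    y = rng.random(n) < 1 / (1 + np.exp(-logits))
    p = LogisticRegression().fit(x, y).predict_proba(x)[:, 1]

    # R^2 analog for continuous predictions: 1 - SSE / SST (squared-error form).
    r2_analog = 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)

    # Index of predictive efficiency for discrete predictions (lambda-p):
    # proportional reduction in error versus always predicting the modal class.
    errors_model = np.mean((p >= 0.5) != y)
    errors_modal = min(y.mean(), 1 - y.mean())
    lambda_p = 1 - errors_model / errors_modal
    return r2_analog, lambda_p

for rate in (0.5, 0.2, 0.05):
    r2, lam = fit_and_score(rate)
    print(f"base rate {rate:.2f}:  R^2 analog = {r2:.3f}   lambda-p = {lam:.3f}")
```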

Journal ArticleDOI
02 Nov 2000-Nature
TL;DR: Using free-air CO2 enrichment (FACE) technology in an intact Mojave Desert ecosystem, it is shown that new shoot production of a dominant perennial shrub is doubled by a 50% increase in atmospheric CO2 concentration in a high rainfall year, but elevated CO2 does not enhance production in a drought year.
Abstract: Arid ecosystems, which occupy about 20% of the earth's terrestrial surface area, have been predicted to be one of the most responsive ecosystem types to elevated atmospheric CO2 and associated global climate change [1-3]. Here we show, using free-air CO2 enrichment (FACE) technology in an intact Mojave Desert ecosystem [4], that new shoot production of a dominant perennial shrub is doubled by a 50% increase in atmospheric CO2 concentration in a high rainfall year. However, elevated CO2 does not enhance production in a drought year. We also found that above-ground production and seed rain of an invasive annual grass increases more at elevated CO2 than in several species of native annuals. Consequently, elevated CO2 might enhance the long-term success and dominance of exotic annual grasses in the region. This shift in species composition in favour of exotic annual grasses, driven by global change, has the potential to accelerate the fire cycle, reduce biodiversity and alter ecosystem function in the deserts of western North America.

Journal ArticleDOI
TL;DR: The dominant mode of winter (January-March) sea ice variability exhibits out-of-phase fluctuations between the western and eastern North Atlantic, together with a weaker dipole in the North Pacific as mentioned in this paper.
Abstract: Forty years (1958–97) of reanalysis products and corresponding sea ice concentration data are used to document Arctic sea ice variability and its association with surface air temperature (SAT) and sea level pressure (SLP) throughout the Northern Hemisphere extratropics. The dominant mode of winter (January–March) sea ice variability exhibits out-of-phase fluctuations between the western and eastern North Atlantic, together with a weaker dipole in the North Pacific. The time series of this mode has a high winter-to-winter autocorrelation (0.69) and is dominated by decadal-scale variations and a longer-term trend of diminishing ice cover east of Greenland and increasing ice cover west of Greenland. Associated with the dominant pattern of winter sea ice variability are large-scale changes in SAT and SLP that closely resemble the North Atlantic oscillation. The associated SAT and surface sensible and latent heat flux anomalies are largest over the portions of the marginal sea ice zone in which the tr...
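The analysis style described here (a leading mode of winter ice variability and its winter-to-winter persistence) can be sketched with an empirical orthogonal function computed by SVD of the anomaly matrix plus a lag-1 autocorrelation of the resulting principal component; the array shapes and synthetic data below are placeholders, not the reanalysis and ice-concentration datasets used in the paper.

```python
# Hedged sketch: leading EOF of a (years x gridpoints) winter sea ice anomaly
# field and the lag-1 (winter-to-winter) autocorrelation of its time series.
import numpy as np

rng = np.random.default_rng(1)
years, points = 40, 500                      # e.g., 1958-1997, gridded ice concentration
ice = rng.normal(size=(years, points))       # placeholder for observed anomalies

anom = ice - ice.mean(axis=0)                # remove the long-term mean at each point
u, s, vt = np.linalg.svd(anom, full_matrices=False)

eof1 = vt[0]                                 # spatial pattern of the dominant mode
pc1 = anom @ eof1                            # its time series (principal component)
explained = s[0] ** 2 / np.sum(s ** 2)       # fraction of variance explained

# Winter-to-winter persistence: lag-1 autocorrelation of the principal component.
lag1 = np.corrcoef(pc1[:-1], pc1[1:])[0, 1]

print(f"variance explained by mode 1: {explained:.2f}, lag-1 autocorrelation: {lag1:.2f}")
```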

Journal ArticleDOI
TL;DR: Findings demonstrate significant alterations in NGF, BDNF, and NT-3 protein levels in several brain regions as a result of an enriched versus an isolated environment and thus provide a possible biochemical basis for behavioral and morphological alterations that have been found to occur with a shifting environmental stimulus.

Journal ArticleDOI
TL;DR: In vitro analysis of bovine and ovine chondrocytes encapsulated in a poly(ethylene oxide)-dimethacrylate and poly(ethylene glycol) semi-interpenetrating network using a photopolymerization process suggests the feasibility of photoencapsulation for tissue engineering and drug delivery purposes.
Abstract: A photopolymerizing hydrogel system provides an efficient method to encapsulate cells. The present work describes the in vitro analysis of bovine and ovine chondrocytes encapsulated in a poly(ethylene oxide)-dimethacrylate and poly(ethylene glycol) semi-interpenetrating network using a photopolymerization process. One day after encapsulation, (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl-2H-tetrazolium bromide) (MTT) and light microscopy showed chondrocyte survival and a dispersed cell population composed of ovoid and elongated cells. Biochemical analysis demonstrated proteoglycan and collagen contents that increased over 2 weeks of static incubation. Cell content of the gels initially decreased and stabilized. Biomechanical analysis demonstrated the presence of a functional extracellular matrix with equilibrium moduli, dynamic stiffness, and streaming potentials that increased with time. These findings suggest the feasibility of photoencapsulation for tissue engineering and drug delivery purposes.