
Showing papers by "University of Colorado Boulder published in 2001"


Journal ArticleDOI
TL;DR: The findings suggest that the currently used equation for predicting maximal heart rate in older adults underestimates HRmax, which would have the effect of underestimating the true level of physical stress imposed during exercise testing and the appropriate intensity of prescribed exercise programs.
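The listing does not reproduce the equations themselves, so the short sketch below only illustrates the underlying point: a rule with a shallower age slope predicts higher maxima for older adults than the traditional rule of thumb, which is why the older rule understates exercise intensity. Both formulas used here (220 - age and 208 - 0.7 × age) are assumptions added for illustration, not values quoted from this summary.

```python
# Hedged illustration of the point above: a rule with a shallower age slope
# predicts higher maximal heart rates for older adults than the traditional
# 220 - age rule, so the traditional rule understates exercise intensity
# targets. Both formulas are assumptions for illustration, not quoted from
# this summary.

def hrmax_traditional(age):
    return 220.0 - age                      # widely used rule of thumb

def hrmax_shallow_slope(age, intercept=208.0, slope=0.7):
    return intercept - slope * age          # illustrative regression-style alternative

for age in (30, 50, 70):
    print(age, hrmax_traditional(age), hrmax_shallow_slope(age))
# At age 70: 150 vs 159 beats/min, i.e. the traditional rule predicts a lower
# maximum and would understate prescribed exercise intensity.
```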

2,924 citations


Journal ArticleDOI
TL;DR: SIENA, an event notification service that is designed and implemented to exhibit both expressiveness and scalability, is presented and the service's interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used to optimize performance are described.
Abstract: The components of a loosely coupled system are typically designed to operate by generating and responding to asynchronous events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems, whereby generators of events publish event notifications to the infrastructure and consumers of events subscribe with the infrastructure to receive relevant notifications. The two primary services that should be provided to components by the infrastructure are notification selection (i.e., determining which notifications match which subscriptions) and notification delivery (i.e., routing matching notifications from publishers to subscribers). Numerous event notification services have been developed for local-area networks, generally based on a centralized server to select and deliver event notifications. Therefore, they suffer from an inherent inability to scale to wide-area networks, such as the Internet, where the number and physical distribution of the service's clients can quickly overwhelm a centralized solution. The critical challenge in the setting of a wide-area network is to maximize the expressiveness in the selection mechanism without sacrificing scalability in the delivery mechanism. This paper presents SIENA, an event notification service that we have designed and implemented to exhibit both expressiveness and scalability. We describe the service's interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used to optimize performance. We also present results of simulation studies that examine the scalability and performance of the service.
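To make the two core services concrete, the sketch below shows content-based subscription matching and delivery in a single in-process broker. The class names and data structures are illustrative assumptions, not SIENA's actual API, and the single broker stands in for the distributed network of servers that gives SIENA its wide-area scalability.

```python
# Minimal sketch of content-based publish/subscribe matching, i.e. the
# "notification selection" and "notification delivery" services described
# above. Names and structures are illustrative assumptions, not SIENA's API.

from dataclasses import dataclass, field
from typing import Any, Callable

Notification = dict[str, Any]          # a notification is a set of named attributes
Constraint = Callable[[Any], bool]     # a subscription constrains attribute values

@dataclass
class Subscription:
    subscriber: str
    constraints: dict[str, Constraint] = field(default_factory=dict)

    def matches(self, n: Notification) -> bool:
        # Every constrained attribute must be present and satisfy its predicate.
        return all(attr in n and pred(n[attr]) for attr, pred in self.constraints.items())

class Broker:
    def __init__(self) -> None:
        self.subscriptions: list[Subscription] = []

    def subscribe(self, sub: Subscription) -> None:
        self.subscriptions.append(sub)

    def publish(self, n: Notification) -> list[str]:
        # Delivery: route the notification to every subscriber whose selection matches.
        return [s.subscriber for s in self.subscriptions if s.matches(n)]

broker = Broker()
broker.subscribe(Subscription("alice", {"type": lambda v: v == "stock", "price": lambda v: v > 100}))
broker.subscribe(Subscription("bob", {"type": lambda v: v == "stock"}))
print(broker.publish({"type": "stock", "symbol": "XYZ", "price": 120}))   # ['alice', 'bob']
```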

1,568 citations


Journal ArticleDOI
01 Jun 2001-Appetite
TL;DR: Four of the seven factors measuring parental beliefs related to child's obesity proneness were related to an independent measure of children's weight status, providing initial support for the validity of the instrument.

1,480 citations


Journal ArticleDOI
TL;DR: In this article, the authors conclude that over half of accessible fresh runoff globally is already appropriated for human use, and that more than 1 × 10^9 people currently lack access to clean drinking water and almost 3 × 10^9 people lack basic sanitation services, and because the human population will grow faster than increases in the amount of available fresh water, per capita availability of fresh water will decrease in the coming century.
Abstract: Renewable fresh water comprises a tiny fraction of the global water pool but is the foundation for life in terrestrial and freshwater ecosystems. The benefits to humans of renewable fresh water include water for drinking, irrigation, and industrial uses, for production of fish and waterfowl, and for such instream uses as recreation, transportation, and waste disposal. In the coming century, climate change and a growing imbalance among freshwater supply, consumption, and population will alter the water cycle dramatically. Many regions of the world are already limited by the amount and quality of available water. In the next 30 yr alone, accessible runoff is unlikely to increase more than 10%, but the earth's population is projected to rise by approximately one-third. Unless the efficiency of water use rises, this imbalance will reduce freshwater ecosystem services, increase the number of aquatic species facing extinction, and further fragment wetlands, rivers, deltas, and estuaries. Based on the scientific evidence currently available, we conclude that: (1) over half of accessible freshwater runoff globally is already appropriated for human use; (2) more than 1 × 10^9 people currently lack access to clean drinking water and almost 3 × 10^9 people lack basic sanitation services; (3) because the human population will grow faster than increases in the amount of accessible fresh water, per capita availability of fresh water will decrease in the coming century; (4) climate change will cause a general intensification of the earth's hydrological cycle in the next 100 yr, with generally increased precipitation, evapotranspiration, and occurrence of storms, and significant changes in biogeochemical processes influencing water quality; (5) at least 90% of total water discharge from U.S. rivers is strongly affected by channel fragmentation from dams, reservoirs, interbasin diversions, and irrigation; and (6) globally, 20% of freshwater fish species are threatened or extinct, and freshwater species make up 47% of all animals federally endangered in the United States. The growing demands on freshwater resources create an urgent need to link research with improved water management. Better monitoring, assessment, and forecasting of water resources will help to allocate water more efficiently among competing needs. Currently in the United States, at least six federal departments and 20 agencies share responsibilities for various aspects of the hydrologic cycle. Coordination by a single panel with members drawn from each department, or by a central agency, would acknowledge the diverse pressures on freshwater systems and could lead to the development of a well-coordinated national plan.

1,184 citations


Journal ArticleDOI
TL;DR: In this article, a new record of Holocene isotope variations obtained from the NorthGRIP ice-core matches the GRIP short-term isotope record, and also shows similar long-term trends to the Dye-3 and GRIP inverted temperature data.
Abstract: Oxygen isotope variations spanning the last glacial cycle and the Holocene derived from ice-core records for six sites in Greenland (Camp Century, Dye-3, GRIP, GISP2, Renland and NorthGRIP) show strong similarities. This suggests that the dominant influence on oxygen isotope variations reflected in the ice-sheet records was regional climatic change. Differences in detail between the records probably reflect the effects of basal deformation in the ice as well as geographical gradients in atmospheric isotope ratios. Palaeotemperature estimates have been obtained from the records using three approaches: (i) inferences based on the measured relationship between mean annual δ18O of snow and of mean annual surface temperature over Greenland; (ii) modelled inversion of the borehole temperature profile constrained either by the dated isotopic profile, or (iii) by using Monte Carlo simulation techniques. The third of these approaches was adopted to reconstruct Holocene temperature variations for the Dye 3 and GRIP temperature profiles, which yields remarkably compatible results. A new record of Holocene isotope variations obtained from the NorthGRIP ice-core matches the GRIP short-term isotope record, and also shows similar long-term trends to the Dye-3 and GRIP inverted temperature data. The NorthGRIP isotope record reflects: (i) a generally stronger isotopic signal than is found in the GRIP record; (ii) several short-lived temperature fluctuations during the first 1500 yr of the Holocene; (iii) a marked cold event at ca. 8.2 ka (the ‘8.2 ka event’); (iv) optimum temperatures for the Holocene between ca. 8.6 and 4.3 ka, a signal that is 0.6‰ stronger than for the GRIP profile; (v) a clear signal for the Little Ice Age; and (vi) a clear signal of climate warming during the last century. These data suggest that the NorthGRIP stable isotope record responded in a sensitive manner to temperature fluctuations during the Holocene. Copyright © 2001 John Wiley & Sons, Ltd.

1,041 citations


Journal ArticleDOI
TL;DR: Taken together, these findings suggest a new, dramatically different approach to pain control, as all clinical therapies are focused exclusively on altering neuronal, rather than glial, function.

1,028 citations


Journal ArticleDOI
19 Oct 2001-Science
TL;DR: Global Positioning System (GPS) measurements in China indicate that crustal shortening accommodates most of India's penetration into Eurasia, but the Tibetan plateau south of the Kunlun and Ganzi-Mani faults is moving eastward relative to both India and Eurasia.
Abstract: Global Positioning System (GPS) measurements in China indicate that crustal shortening accommodates most of India's penetration into Eurasia. Deformation within the Tibetan Plateau and its margins, the Himalaya, the Altyn Tagh, and the Qilian Shan, absorbs more than 90% of the relative motion between the Indian and Eurasian plates. Internal shortening of the Tibetan plateau itself accounts for more than one-third of the total convergence. However, the Tibetan plateau south of the Kunlun and Ganzi-Mani faults is moving eastward relative to both India and Eurasia. This movement is accommodated through rotation of material around the eastern Syntaxis. The North China and South China blocks, east of the Tibetan Plateau, move coherently east-southeastward at rates of 2 to 8 millimeters per year and 6 to 11 millimeters per year, respectively, with respect to the stable Eurasia.

1,019 citations


Journal ArticleDOI
TL;DR: This paper examined the relationships among visuospatial working memory (WM), executive functioning, and spatial abilities and found that WM tasks equally implicate executive functioning, and are not clearly distinguishable.
Abstract: This study examined the relationships among visuospatial working memory (WM), executive functioning, and spatial abilities. One hundred sixty-seven participants performed visuospatial short-term memory (STM) and WM span tasks, executive functioning tasks, and a set of paper-and-pencil tests of spatial abilities that load on 3 correlated but distinguishable factors (Spatial Visualization, Spatial Relations, and Perceptual Speed). Confirmatory factor analysis results indicated that, in the visuospatial domain, processing-and-storage WM tasks and storage-oriented STM tasks equally implicate executive functioning and are not clearly distinguishable. These results provide a contrast with existing evidence from the verbal domain and support the proposal that the visuospatial sketchpad may be closely tied to the central executive. Further, structural equation modeling results supported the prediction that, whereas they all implicate some degree of visuospatial storage, the 3 spatial ability factors differ in the degree of executive involvement (highest for Spatial Visualization and lowest for Perceptual Speed). Such results highlight the usefulness of a WM perspective in characterizing the nature of cognitive abilities and, more generally, human intelligence.

996 citations


Journal ArticleDOI
Abstract: The Thermal Emission Spectrometer (TES) investigation on Mars Global Surveyor (MGS) is aimed at determining (1) the composition of surface minerals, rocks, and ices; (2) the temperature and dynamics of the atmosphere; (3) the properties of the atmospheric aerosols and clouds; (4) the nature of the polar regions; and (5) the thermophysical properties of the surface materials. These objectives are met using an infrared (5.8- to 50-μm) interferometric spectrometer, along with broadband thermal (5.1- to 150-μm) and visible/near-IR (0.3- to 2.9-μm) radiometers. The MGS TES instrument weighs 14.47 kg, consumes 10.6 W when operating, and is 23.6×35.5×40.0 cm in size. The TES data are calibrated to a 1-σ precision of 2.5–6 × 10^−8 W cm^−2 sr^−1/cm^−1, 1.6 × 10^−6 W cm^−2 sr^−1, and ∼0.5 K in the spectrometer, visible/near-IR bolometer, and IR bolometer, respectively. These instrument subsections are calibrated to an absolute accuracy of ∼4 × 10^−8 W cm^−2 sr^−1/cm^−1 (0.5 K at 280 K), 1–2%, and ∼1–2 K, respectively. Global mapping of surface mineralogy at a spatial resolution of 3 km has shown the following: (1) The mineralogic composition of dark regions varies from basaltic, primarily plagioclase feldspar and clinopyroxene, in the ancient, southern highlands to andesitic, dominated by plagioclase feldspar and volcanic glass, in the younger northern plains. (2) Aqueous mineralization has produced gray, crystalline hematite in limited regions under ambient or hydrothermal conditions; these deposits are interpreted to be in-place sedimentary rock formations and indicate that liquid water was stable near the surface for a long period of time. (3) There is no evidence for large-scale (tens of kilometers) occurrences of moderate-grained (>50-μm) carbonates exposed at the surface at a detection limit of ∼10%. (4) Unweathered volcanic minerals dominate the spectral properties of dark regions, and weathering products, such as clays, have not been observed anywhere above a detection limit of ∼10%; this lack of evidence for chemical weathering indicates a geologic history dominated by a cold, dry climate in which mechanical, rather than chemical, weathering was the significant form of erosion and sediment production. (5) There is no conclusive evidence for sulfate minerals at a detection limit of ∼15%. The polar region has been studied with the following major conclusions: (1) Condensed CO2 has three distinct end-members, from fine-grained crystals to slab ice. (2) The growth and retreat of the polar caps observed by MGS is virtually the same as observed by Viking 12 Martian years ago. (3) Unique regions have been identified that appear to differ primarily in the grain size of CO2; one south polar region appears to remain as black slab CO2 ice throughout its sublimation. (4) Regional atmospheric dust is common in localized and regional dust storms around the margin and interior of the southern cap. Analysis of the thermophysical properties of the surface shows that (1) the spatial pattern of albedo has changed since Viking observations, (2) a unique cluster of surface materials with intermediate inertia and albedo occurs that is distinct from the previously identified low-inertia/bright and high-inertia/dark surfaces, and (3) localized patches of high-inertia material have been found in topographic lows and may have been formed by a unique set of aeolian, fluvial, or erosional processes or may be exposed bedrock.

975 citations


Journal ArticleDOI
TL;DR: Although definitions of human security vary, most formulations emphasize the welfare of ordinary people.
Abstract: Human security is the latest in a long line of neologisms—including common security, global security, cooperative security, and comprehensive security—that encourage policymakers and scholars to think about international security as something more than the military defense of state interests and territory. Although definitions of human security vary, most formulations emphasize the welfare of ordinary people. Among the most vocal promoters of human security are the governments of Canada and Norway, which have taken the lead in establishing a “human security network” of states and nongovernmental organizations (NGOs) that endorse the concept.1 The term has also begun to appear in academic works,2 and is the subject of new research projects at several major universities.3

948 citations


Posted Content
TL;DR: Results suggest that although affective experience may typically be bipolar, the underlying processes, and occasionally the resulting experience of emotion, are better characterized as bivariate.
Abstract: The authors investigated whether people can feel happy and sad at the same time. J. A. Russell and J. M. Carroll's (1999) circumplex model holds that happiness and sadness are polar opposites and, thus, mutually exclusive. In contrast, the evaluative space model (J. T. Cacioppo & G. G. Berntson, 1994) proposes that positive and negative affect are separable and that mixed feelings of happiness and sadness can co-occur. The authors both replicated and extended past research by showing that whereas most participants surveyed in typical situations felt either happy or sad, many participants surveyed immediately after watching the film Life Is Beautiful, moving out of their dormitories, or graduating from college felt both happy and sad. Results suggest that although affective experience may typically be bipolar, the underlying processes, and occasionally the resulting experience of emotion, are better characterized as bivariate.

Journal ArticleDOI
19 Apr 2001-Nature
TL;DR: It is suggested that climate affected erosion mainly by the transition from a period of climate stability, in which landscapes had attained equilibrium configurations, to a time of frequent and abrupt changes in temperature, precipitation and vegetation, which prevented fluvial and glacial systems from establishing equilibrium states.
Abstract: Around the globe, and in a variety of settings including active and inactive mountain belts, increases in sedimentation rates as well as in grain sizes of sediments were recorded at approximately 2-4 Myr ago, implying increased erosion rates. A change in climate represents the only process that is globally synchronous and can potentially account for the widespread increase in erosion and sedimentation, but no single process-like a lowering of sea levels or expanded glaciation-can explain increases in sedimentation in all environments, encompassing continental margins and interiors, and tropical as well as higher latitudes. We suggest that climate affected erosion mainly by the transition from a period of climate stability, in which landscapes had attained equilibrium configurations, to a time of frequent and abrupt changes in temperature, precipitation and vegetation, which prevented fluvial and glacial systems from establishing equilibrium states.

Journal ArticleDOI
TL;DR: This framework suggests that tasks involving rapid, incidental conjunctive learning are better tests of hippocampal function, and is implemented in a computational neural network model that can account for a wide range of data in animal learning.
Abstract: The authors present a theoretical framework for understanding the roles of the hippocampus and neocortex in learning and memory. This framework incorporates a theme found in many theories of hippocampal function: that the hippocampus is responsible for developing conjunctive representations binding together stimulus elements into a unitary representation that can later be recalled from partial input cues. This idea is contradicted by the fact that hippocampally lesioned rats can learn nonlinear discrimination problems that require conjunctive representations. The authors' framework accommodates this finding by establishing a principled division of labor, where the cortex is responsible for slow learning that integrates over multiple experiences to extract generalities whereas the hippocampus performs rapid learning of the arbitrary contents of individual experiences. This framework suggests that tasks involving rapid, incidental conjunctive learning are better tests of hippocampal function. The authors implement this framework in a computational neural network model and show that it can account for a wide range of data in animal learning.
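The proposed division of labor (slow, integrative cortical learning versus rapid, episodic hippocampal learning) can be caricatured in a few lines of code. The sketch below assumes a toy task and arbitrary learning rates; it is not the authors' network model, only an illustration of why a small learning rate extracts regularities while one-shot storage preserves individual experiences.

```python
# A minimal sketch of the proposed division of labor, assuming a toy task:
# a "cortical" learner with a small learning rate slowly integrates over many
# experiences to extract the shared regularity, while a "hippocampal" store
# keeps the arbitrary contents of each individual experience after a single
# exposure. Learning rates, pattern sizes, and the task are assumptions.

import numpy as np

rng = np.random.default_rng(0)
regularity = np.array([1.0, -0.5, 2.0])     # structure shared across experiences

cortex = np.zeros(3)                        # slow, integrative weights
episodes = []                               # fast, one-shot episodic store
LR_CORTEX = 0.01                            # small learning rate -> generalities emerge slowly

for _ in range(500):
    experience = regularity + rng.normal(0.0, 1.0, size=3)   # one noisy experience
    cortex += LR_CORTEX * (experience - cortex)               # slow incremental update
    episodes.append(experience.copy())                        # rapid conjunctive storage

print("cortical estimate of the regularity:", np.round(cortex, 2))
print("hippocampal recall of the last episode:", np.round(episodes[-1], 2))
```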

Journal ArticleDOI
TL;DR: It is suggested that changes in the subcellular localization of DAF-16 by environmental cues allow for rapid reallocation of resources in response to a changing environment at all stages of life.

Journal ArticleDOI
TL;DR: In this model, the frontal cortex exhibits robust active maintenance, whereas the basal ganglia contribute a selective, dynamic gating function that enables frontal memory representations to be rapidly updated in a task-relevant manner.
Abstract: The frontal cortex and the basal ganglia interact via a relatively well understood and elaborate system of interconnections. In the context of motor function, these interconnections can be understood as disinhibiting, or “releasing the brakes,” on frontal motor action plans: The basal ganglia detect appropriate contexts for performing motor actions and enable the frontal cortex to execute such actions at the appropriate time. We build on this idea in the domain of working memory through the use of computational neural network models of this circuit. In our model, the frontal cortex exhibits robust active maintenance, whereas the basal ganglia contribute a selective, dynamic gating function that enables frontal memory representations to be rapidly updated in a task-relevant manner. We apply the model to a novel version of the continuous performance task that requires subroutine-like selective working memory updating and compare and contrast our model with other existing models and theories of frontal-cortex-basal-ganglia interactions.
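The selective-gating idea can be reduced to a very small sketch: memory slots hold their contents unless a "Go" signal opens the corresponding gate, in which case that slot alone is overwritten by the current input. The slot/gate representation below is an illustrative assumption, not the authors' neural network model.

```python
# Minimal sketch of selective gating of working memory, under assumed toy
# dynamics: slots maintain their contents robustly unless a basal-ganglia
# "Go" signal opens the gate, at which point only that slot is rapidly
# overwritten by the current input. Not the authors' network model.

def update_memory(memory, inputs, gate_open):
    """Overwrite a slot with its input only where the gate is open; otherwise maintain it."""
    return [x if g else m for m, x, g in zip(memory, inputs, gate_open)]

memory = ["A", "B", "C"]                                   # robust active maintenance
memory = update_memory(memory, ["X", "Y", "Z"], [False, True, False])
print(memory)  # ['A', 'Y', 'C'] -- only the gated slot was selectively updated
```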

Journal ArticleDOI
TL;DR: This tutorial article reviews partitioned analysis procedures for coupled dynamical systems, including the partitioned solution approach for multilevel decomposition aimed at massively parallel computation.

Journal ArticleDOI
TL;DR: The objectives, progress, and unfulfilled hopes that have occurred over the last ten years are reviewed, and some interesting computational environments and their underlying conceptual frameworks are illustrated.
Abstract: A fundamental objective of human–computer interaction research is to make systems more usable, more useful, and to provide users with experiences fitting their specific background knowledge and objectives. The challenge in an information-rich world is not only to make information available to people at any time, at any place, and in any form, but specifically to say the “right” thing at the “right” time in the “right” way. Designers of collaborative human–computer systems face the formidable task of writing software for millions of users (at design time) while making it work as if it were designed for each individual user (only known at use time). User modeling research has attempted to address these issues. In this article, I will first review the objectives, progress, and unfulfilled hopes that have occurred over the last ten years, and illustrate them with some interesting computational environments and their underlying conceptual frameworks. A special emphasis is given to high-functionality applications and the impact of user modeling to make them more usable, useful, and learnable. Finally, an assessment of the current state of the art followed by some future challenges is given.

Journal ArticleDOI
TL;DR: The authors argued that the simple gravity equation explains a great deal about the data on bilateral trade flows and is consistent with several theoretical models of trade, including the monopolistic-competition model and the reciprocal-dumping model with free entry.
Abstract: The simple gravity equation explains a great deal about the data on bilateral trade flows and is consistent with several theoretical models of trade. We argue that alternative theories nevertheless predict subtle differences in key parameter values, depending on whether goods are homogeneous or differentiated and whether or not there are barriers to entry. Our empirical work for differentiated goods delivers results consistent with the theoretical predictions of the monopolistic-competition model, or a reciprocal-dumping model with free entry. Homogeneous goods are described by a model with national (Armington) product differentiation or
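For readers unfamiliar with the workhorse specification being referred to, a standard textbook form of the gravity equation is sketched below; the exponents and error term are generic placeholders, not the authors' estimated specification.

```latex
% Generic gravity specification (placeholder exponents, not the paper's estimates).
% T_{ij}: bilateral trade flow; Y_i, Y_j: exporter and importer incomes; D_{ij}: distance.
\begin{equation}
  T_{ij} = A \, \frac{Y_i^{\alpha} \, Y_j^{\beta}}{D_{ij}^{\theta}}
  \qquad\Longrightarrow\qquad
  \ln T_{ij} = \ln A + \alpha \ln Y_i + \beta \ln Y_j - \theta \ln D_{ij} + \varepsilon_{ij}
\end{equation}
```

The paper's argument is that estimated values of exponents like these (the "key parameter values") should differ subtly across trade theories, which is what the empirical work tests.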

Journal ArticleDOI
11 Jan 2001-Nature
TL;DR: In situ U–Pb and oxygen isotope results for detrital zircons found within 3-Gyr-old quartzitic rocks in the Murchison District of Western Australia are consistent with the presence of a hydrosphere interacting with the crust by 4,300 Myr ago; the ∼4,300-Myr-old zircons are postulated to have formed from magmas containing a significant component of re-worked continental crust.
Abstract: Granitoid gneisses and supracrustal rocks that are 3,800–4,000 Myr old are the oldest recognized exposures of continental crust1. To obtain insight into conditions at the Earth's surface more than 4 Gyr ago requires the analysis of yet older rocks or their mineral remnants. Such an opportunity is presented by detrital zircons more than 4 Gyr old found within 3-Gyr-old quartzitic rocks in the Murchison District of Western Australia2,3. Here we report in situ U–Pb and oxygen isotope results for such zircons that place constraints on the age and composition of their sources and may therefore provide information about the nature of the Earth's early surface. We find that 3,910–4,280 Myr old zircons have oxygen isotope (δ18O) values ranging from 5.4 ± 0.6‰ to 15.0 ± 0.4‰. On the basis of these results, we postulate that the ∼4,300-Myr-old zircons formed from magmas containing a significant component of re-worked continental crust that formed in the presence of water near the Earth's surface. These data are therefore consistent with the presence of a hydrosphere interacting with the crust by 4,300 Myr ago.

Journal ArticleDOI
03 Aug 2001-Science
TL;DR: An all-optical atomic clock referenced to the 1.064-petahertz transition of a single trapped 199Hg+ ion is demonstrated and an upper limit for the fractional frequency instability of 7 × 10^−15 is measured in 1 second of averaging, a value substantially better than that of the world's best microwave atomic clocks.
Abstract: Microwave atomic clocks have been the de facto standards for precision time and frequency metrology over the past 50 years, finding widespread use in basic scientific studies, communications, and navigation. However, with its higher operating frequency, an atomic clock based on an optical transition can be much more stable. We demonstrate an all-optical atomic clock referenced to the 1.064-petahertz transition of a single trapped 199Hg+ ion. A clockwork based on a mode-locked femtosecond laser provides output pulses at a 1-gigahertz rate that are phase-coherently locked to the optical frequency. By comparison to a laser-cooled calcium optical standard, an upper limit for the fractional frequency instability of 7 × 10^−15 is measured in 1 second of averaging, a value substantially better than that of the world's best microwave atomic clocks.
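A fractional frequency instability figure such as 7 × 10^−15 at 1 second is conventionally quantified with the Allan deviation of fractional frequency samples. The sketch below shows that calculation on simulated white frequency noise; the simulated data and noise level are assumptions for illustration only, not the paper's measurement chain.

```python
# Hedged sketch of how a fractional frequency instability like 7 x 10^-15 at
# 1 s of averaging is quantified: the Allan deviation of fractional frequency
# samples at the basic averaging time. Simulated data are an assumption.

import numpy as np

def allan_deviation(y):
    """Non-overlapping Allan deviation of fractional frequency samples y
    at the basic averaging time: sigma_y^2 = <(y_{i+1} - y_i)^2> / 2."""
    return float(np.sqrt(0.5 * np.mean(np.diff(y) ** 2)))

rng = np.random.default_rng(1)
y = rng.normal(0.0, 7e-15, size=10_000)             # simulated fractional frequency deviations
print(f"sigma_y(1 s) ~ {allan_deviation(y):.1e}")   # close to 7e-15 for this noise level
```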

Journal ArticleDOI
TL;DR: A rich picture has emerged that combines elements of surfactant adsorption at interfaces and epitaxial growth with the additional complication of long-chain molecules with many degrees of freedom.
Abstract: Recent applications of various in situ techniques have dramatically improved our understanding of the self-organization process of adsorbed molecular monolayers on solid surfaces. The process involves several steps, starting with bulk solution transport and surface adsorption and continuing with the two-dimensional organization on the substrate of interest. This latter process can involve passage through one or more intermediate surface phases and can often be described using two-dimensional nucleation and growth models. A rich picture has emerged that combines elements of surfactant adsorption at interfaces and epitaxial growth with the additional complication of long-chain molecules with many degrees of freedom.

Journal ArticleDOI
TL;DR: A unified analysis of the statistical behavior of the entire class of ASDs is presented, obtaining statistically identical decompositions in which each ASD is simply decomposed into the nonadaptive matched filter, the nonadaptive cosine or t-statistic, and three other statistically independent random variables that account for the performance-degrading effects of limited training data.
Abstract: We use the theory of generalized likelihood ratio tests (GLRTs) to adapt the matched subspace detectors (MSDs) of Scharf (1991) and of Scharf and Friedlander (1994) to unknown noise covariance matrices. In so doing, we produce adaptive MSDs that may be applied to signal detection for radar, sonar, and data communication. We call the resulting detectors adaptive subspace detectors (ASDs). These include Kelly's (1987) GLRT and the adaptive cosine estimator (ACE) of Kraut and Scharf (see ibid., vol.47, p.2538-41, 1999) and of Scharf and McWhorter (see Proc. 30th Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, 1996) for scenarios in which the scaling of the test data may deviate from that of the training data. We then present a unified analysis of the statistical behavior of the entire class of ASDs, obtaining statistically identical decompositions in which each ASD is simply decomposed into the nonadaptive matched filter, the nonadaptive cosine or t-statistic, and three other statistically independent random variables that account for the performance-degrading effects of limited training data.
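As a concrete example of one detector in this class, the sketch below computes an adaptive cosine statistic for a rank-one signal subspace: the squared cosine of the angle between the whitened test snapshot and the whitened steering vector, with the noise covariance estimated from signal-free training data. Real-valued data, the dimensions, and the amplitudes are illustrative assumptions made for simplicity.

```python
# Hedged sketch of an adaptive cosine statistic for a rank-one signal
# subspace, with the noise covariance estimated from signal-free training
# data. Real-valued data are used for simplicity (the complex case uses
# conjugate transposes); dimensions and amplitudes are assumptions.

import numpy as np

def ace_statistic(x, s, training):
    """x: test snapshot (N,); s: steering vector (N,); training: (K, N) signal-free snapshots."""
    R_hat = training.T @ training / training.shape[0]   # sample covariance from training data
    Ri = np.linalg.inv(R_hat)
    num = (s @ Ri @ x) ** 2
    den = (s @ Ri @ s) * (x @ Ri @ x)
    return float(num / den)    # lies in [0, 1]; invariant to rescaling of the test data

rng = np.random.default_rng(0)
N, K = 8, 64
s = np.ones(N) / np.sqrt(N)                    # assumed steering vector
training = rng.normal(size=(K, N))             # noise-only training snapshots
x_noise = rng.normal(size=N)
x_signal = x_noise + 3.0 * s                   # same noise plus a target along s
print(ace_statistic(x_noise, s, training), ace_statistic(x_signal, s, training))
```

The invariance of this ratio to the scaling of the test data is exactly the property noted in the abstract for scenarios where test and training data scalings differ.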

Journal ArticleDOI
19 Jul 2001-Nature
TL;DR: In this article, the authors explore the dynamics of how a Bose-Einstein condensate collapses and subsequently explodes when the balance of forces governing its size and shape is suddenly altered.
Abstract: When atoms in a gas are cooled to extremely low temperatures, they will-under the appropriate conditions-condense into a single quantum-mechanical state known as a Bose-Einstein condensate. In such systems, quantum-mechanical behaviour is evident on a macroscopic scale. Here we explore the dynamics of how a Bose-Einstein condensate collapses and subsequently explodes when the balance of forces governing its size and shape is suddenly altered. A condensate's equilibrium size and shape is strongly affected by the interatomic interactions. Our ability to induce a collapse by switching the interactions from repulsive to attractive by tuning an externally applied magnetic field yields detailed information on the violent collapse process. We observe anisotropic atom bursts that explode from the condensate, atoms leaving the condensate in undetected forms, spikes appearing in the condensate wavefunction and oscillating remnant condensates that survive the collapse. All these processes have curious dependences on time, on the strength of the interaction and on the number of condensate atoms. Although the system would seem to be simple and well characterized, our measurements reveal many phenomena that challenge theoretical models.

Journal ArticleDOI
TL;DR: In this paper, the authors present a general numerical model of DOC dynamics and test the sensitivity of the model to variation in the controlling parameters to highlight both the significance of DOC fluxes to terrestrial carbon processes and the key uncertainties that require additional experiments and data.
Abstract: The movement of dissolved organic carbon (DOC) through soils is an important process for the transport of carbon within ecosystems and the formation of soil organic matter. In some cases, DOC fluxes may also contribute to the carbon balance of terrestrial ecosystems; in most ecosystems, they are an important source of energy, carbon, and nutrient transfers from terrestrial to aquatic ecosystems. Despite their importance for terrestrial and aquatic biogeochemistry, these fluxes are rarely represented in conceptual or numerical models of terrestrial biogeochemistry. In part, this is due to the lack of a comprehensive understanding of the suite of processes that control DOC dynamics in soils. In this article, we synthesize information on the geochemical and biological factors that control DOC fluxes through soils. We focus on conceptual issues and quantitative evaluations of key process rates to present a general numerical model of DOC dynamics. We then test the sensitivity of the model to variation in the controlling parameters to highlight both the significance of DOC fluxes to terrestrial carbon processes and the key uncertainties that require additional experiments and data. Simulation model results indicate the importance of representing both root carbon inputs and soluble carbon fluxes to predict the quantity and distribution of soil carbon in soil layers. For a test case in a temperate forest, DOC contributed 25% of the total soil profile carbon, whereas roots provided the remainder. The analysis also shows that physical factors—most notably, sorption dynamics and hydrology—play the dominant role in regulating DOC losses from terrestrial ecosystems but that interactions between hydrology and microbial–DOC relationships are important in regulating the fluxes of DOC in the litter and surface soil horizons. The model also indicates that DOC fluxes to deeper soil layers can support a large fraction (up to 30%) of microbial activity below 40 cm.
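To make the sorption-and-leaching logic tangible, the sketch below passes soluble carbon down a column of layers with a linear sorption isotherm and a hydrologic leaching fraction. Every parameter value and the layer structure are assumptions for illustration; this is a caricature in the spirit of the model described above, not the authors' calibrated model.

```python
# Deliberately simplified sketch: each soil layer receives soluble carbon
# from above, partitions it between sorbed and dissolved pools with a linear
# isotherm, and leaches part of the dissolved pool downward with the water
# flux. All parameter values are illustrative assumptions.

N_LAYERS = 5
PRODUCTION = [2.0, 1.0, 0.5, 0.2, 0.1]   # soluble C produced in each layer (arbitrary units per year)
SORBED_FRACTION = 0.7                    # fraction retained by sorption to the mineral phase
LEACH_FRACTION = 0.5                     # fraction of the dissolved pool exported downward per year

def annual_doc_fluxes():
    flux_in = 0.0
    fluxes = []
    for layer in range(N_LAYERS):
        soluble = flux_in + PRODUCTION[layer]
        dissolved = (1.0 - SORBED_FRACTION) * soluble   # linear sorption partitioning
        flux_out = LEACH_FRACTION * dissolved           # hydrologic export to the next layer
        fluxes.append(flux_out)
        flux_in = flux_out
    return fluxes

print(annual_doc_fluxes())   # DOC flux leaving each layer, decreasing with depth
```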

Book ChapterDOI
21 Aug 2001
TL;DR: This paper makes meta-learning in large systems feasible by using recurrent neural networks with their attendant learning routines as meta-learning systems, and shows that the approach performs non-stationary time series prediction.
Abstract: This paper introduces the application of gradient descent methods to meta-learning. The concept of "meta-learning", i.e. of a system that improves or discovers a learning algorithm, has been of interest in machine learning for decades because of its appealing applications. Previous meta-learning approaches have been based on evolutionary methods and, therefore, have been restricted to small models with few free parameters. We make meta-learning in large systems feasible by using recurrent neural networks with their attendant learning routines as meta-learning systems. Our system derived complex well-performing learning algorithms from scratch. In this paper we also show that our approach performs non-stationary time series prediction.
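The basic setup can be sketched in a few dozen lines: a recurrent network is trained by ordinary gradient descent across many tasks, and because it sees the previous target alongside each input, its fixed trained weights must implement a learning algorithm that adapts within a single sequence. The task family (random linear maps), sizes, and hyperparameters below are assumptions for illustration, not the authors' experiments.

```python
# Hedged sketch of meta-learning by gradient descent with a recurrent net:
# the RNN receives (x_t, y_{t-1}) at each step and must adapt to the current
# task within the sequence; meta-training across tasks shapes its weights.
# Task family, sizes, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class MetaLearner(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, T, 2) = (x_t, y_{t-1})
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)         # predictions for y_t, shape (batch, T)

def sample_task_batch(batch: int = 16, T: int = 20):
    """Each task is a random linear map y = a*x + b; a and b must be inferred within the sequence."""
    a, b = torch.randn(batch, 1), torch.randn(batch, 1)
    x = torch.rand(batch, T)
    y = a * x + b
    prev_y = torch.cat([torch.zeros(batch, 1), y[:, :-1]], dim=1)   # y_{t-1} fed back as input
    return torch.stack([x, prev_y], dim=-1), y

model = MetaLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):                        # meta-training across many sampled tasks
    inputs, targets = sample_task_batch()
    loss = nn.functional.mse_loss(model(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 250 == 0:
        print(step, float(loss))                # loss falls as the network "learns to learn"
```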

Journal ArticleDOI
TL;DR: In this paper, the authors report evidence on the effectiveness of the Balanced Scorecard (BSC) as a strategy communication and management control device and provide a model of communication and control applicable to the BSC.
Abstract: This paper reports evidence on the effectiveness of the Balanced Scorecard (BSC) as a strategy communication and management-control device. This study first reviews communication and management control literatures that identify attributes of effective communication and control of strategy. Second, the study offers a model of communication and control applicable to the BSC. The study then analyzes empirical interview and archival data to model the use and assess the communication and control effectiveness of the BSC. The study includes data from multiple divisions of a large, international manufacturing company. Data are from BSC designers, administrators, and North American managers whose divisions are objects of the BSC. The study accumulates evidence regarding the challenges of designing and implementing the BSC faced by even a large, well-funded company. These findings may be generalizable to other companies adopting or considering adopting the BSC as a strategic and management control device. Data in...

Journal ArticleDOI
TL;DR: This paper presents a dual–primal formulation of the FETI-2 concept that eliminates the need for that second set of Lagrange multipliers, and unifies all previously developed one-level and two-level FETI algorithms into a single dual–primal FETI-DP method.
Abstract: The FETI method and its two-level extension (FETI-2) are two numerically scalable domain decomposition methods with Lagrange multipliers for the iterative solution of second-order solid mechanics and fourth-order beam, plate and shell structural problems, respectively. The FETI-2 method distinguishes itself from the basic or one-level FETI method by a second set of Lagrange multipliers that are introduced at the subdomain cross-points to enforce at each iteration the exact continuity of a subset of the displacement field at these specific locations. In this paper, we present a dual–primal formulation of the FETI-2 concept that eliminates the need for that second set of Lagrange multipliers, and unifies all previously developed one-level and two-level FETI algorithms into a single dual–primal FETI-DP method. We show that this new FETI-DP method is numerically scalable for both second-order and fourth-order problems. We also show that it is more robust and more computationally efficient than existing FETI solvers, particularly when the number of subdomains and/or processors is very large. Copyright © 2001 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Brown dwarfs are assumed to be stellar embryos for which the star formation process was aborted before the hydrostatic cores could build up enough mass to eventually start hydrogen burning as discussed by the authors, which explains the rarity of brown dwarfs as close companions to normal stars, the absence of wide brown dwarf binaries, and the flattening of the low mass end of the initial mass function.
Abstract: We conjecture that brown dwarfs are substellar objects because they have been ejected from small newborn multiple systems that have decayed in dynamical interactions. In this view, brown dwarfs are stellar embryos for which the star formation process was aborted before the hydrostatic cores could build up enough mass to eventually start hydrogen burning. The disintegration of a small multiple system is a stochastic process, which can be described only in terms of the half-life of the decay. A stellar embryo competes with its siblings in order to accrete infalling matter, and the one that grows slowest is most likely to be ejected. With better luck, a brown dwarf would therefore have become a normal star. This interpretation of brown dwarfs readily explains the rarity of brown dwarfs as close companions to normal stars, the absence of wide brown dwarf binaries, and the flattening of the low-mass end of the initial mass function. Possible observational tests of this scenario include statistics of brown dwarfs near Class 0 sources and the kinematics of brown dwarfs in star-forming regions, while they still retain a kinematic signature of their expulsion. Because the ejection process limits the amount of gas brought along in a disk, it is predicted that substellar equivalents to the classical T Tauri stars should be rather short-lived.

Journal ArticleDOI
TL;DR: In this paper, the mass-ejection history of the newly born driving sources and their mass-accretion history is reconstructed using the Herbig-Haro (HH) objects.
Abstract: Outflow activity is associated with all stages of early stellar evolution, from deeply embedded protostellar objects to visible young stars. Herbig-Haro (HH) objects are the optical manifestations of this powerful mass loss. Analysis of HH flows, and in particular of the subset of highly collimated HH jets, provides indirect but important insights into the nature of the accretion and mass-loss processes that govern the formation of stars. The recent recognition that HH flows may attain parsec-scale dimensions opens up the possibility of partially reconstructing the mass-ejection history of the newly born driving sources and, therefore, their mass-accretion history. Furthermore, HH flows are astrophysical laboratories for the analysis of shock structures, of hydrodynamics in collimated flows, and of their interaction with the surrounding environment. HH flows may be an important source of turbulence in molecular clouds. Recent technological developments have enabled detailed observations of outf...

Journal ArticleDOI
TL;DR: The selection of preferred step width in human walking is studied by measuring mechanical and metabolic costs as a function of experimentally manipulated step width and humans appear to prefer a step width that minimizes metabolic cost.
Abstract: We studied the selection of preferred step width in human walking by measuring mechanical and metabolic costs as a function of experimentally manipulated step width (0.00–0.45L, as a fraction of leg length L). We estimated mechanical costs from individual limb external mechanical work and metabolic costs using open circuit respirometry. The mechanical and metabolic costs both increased substantially (54 and 45%, respectively) for widths greater than the preferred value (0.15–0.45L) and with step width squared (R^2 = 0.91 and 0.83, respectively). As predicted by a three-dimensional model of walking mechanics, the increases in these costs appear to be a result of the mechanical work required for redirecting the centre of mass velocity during the transition between single stance phases (step-to-step transition costs). The metabolic cost for steps narrower than preferred (0.10–0.00L) increased by 8%, which was probably as a result of the added cost of moving the swing leg laterally in order to avoid the stance leg (lateral limb swing cost). Trade-offs between the step-to-step transition and lateral limb swing costs resulted in a minimum metabolic cost at a step width of 0.12L, which is not significantly different from foot width (0.11L) or the preferred step width (0.13L). Humans appear to prefer a step width that minimizes metabolic cost.
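The trade-off the abstract describes (a transition cost rising with step width squared versus a lateral limb-swing penalty for narrow steps) produces a cost minimum at an intermediate width. The sketch below reproduces only the qualitative shape; the coefficients are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch of the cost trade-off: a step-to-step transition cost that
# grows with the square of step width plus a lateral limb-swing cost that
# grows as steps become narrower than the feet allow. Coefficients are
# illustrative assumptions chosen only to reproduce the qualitative shape.

import numpy as np

widths = np.linspace(0.0, 0.45, 451)                    # step width as a fraction of leg length L
transition_cost = 3.0 * widths ** 2                     # redirecting the centre of mass between steps
limb_swing_cost = 0.8 * np.maximum(0.11 - widths, 0.0)  # swinging the leg around the stance leg
total_cost = 1.0 + transition_cost + limb_swing_cost    # net metabolic cost (arbitrary units)

print("cost-minimizing step width ~", round(float(widths[np.argmin(total_cost)]), 2), "L")
```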