Showing papers by "University of Maryland, College Park" published in 2006


Journal ArticleDOI
TL;DR: In this article, the authors address four central but still ambiguous questions related to exploration and exploitation in organizational adaptation research.
Abstract: Exploration and exploitation have emerged as the twin concepts underpinning organizational adaptation research, yet some central issues related to them remain ambiguous. We address four related que...

2,832 citations


Journal ArticleDOI
TL;DR: In this article, eleven coupled climate-carbon cycle models were used to study the coupling between climate change and the carbon cycle; all models simulated a negative sensitivity of the land and ocean carbon cycle to future climate, but there was still large uncertainty in the magnitude of these sensitivities.
Abstract: Eleven coupled climate–carbon cycle models used a common protocol to study the coupling between climate change and the carbon cycle. The models were forced by historical emissions and the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) A2 anthropogenic emissions of CO2 for the 1850–2100 time period. For each model, two simulations were performed in order to isolate the impact of climate change on the land and ocean carbon cycle, and therefore the climate feedback on the atmospheric CO2 concentration growth rate. There was unanimous agreement among the models that future climate change will reduce the efficiency of the earth system to absorb the anthropogenic carbon perturbation. A larger fraction of anthropogenic CO2 will stay airborne if climate change is accounted for. By the end of the twenty-first century, this additional CO2 varied between 20 and 200 ppm for the two extreme models, the majority of the models lying between 50 and 100 ppm. The higher CO2 levels led to an additional climate warming ranging between 0.1° and 1.5°C. All models simulated a negative sensitivity for both the land and the ocean carbon cycle to future climate. However, there was still a large uncertainty on the magnitude of these sensitivities. Eight models attributed most of the changes to the land, while three attributed it to the ocean. Also, a majority of the models located the reduction of land carbon uptake in the Tropics. However, the attribution of the land sensitivity to changes in net primary productivity versus changes in respiration is still subject to debate; no consensus emerged among the models.

2,630 citations


Journal ArticleDOI
TL;DR: The Community Climate System Model version 3 (CCSM3) as discussed by the authors is a coupled climate model with components representing the atmosphere, ocean, sea ice, and land surface connected by a flux coupler.
Abstract: The Community Climate System Model version 3 (CCSM3) has recently been developed and released to the climate community. CCSM3 is a coupled climate model with components representing the atmosphere, ocean, sea ice, and land surface connected by a flux coupler. CCSM3 is designed to produce realistic simulations over a wide range of spatial resolutions, enabling inexpensive simulations lasting several millennia or detailed studies of continental-scale dynamics, variability, and climate change. This paper will show results from the configuration used for climate-change simulations with a T85 grid for the atmosphere and land and a grid with approximately 1° resolution for the ocean and sea ice. The new system incorporates several significant improvements in the physical parameterizations. The enhancements in the model physics are designed to reduce or eliminate several systematic biases in the mean climate produced by previous editions of CCSM. These include new treatments of cloud processes, aerosol ...

2,500 citations


Proceedings Article
08 Aug 2006
TL;DR: A new, intuitive measure for evaluating machine-translation output is defined that avoids the knowledge intensiveness of more meaning-based approaches and the labor intensiveness of human judgments.
Abstract: We examine a new, intuitive measure for evaluating machine-translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Edit Rate (TER) measures the amount of editing that a human would have to perform to change a system output so it exactly matches a reference translation. We show that the single-reference variant of TER correlates as well with human judgments of MT quality as the four-reference variant of BLEU. We also define a human-targeted TER (or HTER) and show that it yields higher correlations with human judgments than BLEU—even when BLEU is given human-targeted references. Our results indicate that HTER correlates with human judgments better than HMETEOR and that the four-reference variants of TER and HTER correlate with human judgments as well as—or better than—a second human judgment does.
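To make the editing-based idea concrete, here is a minimal sketch that scores a hypothesis against a single reference by word-level edit distance normalized by reference length. This is an approximation for illustration only, not the authors' implementation: real TER additionally allows block shifts of word sequences, each counted as a single edit, which this sketch omits.

```python
# A minimal sketch of the editing-based idea behind TER: word-level edit distance
# (insertions, deletions, substitutions) divided by reference length.
# Note: real TER also allows block "shifts" of word sequences, each counted as a
# single edit; that operation is omitted here for brevity.

def simple_ter(hypothesis: str, reference: str) -> float:
    """Approximate TER: word-level Levenshtein distance / reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    # Standard dynamic-programming edit distance over words.
    dist = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        dist[i][0] = i
    for j in range(len(ref) + 1):
        dist[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(hyp)][len(ref)] / max(len(ref), 1)

# One missing word relative to a six-word reference: 1/6 ≈ 0.17
print(simple_ter("the cat sat on mat", "the cat sat on the mat"))
```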

2,210 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed and tested a theory of how human resource practices affect the organizational social climate conditions that facilitate knowledge exchange and combination and resultant knowledge creation and integration.
Abstract: In this study, we developed and tested a theory of how human resource practices affect the organizational social climate conditions that facilitate knowledge exchange and combination and resultant ...

1,778 citations


Book
01 Jan 2006
TL;DR: This book covers classical planning, neoclassical planning techniques, heuristics and control strategies, planning with time and resources, planning under uncertainty, and case studies and applications, together with appendices on search procedures and computational complexity, first-order logic, and model checking.
Abstract: 1 Introduction and Overview. Part I, Classical Planning: 2 Representations for Classical Planning; 3 Complexity of Classical Planning; 4 State-Space Planning; 5 Plan-Space Planning. Part II, Neoclassical Planning: 6 Planning-Graph Techniques; 7 Propositional Satisfiability Techniques; 8 Constraint Satisfaction Techniques. Part III, Heuristics and Control Strategies: 9 Heuristics in Planning; 10 Control Rules in Planning; 11 Hierarchical Task Network Planning; 12 Control Strategies in Deductive Planning. Part IV, Planning with Time and Resources: 13 Time for Planning; 14 Temporal Planning; 15 Planning and Resource Scheduling. Part V, Planning under Uncertainty: 16 Planning Based on Markov Decision Processes; 17 Planning Based on Model Checking; 18 Uncertainty with Neoclassical Techniques. Part VI, Case Studies and Applications: 19 Space Applications; 20 Planning in Robotics; 21 Planning for Manufacturability Analysis; 22 Emergency Evacuation Planning; 23 Planning in the Game of Bridge. Part VII, Conclusion: 24 Conclusion and Other Topics. Part VIII, Appendices: A Search Procedures and Computational Complexity; B First-Order Logic; C Model Checking.

1,612 citations


MonographDOI
TL;DR: Laub and Sampson as mentioned in this paper analyzed newly collected data on crime and social development up to age 70 for 500 men who were remanded to reform school in the 1940s and found that men who desisted from crime were rooted in structural routines and had strong social ties to family and community.
Abstract: This text analyses newly collected data on crime and social development up to age 70 for 500 men who were remanded to reform school in the 1940s. Born in Boston in the late 1920s and early 1930s, these men were the subjects of the classic study "Unraveling Juvenile Delinquency" by Sheldon and Eleanor Glueck (1950). Updating their lives at the close of the twentieth century, and connecting their adult experience to childhood, this book is arguably the longest longitudinal study of age, crime and the life course to date. John Laub and Robert Sampson's long-term data, combined with in-depth interviews, defy the conventional wisdom that links individual traits such as poor verbal skills, limited self-control and difficult temperament to long-term trajectories of offending. The authors reject the idea of categorizing offenders to reveal etiologies of offending - rather, they connect variability in behaviour to social context. They find that men who desisted from crime were rooted in structural routines and had strong social ties to family and community. By uniting life-history narratives with rigorous data analysis, the authors shed new light on long-term trajectories of crime and current policies of crime control.

1,587 citations


Journal ArticleDOI
TL;DR: In this paper, the authors summarized goal-setting theory regarding the effectiveness of specific, difficult goals; the relationship of goals to affect; the mediators of goal effects; the relation of goals to self-efficacy; the moderators of goal effects; and the generality of goal effects across people, tasks, countries, time spans, experimental designs, goal sources (i.e., self-set, set jointly with others, or assigned), and dependent variables.
Abstract: Goal-setting theory is summarized regarding the effectiveness of specific, difficult goals; the relationship of goals to affect; the mediators of goal effects; the relation of goals to self-efficacy; the moderators of goal effects; and the generality of goal effects across people, tasks, countries, time spans, experimental designs, goal sources (i.e., self-set, set jointly with others, or assigned), and dependent variables. Recent studies concerned with goal choice and the factors that influence it, the function of learning goals, the effect of goal framing, goals and affect (well-being), group goal setting, goals and traits, macro-level goal setting, and conscious versus subconscious goals are described. Suggestions are given for future research.

1,518 citations


Journal ArticleDOI
TL;DR: In this article, the authors surveyed management teams in 102 hotel properties in the United States to examine the intervening roles of knowledge sharing and team efficacy in the relationship between empowering leadership and team performance.
Abstract: We surveyed management teams in 102 hotel properties in the United States to examine the intervening roles of knowledge sharing and team efficacy in the relationship between empowering leadership and team performance. Team performance was measured through a time-lagged market-based source. Results showed that empowering leadership was positively related to both knowledge sharing and team efficacy, which, in turn, were both positively related to performance.

1,470 citations


Journal ArticleDOI
TL;DR: Metacognition and Learning, as discussed by the authors, is a new international journal dedicated to the study of metacognition and all its aspects within a broad context of learning processes, and this is its first issue.
Abstract: This is the first issue of Metacognition and Learning, a new international journal dedicated to the study of metacognition and all its aspects within a broad context of learning processes. Flavell coined the term metacognition in the seventies of the last century (Flavell, 1979) and, since then, a huge amount of research has emanated from his initial efforts. Do we need metacognition as a concept in learning theory? Already in 1978, Brown posed the question whether metacognition was an epiphenomenon. Apparently, she was convinced otherwise as she has been working fruitfully for many years in the area of metacognition. Moreover, a review study by Wang, Haertel, and Walberg (1990) revealed metacognition to be a most powerful predictor of learning. Metacognition matters, but there are many unresolved issues that need further investigation. This introduction will present ten such issues, which are by no means exhaustive. They merely indicate what themes might be relevant to the journal.

1,470 citations


Journal ArticleDOI
TL;DR: Initial comparisons with ground-based optical thickness measurements and simultaneously acquired MODIS imagery indicate comparable uncertainty in Landsat surface reflectance compared to the standard MODIS reflectance product.
Abstract: The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center has processed and released 2100 Landsat Thematic Mapper and Enhanced Thematic Mapper Plus surface reflectance scenes, providing 30-m resolution wall-to-wall reflectance coverage for North America for epochs centered on 1990 and 2000. This dataset can support decadal assessments of environmental and land-cover change, production of reflectance-based biophysical products, and applications that merge reflectance data from multiple sensors [e.g., the Advanced Spaceborne Thermal Emission and Reflection Radiometer, Multiangle Imaging Spectroradiometer, Moderate Resolution Imaging Spectroradiometer (MODIS)]. The raw imagery was obtained from the orthorectified Landsat GeoCover dataset, purchased by NASA from the Earth Satellite Corporation. Through the LEDAPS project, these data were calibrated, converted to top-of-atmosphere reflectance, and then atmospherically corrected using the MODIS/6S methodology. Initial comparisons with ground-based optical thickness measurements and simultaneously acquired MODIS imagery indicate comparable uncertainty in Landsat surface reflectance compared to the standard MODIS reflectance product (the greater of 0.5% absolute reflectance or 5% of the recorded reflectance value). The rapid automated nature of the processing stream also paves the way for routine high-level products from future Landsat sensors.
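As a concrete reading of the quoted error bound, the sketch below returns the larger of 0.005 absolute reflectance and 5% of the recorded value, assuming surface reflectance expressed on a 0-1 scale; it is illustrative only and not part of the LEDAPS processing code.

```python
# Illustrative sketch (not LEDAPS code) of the quoted uncertainty bound:
# the greater of 0.5% absolute reflectance (0.005) or 5% of the recorded value,
# assuming surface reflectance expressed on a 0-1 scale.

def reflectance_uncertainty(reflectance: float) -> float:
    """Return the quoted error bound for a surface reflectance value."""
    return max(0.005, 0.05 * reflectance)

# Hypothetical reflectance values for dark, moderate, and bright targets.
for rho in (0.02, 0.10, 0.40):
    print(f"reflectance {rho:.2f} -> uncertainty ±{reflectance_uncertainty(rho):.4f}")
```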

Posted Content
TL;DR: The authors analyze a marked change in the evolution of the U.S. wage structure over the past fifteen years: divergent trends in upper-tail (90/50) and lower-tail (50/10) wage inequality, with employment polarizing into high-wage and low-wage jobs at the expense of middle-wage work.
Abstract: This paper analyzes a marked change in the evolution of the U.S. wage structure over the past fifteen years: divergent trends in upper-tail (90/50) and lower-tail (50/10) wage inequality. We document that wage inequality in the top half of the distribution has displayed an unchecked and rather smooth secular rise for the last 25 years (since 1980). Wage inequality in the bottom half of the distribution also grew rapidly from 1979 to 1987, but it has ceased growing (and for some measures actually narrowed) since the late 1980s. Furthermore we find that occupational employment growth shifted from monotonically increasing in wages (education) in the 1980s to a pattern of more rapid growth in jobs at the top and bottom relative to the middle of the wage (education) distribution in the 1990s. We characterize these patterns as the "polarization" of the U.S. labor market, with employment polarizing into high-wage and low-wage jobs at the expense of middle-wage work. We show how a model of computerization in which computers most strongly complement the non-routine (abstract) cognitive tasks of high-wage jobs, directly substitute for the routine tasks found in many traditional middle-wage jobs, and may have little direct impact on non-routine manual tasks in relatively low-wage jobs can help explain the observed polarization of the U.S. labor market.
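The upper-tail (90/50) and lower-tail (50/10) measures are simply gaps between percentiles of the log wage distribution; the sketch below computes them on hypothetical data (not the authors' survey series) to show how the two tails can be tracked separately.

```python
# Illustrative sketch on hypothetical data: upper-tail (90/50) and lower-tail
# (50/10) wage inequality, measured as differences between log-wage percentiles.
import numpy as np

rng = np.random.default_rng(0)
log_wages = rng.normal(loc=3.0, scale=0.6, size=10_000)  # hypothetical log hourly wages

p10, p50, p90 = np.percentile(log_wages, [10, 50, 90])
print(f"90/50 log wage gap (upper tail): {p90 - p50:.3f}")
print(f"50/10 log wage gap (lower tail): {p50 - p10:.3f}")
```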

Journal ArticleDOI
TL;DR: The authors proposed that overall job attitude (job satisfaction and organizational commitment) provides increasingly powerful prediction of more integrative behavioral criteria (focal performance, contextual performance, lateness, absence, and turnover combined).
Abstract: Drawing on the compatibility principle in attitude theory, we propose that overall job attitude (job satisfaction and organizational commitment) provides increasingly powerful prediction of more integrative behavioral criteria (focal performance, contextual performance, lateness, absence, and turnover combined). The principle was sustained by a combination of meta-analysis and structural equations showing better fit of unified versus diversified models of meta-analytic correlations between those criteria. Overall job attitude strongly predicted a higher-order behavioral construct, defined as desirable contributions made to one’s work role (r = .59). Time-lagged data also supported this unified, attitude-engagement model.

Journal ArticleDOI
TL;DR: A consensus development conference was held to review the data relating to the existence of separate domains within negative symptoms, as a prerequisite for choosing appropriate measures of these domains in clinical trials and to examine issues that may interfere with treatment development.
Abstract: The impairments now called negative symptoms have long been noted as common features of schizophrenia, and the concept of negative symptoms itself has a long history.1,2 Patients who exhibit significant negative symptoms have particularly poor function and quality of life,3–8 and this aspect of schizophrenia has been proposed as a separate domain with distinctive pathophysiological and therapeutic implications since at least 1974.9 Despite the attention these problems receive, no drug has received Food and Drug Administration (FDA) approval for an indication of negative symptoms, and available data indicate that second-generation antipsychotic medications have not met early hopes for a highly effective treatment for alleviation of negative symptoms.10 Because of limited progress in the development of effective treatments for negative symptoms, under the auspices of the National Institute of Mental Health (NIMH), Drs. Steve Marder, Wayne Fenton, William T. Carpenter, Jr, and Brian Kirkpatrick initiated a process to examine issues that may interfere with treatment development. The NIMH had previously focused attention on impaired cognition as a therapeutic target with the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) project. The success of the MATRICS process suggested similar progress could be made in the area of negative symptoms and provided a possible model for proceeding in the area of negative symptoms. Marder, Fenton, Carpenter, and Kirkpatrick organized a consensus development conference, which was held at the NIMH Neuroscience Center in Rockville, Maryland, on January 26–27, 2005. Those attending are listed in the appendix. The mission statement of the meeting was: To review the data relating to the existence of separate domains within negative symptoms, as a prerequisite for choosing appropriate measures of these domains in clinical trials. To initiate a process for developing or identifying widely acceptable, evidence-based measures and methodologies needed to establish the efficacy of treatments that target negative symptoms. Prior to the meeting, the organizers asked experts to address a series of questions: What are the separate components of negative symptoms? Are they independent, or components of the same latent construct? Which aspect of each domain belongs to the negative symptom construct? Does this area need a separate assessment? What is the best assessment method for clinical trials? Since research has suggested that both negative symptoms and cognitive impairments were significant determinants of poor outcome in schizophrenia, an additional set of questions related to the relationship between these domains of psychopathology was also addressed at the conference: Which aspects of cognition are part of the negative symptom construct? Which are independent? Which are uncertain? Articles that more fully address the topics of these presentations can be found in this issue of Schizophrenia Bulletin. 
Those articles address regulatory issues and negative symptoms,11 negative symptoms as a therapeutic target,12 the factor structure of negative symptoms,13 restricted affect,14 anhedonia,15 and the relationship between negative symptoms and cognitive impairment.16 At the conference other presentations were also made: Wayne Fenton spoke on “Meeting Goals and Objectives: The NIMH Perspective,” Robert Buchanan on “Summary of the MATRICS Process,” William Carpenter, Jr, on “Study Design and the ‘Pseudospecificity’ Problem,” Michael Green on “Social Cognition,” Nancy Andreasen on “Alogia,” and Jeffrey Cummings on “Apathy.”

Journal ArticleDOI
TL;DR: In this paper, a new global distribution map of tropical dry forests derived from the recently developed MODIS Vegetation Continuous Fields (VCF) product was presented, which depicts percentage tree cover at a resolution of 500 m, combined with previously defined maps of biomes.
Abstract: Aim To analyse the conservation status of tropical dry forests at the global scale, by combining a newly developed global distribution map with spatial data describing different threats, and to identify the relative exposure of different forest areas to such threats. Location Global assessment. Methods We present a new global distribution map of tropical dry forest derived from the recently developed MODIS Vegetation Continuous Fields (VCF) product, which depicts percentage tree cover at a resolution of 500 m, combined with previously defined maps of biomes. This distribution map was overlaid with spatial data to estimate the exposure of tropical dry forests to a number of different threats: climate change, habitat fragmentation, fire, human population density and conversion to cropland. The extent of tropical dry forest currently protected was estimated by overlaying the forest map with a global data set of the distribution of protected areas. Results It is estimated that 1,048,700 km² of tropical dry forest remains, distributed throughout the three tropical regions. More than half of the forest area (54.2%) is located within South America, the remaining area being almost equally divided between North and Central America, Africa and Eurasia, with a relatively small proportion (3.8%) occurring within Australasia and Southeast Asia. Overall, c. 97% of the remaining area of tropical dry forest is at risk from one or more of the threats considered, with highest percentages recorded for Eurasia. The relative exposure to different threats differed between regions: while climate change is relatively significant in the Americas, habitat fragmentation and fire affect a higher proportion of African forests, whereas agricultural conversion and human population density are most influential in Eurasia. Evidence suggests that c. 300,000 km² of tropical dry forest now coincide with some form of protected area, with 71.8% of this total being located within South America. Main conclusions Virtually all of the tropical dry forests that remain are currently exposed to a variety of different threats, largely resulting from human activity. Taking their high biodiversity value into consideration, this indicates that tropical dry forests should be accorded high conservation priority. The results presented here could be used to identify which forest areas should be accorded highest priority for conservation action. In particular, the expansion of the global protected area network, particularly in Mesoamerica, should be given urgent consideration.

Journal ArticleDOI
10 Nov 2006-Science
TL;DR: The sequence and analysis of the 814-megabase genome of the sea urchin Strongylocentrotus purpuratus is reported, a model for developmental and systems biology and yields insights into the evolution of deuterostomes.
Abstract: We report the sequence and analysis of the 814-megabase genome of the sea urchin Strongylocentrotus purpuratus, a model for developmental and systems biology. The sequencing strategy combined whole-genome shotgun and bacterial artificial chromosome (BAC) sequences. This use of BAC clones, aided by a pooling strategy, overcame difficulties associated with high heterozygosity of the genome. The genome encodes about 23,300 genes, including many previously thought to be vertebrate innovations or known only outside the deuterostomes. This echinoderm genome provides an evolutionary outgroup for the chordates and yields insights into the evolution of deuterostomes.

Journal ArticleDOI
TL;DR: The scope of the thresholds concept in ecological science is defined, and methods for identifying and investigating thresholds are discussed using a variety of examples from terrestrial and aquatic environments at ecosystem, landscape, and regional scales.
Abstract: An ecological threshold is the point at which there is an abrupt change in an ecosystem quality, property or phenomenon, or where small changes in an environmental driver produce large responses in the ecosystem. Analysis of thresholds is complicated by nonlinear dynamics and by multiple factor controls that operate at diverse spatial and temporal scales. These complexities have challenged the use and utility of threshold concepts in environmental management despite great concern about preventing dramatic state changes in valued ecosystems, the need for determining critical pollutant loads and the ubiquity of other threshold-based environmental problems. In this paper we define the scope of the thresholds concept in ecological science and discuss methods for identifying and investigating thresholds using a variety of examples from terrestrial and aquatic environments, at ecosystem, landscape and regional scales. We end with a discussion of key research needs in this area.

Journal ArticleDOI
TL;DR: This framework could allow us to understand, for the first time, the genetic basis of ecosystem processes, and the effect of such phenomena as climate change and introduced transgenic organisms on entire communities.
Abstract: Can heritable traits in a single species affect an entire ecosystem? Recent studies show that such traits in a common tree have predictable effects on community structure and ecosystem processes. Because these 'community and ecosystem phenotypes' have a genetic basis and are heritable, we can begin to apply the principles of population and quantitative genetics to place the study of complex communities and ecosystems within an evolutionary framework. This framework could allow us to understand, for the first time, the genetic basis of ecosystem processes, and the effect of such phenomena as climate change and introduced transgenic organisms on entire communities.

Journal ArticleDOI
TL;DR: Pasture remains the dominant land use after forest clearing in Mato Grosso, but the growing importance of larger and faster conversion of forest to cropland defines a new paradigm of forest loss in Amazonia and refutes the claim that agricultural intensification does not lead to new deforestation.
Abstract: Intensive mechanized agriculture in the Brazilian Amazon grew by >3.6 million hectares (ha) during 2001–2004. Whether this cropland expansion resulted from intensified use of land previously cleared for cattle ranching or new deforestation has not been quantified and has major implications for future deforestation dynamics, carbon fluxes, forest fragmentation, and other ecosystem services. We combine deforestation maps, field surveys, and satellite-based information on vegetation phenology to characterize the fate of large (>25-ha) clearings as cropland, cattle pasture, or regrowing forest in the years after initial clearing in Mato Grosso, the Brazilian state with the highest deforestation rate and soybean production since 2001. Statewide, direct conversion of forest to cropland totaled >540,000 ha during 2001–2004, peaking at 23% of 2003 annual deforestation. Cropland deforestation averaged twice the size of clearings for pasture (mean sizes, 333 and 143 ha, respectively), and conversion occurred rapidly; >90% of clearings for cropland were planted in the first year after deforestation. Area deforested for cropland and mean annual soybean price in the year of forest clearing were directly correlated (R2 = 0.72), suggesting that deforestation rates could return to higher levels seen in 2003–2004 with a rebound of crop prices in international markets. Pasture remains the dominant land use after forest clearing in Mato Grosso, but the growing importance of larger and faster conversion of forest to cropland defines a new paradigm of forest loss in Amazonia and refutes the claim that agricultural intensification does not lead to new deforestation.
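To illustrate the kind of correlation reported above (R2 = 0.72 between deforested area and soybean price), the sketch below fits a simple linear relationship and computes R² on hypothetical numbers; the values shown are not the study's data.

```python
# Illustrative sketch with hypothetical numbers (not the study's data): R^2 of a
# linear fit between mean annual soybean price and area deforested for cropland.
import numpy as np

price = np.array([4.2, 5.1, 6.8, 6.0])        # hypothetical soybean price by year
area = np.array([90e3, 120e3, 180e3, 150e3])  # hypothetical cropland deforestation (ha)

slope, intercept = np.polyfit(price, area, 1)
predicted = slope * price + intercept
ss_res = np.sum((area - predicted) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
```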

Journal ArticleDOI
05 Oct 2006-Nature
TL;DR: Here it is shown that electrons gain kinetic energy by reflecting from the ends of the contracting ‘magnetic islands’ that form as reconnection proceeds, analogous to the increase of energy of a ball reflecting between two converging walls.
Abstract: Electrons gain kinetic energy by reflecting from the ends of the contracting 'magnetic islands' that form as reconnection proceeds. The repetitive interaction of electrons with many islands allows large numbers to be efficiently accelerated to high energy. A long-standing problem in the study of space and astrophysical plasmas is to explain the production of energetic electrons as magnetic fields ‘reconnect’ and release energy. In the Earth's magnetosphere, electron energies reach hundreds of thousands of electron volts (refs 1–3), whereas the typical electron energies associated with large-scale reconnection-driven flows are just a few electron volts. Recent observations further suggest that these energetic particles are produced in the region where the magnetic field reconnects4. In solar flares, upwards of 50 per cent of the energy released can appear as energetic electrons5,6. Here we show that electrons gain kinetic energy by reflecting from the ends of the contracting ‘magnetic islands’ that form as reconnection proceeds. The mechanism is analogous to the increase of energy of a ball reflecting between two converging walls—the ball gains energy with each bounce. The repetitive interaction of electrons with many islands allows large numbers to be efficiently accelerated to high energy. The back pressure of the energetic electrons throttles reconnection so that the electron energy gain is a large fraction of the released magnetic energy. The resultant energy spectra of electrons take the form of power laws with spectral indices that match the magnetospheric observations.

Journal ArticleDOI
TL;DR: Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.
Abstract: Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3-) and nitrite (NO2-), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.

Journal ArticleDOI
02 Nov 2006-Nature
TL;DR: The observation of self-cooling of a micromirror by radiation pressure inside a high-finesse optical cavity is reported; changes in intensity in a detuned cavity provide the mechanism for entropy flow from the mirror's oscillatory motion to the low-entropy cavity field.
Abstract: Cooling of mechanical resonators is currently a popular topic in many fields of physics including ultra-high precision measurements1, detection of gravitational waves, and the study of the transition between classical and quantum behaviour of a mechanical system. Here we report the observation of self-cooling of a micromirror by radiation pressure inside a high-finesse optical cavity. In essence, changes in intensity in a detuned cavity, as caused by the thermal vibration of the mirror, provide the mechanism for entropy flow from the mirror's oscillatory motion to the low-entropy cavity field. The crucial coupling between radiation and mechanical motion was made possible by producing free-standing micromirrors of low mass (m ≈ 400 ng), high reflectance (more than 99.6%) and high mechanical quality (Q ≈ 10,000). We observe cooling of the mechanical oscillator by a factor of more than 30; that is, from room temperature to below 10 K. In addition to purely photothermal effects we identify radiation pressure as a relevant mechanism responsible for the cooling. In contrast with earlier experiments, our technique does not need any active feedback. We expect that improvements of our method will permit cooling ratios beyond 1,000 and will thus possibly enable cooling all the way down to the quantum mechanical ground state of the micromirror.

Journal ArticleDOI
TL;DR: A side-by-side comparison of the three cell types is presented, showing that Mφ-II more closely resemble Ca-Mφ than they do AA-Mφ, even though both Mφ-II and AA-Mφ have been classified as M2 Mφ, distinct from Ca-Mφ.
Abstract: We generated three populations of macrophages (Mphi) in vitro and characterized each. Classically activated Mphi (Ca-Mphi) were primed with IFN-gamma and stimulated with LPS. Type II-activated Mphi (Mphi-II) were similarly primed but stimulated with LPS plus immune complexes. Alternatively activated Mphi (AA-Mphi) were primed overnight with IL-4. Here, we present a side-by-side comparison of the three cell types. We focus primarily on differences between Mphi-II and AA-Mphi, as both have been classified as M2 Mphi, distinct from Ca-Mphi. We show that Mphi-II more closely resemble Ca-Mphi than they are to AA-Mphi. Mphi-II and Ca-Mphi, but not AA-Mphi, produce high levels of NO and have low arginase activity. AA-Mphi express FIZZ1, whereas neither Mphi-II nor Ca-Mphi do. Mphi-II and Ca-Mphi express relatively high levels of CD86, whereas AA-Mphi are virtually devoid of this costimulatory molecule. Ca-Mphi and Mphi-II are efficient APC, whereas AA-Mphi fail to stimulate efficient T cell proliferation. The differences between Ca-Mphi and Mphi-II are more subtle. Ca-Mphi produce IL-12 and give rise to Th1 cells, whereas Mphi-II produce high levels of IL-10 and thus, give rise to Th2 cells secreting IL-4 and IL-10. Mphi-II express two markers that may be used to identify them in tissue. These are sphingosine kinase-1 and LIGHT (TNF superfamily 14). Thus, Ca-Mphi, Mphi-II, and AA-Mphi represent three populations of cells with different biological functions.

Journal ArticleDOI
TL;DR: In this paper, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity and that satisfied customers are economic assets with high returns/low risk.
Abstract: Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm's satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inco...

Journal ArticleDOI
TL;DR: In this article, the authors explain why GLOBE used a set of cultural values and practices to measure national cultures and show that there is no theoretical or empirical basis for Hofstede's criticism that these measures of values are too abstract or for his contention that national and organizational cultures are phenomena of different order.
Abstract: This paper explains why GLOBE used a set of cultural values and practices to measure national cultures. We show why there is no theoretical or empirical basis for Hofstede's criticism that GLOBE measures of values are too abstract or for his contention that national and organizational cultures are phenomena of different order. We also show why Hofstede has a limited understanding of the relationship between national wealth and culture. Furthermore, we explain why Hofstede's reanalysis of the GLOBE data is inappropriate and produces incomprehensible results. We also show the validity of managerial samples in studying leadership. Finally, we explain why Hofstede's claim that GLOBE instruments reflect researchers' psycho-logic reveals ignorance of psychometric methodologies designed to ensure scale reliability and construct validity.

Journal ArticleDOI
01 Oct 2006
TL;DR: The authors analyzed how online reviews can be used to evaluate product differentiation strategy based on the theories of hyperdifferentiation and resonance marketing and found that the variance of ratings and the strength of the top quartile of reviews play a significant role in determining which new products grow in the marketplace (resonance).
Abstract: We analyze how online reviews can be used to evaluate product differentiation strategy based on the theories of hyperdifferentiation and resonance marketing. Hyperdifferentiation says that firms can now produce almost anything that will appeal to consumers and can manage the complexity of diverse product portfolios. Resonance marketing says that informed consumers will only purchase products that they actually truly want. When consumers become more informed, firms that provide highly differentiated products should experience higher growth rate. We construct measures of product positioning based on online ratings and find supportive evidence using craft beer industry data. In particular, we find that the variance of ratings and the strength of the top quartile of reviews play a significant role in determining which new products grow in the marketplace (resonance). It is more important that some consumers love you than it is that most consumers like you.
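As a concrete illustration of the two review-based measures, the sketch below computes rating variance and one plausible operationalization of top-quartile strength (the mean rating of the top quartile of reviews) from a hypothetical set of ratings; the paper's exact construction may differ.

```python
# Illustrative sketch on hypothetical 1-5 star ratings: dispersion of opinion
# (variance) and the strength of the top quartile of reviews (mean of the top 25%).
import numpy as np

ratings = np.array([5, 5, 4, 5, 2, 1, 5, 3, 4, 5])  # hypothetical ratings for one product

variance = ratings.var(ddof=1)                       # how much reviewers disagree
top_quartile = np.sort(ratings)[-max(1, len(ratings) // 4):]
top_quartile_strength = top_quartile.mean()          # how strongly the top reviewers rate it

print(f"rating variance: {variance:.2f}")
print(f"top-quartile mean rating: {top_quartile_strength:.2f}")
```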

Journal ArticleDOI
TL;DR: It is argued that expertise coordination practices (reliance on protocols, community of practice structuring, plug-and-play teaming, and knowledge sharing) are essential to manage distributed expertise and ensure the timely application of necessary expertise.
Abstract: Organizational coordination has traditionally been viewed from an organizational-design perspective where rules, modalities, and structures are used to meet the information-processing demands of the environment. Fast-response organizations face unique coordination challenges as they operate under conditions of high uncertainty and fast decision making, where mistakes can be catastrophic. Based on an in-depth investigation of the coordination practices of a medical trauma center where fast-response and error-free activities are essential requirements, we develop a coordination-practice perspective that emphasizes expertise coordination and dialogic coordination. We argue that expertise coordination practices (reliance on protocols, community of practice structuring, plug-and-play teaming, and knowledge sharing) are essential to manage distributed expertise and ensure the timely application of necessary expertise. We suggest that dialogic coordination practices (epistemic contestation, joint sensemaking, cross-boundary intervention, and protocol breaking) are time-critical responses to novel events and ensure error-free operation. However, dialogic coordination practices are highly contested because of epistemic differences, reputation stakes, and possible blame apportionment.

Journal ArticleDOI
TL;DR: This article expands the dominant paradigm in cross-cultural research by developing a theory of cultural tightness-looseness (the strength of social norms and the degree of sanctioning within societies) and by advancing a multilevel research agenda for future research.
Abstract: Cross-cultural research is dominated by the use of values despite their mixed empirical support and their limited theoretical scope. This article expands the dominant paradigm in cross-cultural research by developing a theory of cultural tightness-looseness (the strength of social norms and the degree of sanctioning within societies) and by advancing a multilevel research agenda for future research. Through an exploration of the top-down, bottom-up, and moderating impact that cultural tightness-looseness has on individuals and organizations, as well as on variance at multiple levels of analysis, the theory provides a new and complementary perspective to the values approach.

Book
21 Jun 2006
TL;DR: Modern Differential Geometry of Curves and Surfaces with Mathematica explains how to define and compute standard geometric functions, for example the curvature of curves, and presents a dialect of Mathematica for constructing new curves and surfaces from old.
Abstract: From the Publisher: The Second Edition combines a traditional approach with the symbolic manipulation abilities of Mathematica to explain and develop the classical theory of curves and surfaces. You will learn to reproduce and study interesting curves and surfaces - many more than are included in typical texts - using computer methods. By plotting geometric objects and studying the printed result, teachers and students can understand concepts geometrically and see the effect of changes in parameters. Modern Differential Geometry of Curves and Surfaces with Mathematica explains how to define and compute standard geometric functions, for example the curvature of curves, and presents a dialect of Mathematica for constructing new curves and surfaces from old. The book also explores how to apply techniques from analysis. Although the book makes extensive use of Mathematica, readers without access to that program can perform the calculations in the text by hand. While single- and multi-variable calculus, some linear algebra, and a few concepts of point set topology are needed to understand the theory, no computer or Mathematica skills are required to understand the concepts presented in the text. In fact, it serves as an excellent introduction to Mathematica, and includes fully documented programs written for use with Mathematica. Ideal for both classroom use and self-study, Modern Differential Geometry of Curves and Surfaces with Mathematica has been tested extensively in the classroom and used in professional short courses throughout the world.

Journal ArticleDOI
TL;DR: In this article, the authors use an indirect inference procedure to estimate the structural parameters of a rich specification of capital adjustment costs; the parameters are optimally chosen to reproduce a set of moments that capture the nonlinear relationship between investment and profitability found in plant-level data.
Abstract: This paper studies the nature of capital adjustment at the plant level. We use an indirect inference procedure to estimate the structural parameters of a rich specification of capital adjustment costs. In effect, the parameters are optimally chosen to reproduce a set of moments that capture the nonlinear relationship between investment and profitability found in plant-level data. Our findings indicate that a model which
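The estimation strategy described is indirect inference: structural parameters are chosen so that moments of data simulated from the model match the corresponding moments in the observed data. The sketch below illustrates this generic idea with a deliberately toy model; the model, the moments, and the identity weighting matrix are assumptions for illustration, not the authors' specification.

```python
# Illustrative sketch of the generic indirect-inference / simulated-moments idea:
# choose parameters so that moments of model-simulated data match data moments.
# The toy "model", moments, and weighting matrix are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=0.8, size=5_000)     # stand-in for observed data
data_moments = np.array([data.mean(), data.var()])    # auxiliary moments to match

shocks = rng.normal(size=5_000)                       # fixed simulation draws (common random numbers)

def simulate(params):
    """Toy structural 'model': observations generated from candidate parameters."""
    mu, sigma = params
    return mu + sigma * shocks

def objective(params):
    sim = simulate(params)
    sim_moments = np.array([sim.mean(), sim.var()])
    diff = sim_moments - data_moments
    return float(diff @ diff)                         # identity weighting matrix

result = minimize(objective, x0=np.array([0.5, 1.0]), method="Nelder-Mead")
print("estimated (mu, sigma):", result.x)
```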