
Showing papers by "University of British Columbia" published in 2003


Journal ArticleDOI
TL;DR: In this article, the authors find that the emerging standard model of cosmology, a flat Λ-dominated universe seeded by nearly scale-invariant adiabatic Gaussian fluctuations, fits the WMAP data.
Abstract: WMAP precision data enable accurate testing of cosmological models. We find that the emerging standard model of cosmology, a flat Λ-dominated universe seeded by nearly scale-invariant adiabatic Gaussian fluctuations, fits the WMAP data. For the WMAP data only, the best-fit parameters are h = 0.72 ± 0.05, Ω_b h^2 = 0.024 ± 0.001, Ω_m h^2 = 0.14 ± 0.02, τ = 0.166 +0.076 -0.071, n_s = 0.99 ± 0.04, and σ_8 = 0.9 ± 0.1. With parameters fixed only by WMAP data, we can fit finer scale cosmic microwave background (CMB) measurements and measurements of large-scale structure (galaxy surveys and the Lyα forest). This simple model is also consistent with a host of other astronomical measurements: its inferred age of the universe is consistent with stellar ages, the baryon/photon ratio is consistent with measurements of the (D/H) ratio, and the inferred Hubble constant is consistent with local observations of the expansion rate. We then fit the model parameters to a combination of WMAP data with other finer scale CMB experiments (ACBAR and CBI), 2dFGRS measurements, and Lyα forest data to find the model's best-fit cosmological parameters: h = 0.71 +0.04 -0.03, Ω_b h^2 = 0.0224 ± 0.0009, Ω_m h^2 = 0.135 +0.008 -0.009, τ = 0.17 ± 0.06, n_s(0.05 Mpc^-1) = 0.93 ± 0.03, and σ_8 = 0.84 ± 0.04. WMAP's best determination of τ = 0.17 ± 0.04 arises directly from the temperature-polarization (TE) data and not from this model fit, but they are consistent. These parameters imply that the age of the universe is 13.7 ± 0.2 Gyr. With the Lyα forest data, the model favors but does not require a slowly varying spectral index. The significance of this running index is sensitive to the uncertainties in the Lyα forest. By combining WMAP data with other astronomical data, we constrain the geometry of the universe, Ω_tot = 1.02 ± 0.02, and the equation of state of the dark energy, w < -0.78 (95% confidence limit assuming w ≥ -1). The combination of WMAP and 2dFGRS data constrains the energy density in stable neutrinos: Ω_ν h^2 < 0.0072 (95% confidence limit). For three degenerate neutrino species, this limit implies that their mass is less than 0.23 eV (95% confidence limit). The WMAP detection of early reionization rules out warm dark matter. Subject headings: cosmic microwave background — cosmological parameters — cosmology: observations — early universe. On-line material: color figure
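For context on the 0.23 eV figure above, the per-species bound follows from the standard relation between the relic neutrino energy density and the summed neutrino masses. The short derivation below is a sketch that assumes the commonly used conversion factor of roughly 94 eV, which is not stated in the abstract itself:

```latex
% Relic neutrino density versus summed neutrino mass. The ~94 eV conversion factor is the
% standard approximation and is an assumption here, not a value quoted in the abstract.
\Omega_\nu h^2 \simeq \frac{\sum_i m_{\nu_i}}{94\ \mathrm{eV}}
\;\Longrightarrow\;
\sum_i m_{\nu_i} \lesssim 0.0072 \times 94\ \mathrm{eV} \approx 0.68\ \mathrm{eV},
\qquad
m_\nu \lesssim \tfrac{1}{3}\times 0.68\ \mathrm{eV} \approx 0.23\ \mathrm{eV}
\quad \text{(three degenerate species).}
```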

10,650 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present full sky microwave maps in five frequency bands (23 to 94 GHz) from the WMAP first year sky survey, which are consistent with the 7° full-width at half-maximum (FWHM) Cosmic Background Explorer (COBE) maps.
Abstract: We present full sky microwave maps in five frequency bands (23 to 94 GHz) from the WMAP first year sky survey. Calibration errors are less than 0.5% and the low systematic error level is well specified. The cosmic microwave background (CMB) is separated from the foregrounds using multifrequency data. The sky maps are consistent with the 7° full-width at half-maximum (FWHM) Cosmic Background Explorer (COBE) maps. We report more precise, but consistent, dipole and quadrupole values. The CMB anisotropy obeys Gaussian statistics with -58 < f_NL < 134 (95% CL). The 2 ≤ l ≤ 900 anisotropy power spectrum is cosmic variance limited for l < 354 with a signal-to-noise ratio greater than 1 per mode to l = 658. The temperature-polarization cross-power spectrum reveals both acoustic features and a large angle correlation from reionization. The optical depth of reionization is τ = 0.17 ± 0.04, which implies a reionization epoch of t_r = 180 +220 -80 Myr (95% CL) after the Big Bang at a redshift of z_r = 20 +10 -9 (95% CL) for a range of ionization scenarios. This early reionization is incompatible with the presence of a significant warm dark matter density. A best-fit cosmological model to the CMB and other measures of large scale structure works remarkably well with only a few parameters. The age of the best-fit universe is t_0 = 13.7 ± 0.2 Gyr. Decoupling was t_dec = 379 +8 -7 kyr after the Big Bang at a redshift of z_dec = 1089 ± 1. The thickness of the decoupling surface was Δz_dec = 195 ± 2. The matter density of the universe is Ω_m h^2 = 0.135 +0.008 -0.009, the baryon density is Ω_b h^2 = 0.0224 ± 0.0009, and the total mass-energy of the universe is Ω_tot = 1.02 ± 0.02. There is progressively less fluctuation power on smaller scales, from WMAP to fine scale CMB measurements to galaxies and finally to the Ly-alpha forest. This is accounted for with a running spectral index, significant at the approximately 2σ level. The spectral index of scalar fluctuations is fit as n_s = 0.93 ± 0.03 at wavenumber k_0 = 0.05 Mpc^-1 (l_eff ≈ 700), with a slope of dn_s/d ln k = -0.031 +0.016 -0.018 in the best-fit model.
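For readers tracking the running-index language in this abstract, fits of this kind conventionally parametrize the scalar power spectrum with a scale-dependent tilt about a pivot wavenumber k_0. The expression below is the standard parametrization, written out as a sketch rather than copied from the paper:

```latex
% Scalar primordial power spectrum with a running spectral index about the pivot k_0.
% The abstract quotes n_s and dn_s/d ln k at k_0 = 0.05 Mpc^-1.
P_s(k) = A_s(k_0)\left(\frac{k}{k_0}\right)^{\,n_s(k_0) \;+\; \frac{1}{2}\frac{dn_s}{d\ln k}\,\ln\!\left(\frac{k}{k_0}\right)}
```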

4,821 citations


Journal ArticleDOI
Abstract: We present full sky microwave maps in five bands (23 to 94 GHz) from the WMAP first year sky survey. Calibration errors are less than 0.5% and the low systematic error level is well specified. The cosmic microwave background (CMB) is separated from the foregrounds using multifrequency data. The sky maps are consistent with the 7 degree FWHM COBE maps. We report more precise, but consistent, dipole and quadrupole values. The CMB anisotropy obeys Gaussian statistics with -58 < f_NL < 134 (95% CL). The 2 <= l <= 900 anisotropy power spectrum is cosmic variance limited for l < 354 with a signal-to-noise ratio > 1 per mode to l=658. The temperature-polarization cross-power spectrum reveals both acoustic features and a large angle correlation from reionization. The optical depth of reionization is 0.17 +/- 0.04, which implies a reionization epoch of 180+220-80 Myr (95% CL) after the Big Bang at a redshift of 20+10-9 (95% CL) for a range of ionization scenarios. This early reionization is incompatible with the presence of a significant warm dark matter density. The age of the best-fit universe is 13.7 +/- 0.2 Gyr old. Decoupling was 379+8-7 kyr after the Big Bang at a redshift of 1089 +/- 1. The thickness of the decoupling surface was dz=195 +/- 2. The matter density is Omega_m h^2 = 0.135 +0.008 -0.009, the baryon density is Omega_b h^2 = 0.0224 +/- 0.0009, and the total mass-energy of the universe is Omega_tot = 1.02 +/- 0.02. The spectral index of scalar fluctuations is fit as n_s = 0.93 +/- 0.03 at wavenumber k_0 = 0.05 Mpc^-1, with a running index slope of dn_s/d ln k = -0.031 +0.016 -0.018 in the best-fit model. This flat universe model is composed of 4.4% baryons, 22% dark matter and 73% dark energy. The dark energy equation of state is limited to w<-0.78 (95% CL). Inflation theory is supported with n_s~1, Omega_tot~1, Gaussian random phases of the CMB anisotropy, and superhorizon fluctuations. An admixture of isocurvature modes does not improve the fit. The tensor-to-scalar ratio is r(k_0=0.002 Mpc^-1)<0.90 (95% CL).

3,868 citations


Journal ArticleDOI
TL;DR: Imatinib was superior to interferon alfa plus low-dose cytarabine as first-line therapy in newly diagnosed chronic-phase CML and was better tolerated than combination therapy.
Abstract: Background Imatinib, a selective inhibitor of the BCR-ABL tyrosine kinase, produces high response rates in patients with chronic-phase chronic myeloid leukemia (CML) who have had no response to interferon alfa. We compared the efficacy of imatinib with that of interferon alfa combined with low-dose cytarabine in newly diagnosed chronic-phase CML. Methods We randomly assigned 1106 patients to receive imatinib (553 patients) or interferon alfa plus low-dose cytarabine (553 patients). Crossover to the alternative group was allowed if stringent criteria defining treatment failure or intolerance were met. Patients were evaluated for hematologic and cytogenetic responses, toxic effects, and rates of progression. Results After a median follow-up of 19 months, the estimated rate of a major cytogenetic response (0 to 35 percent of cells in metaphase positive for the Philadelphia chromosome) at 18 months was 87.1 percent (95 percent confidence interval, 84.1 to 90.0) in the imatinib group and 34.7 percent (95 perce...

3,399 citations


Journal ArticleDOI
TL;DR: A unified approach to the coder control of video coding standards such as MPEG-2, H.263, MPEG-4, and the draft video coding standard H.264/AVC (advanced video coding) is presented.
Abstract: A unified approach to the coder control of video coding standards such as MPEG-2, H.263, MPEG-4, and the draft video coding standard H.264/AVC (advanced video coding) is presented. The performance of the various standards is compared by means of PSNR and subjective testing results. The results indicate that H.264/AVC compliant encoders typically achieve essentially the same reproduction quality as encoders that are compliant with the previous standards while typically requiring 60% or less of the bit rate.
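The PSNR figures behind this comparison are the usual peak signal-to-noise ratio computed from the mean squared error between decoded and reference frames. The snippet below is a minimal illustrative implementation of that metric, not code from the study; the frame dimensions and noise level are arbitrary:

```python
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit frames (peak = 255)."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a QCIF-sized luma plane and a noisy stand-in for a decoded frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(144, 176), dtype=np.uint8)
decoded = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(frame, decoded):.2f} dB")
```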

3,312 citations


Journal ArticleDOI
TL;DR: This article found that self-esteem does not predict the quality or duration of relationships, nor does it predict the likelihood of cheating and bullying in children; the highest and lowest rates of cheating and bullying were found in different subcategories of high self-esteem.
Abstract: Self-esteem has become a household word. Teachers, parents, therapists, and others have focused efforts on boosting self-esteem, on the assumption that high self-esteem will cause many positive outcomes and benefits-an assumption that is critically evaluated in this review. Appraisal of the effects of self-esteem is complicated by several factors. Because many people with high self-esteem exaggerate their successes and good traits, we emphasize objective measures of outcomes. High self-esteem is also a heterogeneous category, encompassing people who frankly accept their good qualities along with narcissistic, defensive, and conceited individuals. The modest correlations between self-esteem and school performance do not indicate that high self-esteem leads to good performance. Instead, high self-esteem is partly the result of good school performance. Efforts to boost the self-esteem of pupils have not been shown to improve academic performance and may sometimes be counterproductive. Job performance in adults is sometimes related to self-esteem, although the correlations vary widely, and the direction of causality has not been established. Occupational success may boost self-esteem rather than the reverse. Alternatively, self-esteem may be helpful only in some job contexts. Laboratory studies have generally failed to find that self-esteem causes good task performance, with the important exception that high self-esteem facilitates persistence after failure. People high in self-esteem claim to be more likable and attractive, to have better relationships, and to make better impressions on others than people with low self-esteem, but objective measures disconfirm most of these beliefs. Narcissists are charming at first but tend to alienate others eventually. Self-esteem has not been shown to predict the quality or duration of relationships. High self-esteem makes people more willing to speak up in groups and to criticize the group's approach. Leadership does not stem directly from self-esteem, but self-esteem may have indirect effects. Relative to people with low self-esteem, those with high self-esteem show stronger in-group favoritism, which may increase prejudice and discrimination. Neither high nor low self-esteem is a direct cause of violence. Narcissism leads to increased aggression in retaliation for wounded pride. Low self-esteem may contribute to externalizing behavior and delinquency, although some studies have found that there are no effects or that the effect of self-esteem vanishes when other variables are controlled. The highest and lowest rates of cheating and bullying are found in different subcategories of high self-esteem. Self-esteem has a strong relation to happiness. Although the research has not clearly established causation, we are persuaded that high self-esteem does lead to greater happiness. Low self-esteem is more likely than high to lead to depression under some circumstances. Some studies support the buffer hypothesis, which is that high self-esteem mitigates the effects of stress, but other studies come to the opposite conclusion, indicating that the negative effects of low self-esteem are mainly felt in good times. Still others find that high self-esteem leads to happier outcomes regardless of stress or other circumstances. High self-esteem does not prevent children from smoking, drinking, taking drugs, or engaging in early sex. 
If anything, high self-esteem fosters experimentation, which may increase early sexual activity or drinking, but in general effects of self-esteem are negligible. One important exception is that high self-esteem reduces the chances of bulimia in females. Overall, the benefits of high self-esteem fall into two categories: enhanced initiative and pleasant feelings. We have not found evidence that boosting self-esteem (by therapeutic interventions or school programs) causes benefits. Our findings do not support continued widespread efforts to boost self-esteem in the hope that it will by itself foster improved outcomes. In view of the heterogeneity of high self-esteem, indiscriminate praise might just as easily promote narcissism, with its less desirable consequences. Instead, we recommend using praise to boost self-esteem as a reward for socially desirable behavior and self-improvement.

3,262 citations


Journal ArticleDOI
TL;DR: The purpose of this introductory paper is to introduce the Monte Carlo method with emphasis on probabilistic machine learning and to review the main building blocks of modern Markov chain Monte Carlo simulation.
Abstract: The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.
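As a concrete illustration of the building blocks such a tutorial covers, a random-walk Metropolis-Hastings sampler for a one-dimensional unnormalized target can be written in a few lines. The sketch below is a generic example, not code from the paper:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples=10_000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a Gaussian proposal."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x, logp = x0, log_target(x0)
    for i in range(n_samples):
        x_prop = x + step * rng.normal()               # symmetric proposal
        logp_prop = log_target(x_prop)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept with prob min(1, p'/p)
            x, logp = x_prop, logp_prop
        samples[i] = x                                  # record current state either way
    return samples

# Example target: an unnormalized mixture of two Gaussians centred at -2 and +2.
log_target = lambda x: np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)
draws = metropolis_hastings(log_target, x0=0.0, step=2.5)
print(draws.mean(), draws.std())
```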

2,579 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the use of thermal remote sensing in the study of urban climates, focusing primarily on the urban heat island effect and progress made towards answering the methodological questions posed by Roth et al.

2,013 citations


Journal ArticleDOI
TL;DR: In this paper, the authors test the pecking order theory of corporate leverage on a broad cross-section of publicly traded American firms for 1971 to 1998 and find that net equity issues track the financing deficit more closely than do net debt issues.
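The pecking order test alluded to here is typically run by regressing net debt issues (and, symmetrically, net equity issues) on the firm's financing deficit. The specification below is a sketch of that standard setup in the Shyam-Sunder and Myers tradition; the deficit decomposition is a common convention rather than the paper's exact definition:

```latex
% Pecking order regression: the strict pecking order predicts a slope beta_PO near one
% for net debt issues. The abstract's finding is that the analogous regression with net
% equity issues on the left-hand side tracks the deficit more closely.
\Delta D_{it} = \alpha + \beta_{PO}\,\mathrm{DEF}_{it} + \varepsilon_{it},
\qquad
\mathrm{DEF}_{it} = \mathrm{DIV}_{it} + \mathrm{INV}_{it} + \Delta W_{it} - \mathrm{CF}_{it}
```

Here ΔD is net debt issued, DIV dividends, INV net investment, ΔW the change in working capital, and CF internal cash flow after interest and taxes; treat these labels as illustrative.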

1,783 citations


Posted Content
TL;DR: As discussed in this paper, environmentalists and the trade policy community have engaged for the last ten years in a heated debate over the environmental consequences of liberalized trade, a debate that has been hampered by the lack of a common language and by little recourse to economic theory and empirical evidence.
Abstract: For the last ten years environmentalists and the trade policy community have engaged in a heated debate over the environmental consequences of liberalized trade. The debate was originally fueled by negotiations over the North American Free Trade Agreement and the Uruguay round of GATT negotiations, both of which occurred at a time when concerns over global warming, species extinction and industrial pollution were rising. Recently it has been intensified by the creation of the World Trade Organization (WTO) and proposals for future rounds of trade negotiations. The debate has often been unproductive. It has been hampered by the lack of a common language and also suffered from little recourse to economic theory and empirical evidence. The purpose of this essay is to set out what we currently know about the environmental consequences of economic growth and international trade. We critically review both theory and empirical work to answer three basic questions. What do we know about the relationship between international trade, economic growth and the environment? How can this evidence help us evaluate ongoing policy debates? Where do we go from here?

1,731 citations


Journal ArticleDOI
TL;DR: A new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text, is presented, and a number of models for the joint distribution of image regions and words are developed, including several which explicitly learn the correspondence between regions and Words.
Abstract: We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann's hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.
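To make the translation framing concrete, one simple baseline in this family builds a table of word probabilities per region cluster ("blob") from co-occurrence counts and annotates a new image by averaging the tables of its blobs. The sketch below is a deliberately simplified co-occurrence version, not the EM-trained translation model or the LDA-based models studied in the paper; the blob ids and vocabulary are toy stand-ins:

```python
from collections import defaultdict

def train_blob_word_table(images):
    """images: list of (blob_ids, words) pairs. Returns p(word | blob) from co-occurrence counts."""
    counts = defaultdict(lambda: defaultdict(float))
    for blob_ids, words in images:
        for b in blob_ids:
            for w in words:
                counts[b][w] += 1.0
    return {b: {w: c / sum(wc.values()) for w, c in wc.items()} for b, wc in counts.items()}

def annotate(blob_ids, table, top_k=3):
    """Score words by averaging p(word | blob) over the image's blobs; return the top_k words."""
    scores = defaultdict(float)
    for b in blob_ids:
        for w, p in table.get(b, {}).items():
            scores[w] += p / len(blob_ids)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy example: blob ids stand in for cluster indices of segmented regions.
train = [([1, 2], ["sky", "water"]), ([2, 3], ["water", "boat"]), ([1, 4], ["sky", "grass"])]
table = train_blob_word_table(train)
print(annotate([1, 2], table))  # e.g. ['sky', 'water', ...]
```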

Journal ArticleDOI
TL;DR: The findings indicate that organizations are embedded in institutional networks and call for greater attention to be directed at understanding institutional pressures when investigating the adoption of information technology innovations.
Abstract: This study used institutional theory as a lens to understand the factors that enable the adoption of interorganizational systems. It posits that mimetic, coercive, and normative pressures existing in an institutionalized environment could influence organizational predisposition toward an information technology-based interorganizational linkage. Survey-based research was carried out to test this theory. Following questionnaire development, validation, and pretest with a pilot study, data were collected from the CEO, the CFO, and the CIO to measure the institutional pressures they faced and their intentions to adopt financial electronic data interchange (FEDI). A firm-level structural model was developed based on the CEO's, the CFO's, and the CIO's data. LISREL and PLS were used for testing the measurement and structural models respectively. Results showed that all three institutional pressures-mimetic pressures, coercive pressures, and normative pressures-had a significant influence on organizational intention to adopt FEDI. Except for perceived extent of adoption among suppliers, all other subconstructs were significant in the model. These results provide strong support for institutional-based variables as predictors of adoption intention for interorganizational linkages. These findings indicate that organizations are embedded in institutional networks and call for greater attention to be directed at understanding institutional pressures when investigating information technology innovations adoption.

Journal ArticleDOI
TL;DR: This commentary begins by discussing why establishing an identity for the IS field is important, and describes what such an identity may look like by proposing a core set of properties, i.e., concepts and phenomena, that define theIS field.
Abstract: We are concerned that the IS research community is making the discipline's central identity ambiguous by, all too frequently, under-investigating phenomena intimately associated with IT-based systems and over-investigating phenomena distantly associated with IT-based systems. In this commentary, we begin by discussing why establishing an identity for the IS field is important. We then describe what such an identity may look like by proposing a core set of properties, i.e., concepts and phenomena, that define the IS field. Next, we discuss research by IS scholars that either fails to address this core set of properties (labeled as error of exclusion) or that addresses concepts/phenomena falling outside this core set (labeled as error of inclusion). We conclude by offering suggestions for redirecting IS scholarship toward the concepts and phenomena that we argue define the core of the IS discipline.

Journal ArticleDOI
TL;DR: Fetal nigral transplantation currently cannot be recommended as a therapy for PD based on these results, although stratification based on disease severity showed a treatment effect in milder patients.
Abstract: Thirty-four patients with advanced Parkinson's disease participated in a prospective 24-month double-blind, placebo-controlled trial of fetal nigral transplantation. Patients were randomized to receive bilateral transplantation with one or four donors per side or a placebo procedure. The primary end point was change between baseline and final visits in motor component of the Unified Parkinson's Disease Rating Scale in the practically defined off state. There was no significant overall treatment effect (p = 0.244). Patients in the placebo and one-donor groups deteriorated by 9.4 +/- 4.25 and 3.5 +/- 4.23 points, respectively, whereas those in the four-donor group improved by 0.72 +/- 4.05 points. Pairwise comparisons were not significant, although the four-donor versus placebo groups yielded a p value of 0.096. Stratification based on disease severity showed a treatment effect in milder patients (p = 0.006). Striatal fluorodopa uptake was significantly increased after transplantation in both groups and robust survival of dopamine neurons was observed at postmortem examination. Fifty-six percent of transplanted patients developed dyskinesia that persisted after overnight withdrawal of dopaminergic medication ("off"-medication dyskinesia). Fetal nigral transplantation currently cannot be recommended as a therapy for PD based on these results.

Journal ArticleDOI
TL;DR: The authors examine the interface between for-profit and publicly funded research in pharmaceuticals, finding that 'connectedness' is significantly correlated with firms' internal organization as well as with their performance in drug discovery.
Abstract: We examine the interface between for-profit and publicly funded research in pharmaceuticals. Firms access upstream basic research through investments in absorptive capacity in the form of in-house basic research and ‘pro-publication’ internal incentives. Some firms also maintain extensive connections to the wider scientific community, which we measure using data on coauthorship of scientific papers between pharmaceutical company scientists and publicly funded researchers. ‘Connectedness’ is significantly correlated with firms’ internal organization, as well as their performance in drug discovery. The estimated impact of ‘connectedness’ on private research productivity implies a substantial return to public investments in basic research.

Journal ArticleDOI
TL;DR: SARS appears to be of viral origin, with patterns suggesting droplet or contact transmission, and the role of human metapneumovirus, a novel coronavirus, or both requires further investigation.
Abstract: Background Severe acute respiratory syndrome (SARS) is a condition of unknown cause that has recently been recognized in patients in Asia, North America, and Europe. This report summarizes the initial epidemiologic findings, clinical description, and diagnostic findings that followed the identification of SARS in Canada. Methods SARS was first identified in Canada in early March 2003. We collected epidemiologic, clinical, and diagnostic data from each of the first 10 cases prospectively as they were identified. Specimens from all cases were sent to local, provincial, national, and international laboratories for studies to identify an etiologic agent. Results The patients ranged from 24 to 78 years old; 60 percent were men. Transmission occurred only after close contact. The most common presenting symptoms were fever (in 100 percent of cases) and malaise (in 70 percent), followed by nonproductive cough (in 100 percent) and dyspnea (in 80 percent) associated with infiltrates on chest radiography (in 100 percent). Lymphopenia (in 89 percent of those for whom data were available), elevated lactate dehydrogenase levels (in 80 percent), elevated aspartate aminotransferase levels (in 78 percent), and elevated creatine kinase levels (in 56 percent) were common. Empirical therapy most commonly included antibiotics, oseltamivir, and intravenous ribavirin. Mechanical ventilation was required in five patients. Three patients died, and five have had clinical improvement. The results of laboratory investigations were negative or not clinically significant except for the amplification of human metapneumovirus from respiratory specimens from five of nine patients and the isolation and amplification of a novel coronavirus from five of nine patients. In four cases both pathogens were isolated. Conclusions SARS is a condition associated with substantial morbidity and mortality. It appears to be of viral origin, with patterns suggesting droplet or contact transmission. The role of human metapneumovirus, a novel coronavirus, or both requires further investigation.

Journal ArticleDOI
TL;DR: In this article, the authors confront the predictions of inflationary scenarios with the Wilkinson Microwave Anisotropy Probe (WMAP) data, in combination with complementary small-scale cosmic microwave background (CMB) measurements and large-scale structure data.
Abstract: We confront predictions of inflationary scenarios with the Wilkinson Microwave Anisotropy Probe (WMAP) data, in combination with complementary small-scale cosmic microwave background (CMB) measurements and large-scale structure data. The WMAP detection of a large-angle anticorrelation in the temperature-polarization cross-power spectrum is the signature of adiabatic superhorizon fluctuations at the time of decoupling. The WMAP data are described by pure adiabatic fluctuations: we place an upper limit on a correlated cold dark matter (CDM) isocurvature component. Using WMAP constraints on the shape of the scalar power spectrum and the amplitude of gravity waves, we explore the parameter space of inflationary models that is consistent with the data. We place limits on inflationary models; for example, a minimally coupled λφ^4 model is disfavored at more than 3σ using WMAP data in combination with smaller scale CMB and large-scale structure survey data. The limits on the primordial parameters using WMAP data alone are n_s(k_0 = 0.002 Mpc^-1) = 1.20, dn_s/d ln k = -0.077, A(k_0 = 0.002 Mpc^-1) = 0.71 (68% CL), and r(k_0 = 0.002 Mpc^-1) < 1.28 (95% CL).

Journal ArticleDOI
TL;DR: In this article, international trends and differences in subjective well-being over the final five decades of the twentieth century are discussed. The main innovation of the paper lies in its use of large international samples of individual respondents, permitting the simultaneous identification of individual-level and societal-level determinants of well-being.

Journal ArticleDOI
TL;DR: Gene expression profiling strongly supported a relationship between PMBL and Hodgkin lymphoma: over one third of the genes that were more highly expressed in PMBL than in other DLBCLs were also characteristically expressed in Hodgkin lymphoma cells.
Abstract: Using current diagnostic criteria, primary mediastinal B cell lymphoma (PMBL) cannot be distinguished from other types of diffuse large B cell lymphoma (DLBCL) reliably. We used gene expression profiling to develop a more precise molecular diagnosis of PMBL. PMBL patients were considerably younger than other DLBCL patients, and their lymphomas frequently involved other thoracic structures but not extrathoracic sites typical of other DLBCLs. PMBL patients had a relatively favorable clinical outcome, with a 5-yr survival rate of 64% compared with 46% for other DLBCL patients. Gene expression profiling strongly supported a relationship between PMBL and Hodgkin lymphoma: over one third of the genes that were more highly expressed in PMBL than in other DLBCLs were also characteristically expressed in Hodgkin lymphoma cells. PDL2, which encodes a regulator of T cell activation, was the gene that best discriminated PMBL from other DLBCLs and was also highly expressed in Hodgkin lymphoma cells. The genomic loci for PDL2 and several neighboring genes were amplified in over half of the PMBLs and in Hodgkin lymphoma cell lines. The molecular diagnosis of PMBL should significantly aid in the development of therapies tailored to this clinically and pathogenetically distinctive subgroup of DLBCL.

Journal ArticleDOI
TL;DR: Several different erosion, sediment transport, and sediment-associated nutrient transport models are reviewed with regard to their complexity, inputs, process representation, scale of intended use, and outputs; the review is limited to models that explicitly consider either the sediment generation or the sediment transport process.
Abstract: Information on sediment and nutrient export from catchments and about related erosive processes is required by catchment managers and decision-makers. Many models exist for the consideration of these processes. However, these models differ greatly in terms of their complexity, their inputs and requirements, the processes they represent and the manner in which these processes are represented, the scale of their intended use and the types of output information they provide. This paper reviews several different erosion and sediment and sediment-associated nutrient transport models with regard to these factors. The review of models is limited to those models with explicit considerations of either the sediment generation or transport process.

Journal ArticleDOI
TL;DR: In this article, the authors used a maximum entropy method to construct a model of the Galactic emission components and showed that the model is accurate to less than 1% and individual model components are accurate to a few percent.
Abstract: The WMAP mission has mapped the full sky to determine the geometry, content, and evolution of the universe. Full sky maps are made in five microwave frequency bands to separate the temperature anisotropy of the cosmic microwave background (CMB) from foreground emission, including diffuse Galactic emission and Galactic and extragalactic point sources. We define masks that excise regions of high foreground emission, so CMB analyses can be carried out with minimal foreground contamination. We also present maps and spectra of the individual emission components, leading to an improved understanding of Galactic astrophysical processes. The effectiveness of template fits to remove foreground emission from the WMAP data is also examined. These efforts result in a CMB map with minimal contamination and a demonstration that the WMAP CMB power spectrum is insensitive to residual foreground emission. We use a Maximum Entropy Method to construct a model of the Galactic emission components. The observed total Galactic emission matches the model to less than 1% and the individual model components are accurate to a few percent. We find that the Milky Way resembles other normal spiral galaxies between 408 MHz and 23 GHz, with a synchrotron spectral index that is flattest (β_s ≈ -2.5) near star-forming regions, especially in the plane, and steepest (β_s ≈ -3) in the halo. This is consistent with a picture of relativistic cosmic ray electron generation in star-forming regions and diffusion and convection within the plane. The significant synchrotron index steepening out of the plane suggests a diffusion process in which the halo electrons are trapped in the Galactic potential long enough to suffer synchrotron and inverse Compton energy losses and hence a spectral steepening. The synchrotron index is steeper in the WMAP bands than in lower frequency radio surveys, with a spectral break near 20 GHz to β_s < -3. The modeled thermal dust spectral index is also steep in the WMAP bands, with β_d ≈ 2.2. Our model is driven to these conclusions by the low level of total foreground contamination at approximately 60 GHz. Microwave and Hα measurements of the ionized gas agree well with one another at about the expected levels. Spinning dust emission is limited to less than 5% of the Ka-band foreground emission. A catalog of 208 point sources is presented. The reliability of the catalog is 98%, i.e., we expect five of the 208 sources to be statistically spurious. The mean spectral index of the point sources is α ≈ 0 (β ≈ -2). Derived source counts suggest a contribution to the anisotropy power from unresolved sources of (15.0 ± 1.4) x 10^-3 μK^2 sr at Q-band and negligible levels at V-band and W-band. The Sunyaev-Zeldovich effect is shown to be a negligible "contamination" to the maps.
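The synchrotron and dust spectral indices quoted in this abstract follow the usual antenna-temperature power-law convention; the expression below is a reminder of that convention (standard usage, not a formula taken from the paper):

```latex
% Antenna-temperature power law for diffuse foregrounds. Per the abstract, beta_s runs from
% about -2.5 near star-forming regions to about -3 in the halo, and beta_d ~ 2.2 for thermal dust.
T_A(\nu) \propto \nu^{\beta},
\qquad
\beta = \frac{\ln\!\left(T_A(\nu_2)/T_A(\nu_1)\right)}{\ln(\nu_2/\nu_1)}
```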

Journal ArticleDOI
TL;DR: A quantitative model of the aberrant cell cycle regulation in MCL is proposed that provides a rationale for the design of cell cycle inhibitor therapy in this malignancy.

Journal ArticleDOI
TL;DR: It is concluded that wild-type huntingtin acts in the cytoplasm of neurons to regulate the availability of REST/NRSF to its nuclear NRSE-binding site and that this control is lost in the pathology of Huntington disease.
Abstract: Huntingtin protein is mutated in Huntington disease. We previously reported that wild-type but not mutant huntingtin stimulates transcription of the gene encoding brain-derived neurotrophic factor (BDNF; ref. 2). Here we show that the neuron restrictive silencer element (NRSE) is the target of wild-type huntingtin activity on BDNF promoter II. Wild-type huntingtin inhibits the silencing activity of NRSE, increasing transcription of BDNF. We show that this effect occurs through cytoplasmic sequestering of repressor element-1 transcription factor/neuron restrictive silencer factor (REST/NRSF), the transcription factor that binds to NRSE. In contrast, aberrant accumulation of REST/NRSF in the nucleus is present in Huntington disease. We show that wild-type huntingtin coimmunoprecipitates with REST/NRSF and that less immunoprecipitated material is found in brain tissue with Huntington disease. We also report that wild-type huntingtin acts as a positive transcriptional regulator for other NRSE-containing genes involved in the maintenance of the neuronal phenotype. Consistently, loss of expression of NRSE-controlled neuronal genes is shown in cells, mice and human brain with Huntington disease. We conclude that wild-type huntingtin acts in the cytoplasm of neurons to regulate the availability of REST/NRSF to its nuclear NRSE-binding site and that this control is lost in the pathology of Huntington disease. These data identify a new mechanism by which mutation of huntingtin causes loss of transcription of neuronal genes.

Journal ArticleDOI
TL;DR: This review summarizes the recent evidence that synchronous neural oscillations reveal much about the origin and nature of cognitive processes such as memory, attention and consciousness.

Journal ArticleDOI
TL;DR: The availability of the human and mouse genome sequences has allowed the identification and comparison of their respective degradomes — the complete repertoire of proteases that are produced by these organisms.
Abstract: The availability of the human and mouse genome sequences has allowed the identification and comparison of their respective degradomes--the complete repertoire of proteases that are produced by these organisms. Because of the essential roles of proteolytic enzymes in the control of cell behaviour, survival and death, degradome analysis provides a useful framework for the global exploration of these protease-mediated functions in normal and pathological conditions.

Journal ArticleDOI
TL;DR: The addition of enfuvirtide to an optimized antiretroviral regimen provided significant antiretroviral and immunologic benefit through 24 weeks in patients who had previously received multiple antiretroviral drugs and had multidrug-resistant HIV-1 infection.
Abstract: Background The T-20 vs. Optimized Regimen Only Study 1 (TORO 1) was a randomized, open-label, phase 3 study of enfuvirtide (T-20), a human immunodeficiency virus type 1 (HIV-1) fusion inhibitor. Methods Patients from 48 sites in the United States, Canada, Mexico, and Brazil with at least six months of previous treatment with agents in three classes of antiretroviral drugs, resistance to drugs in these classes, or both, and with at least 5000 copies of HIV-1 RNA per milliliter of plasma were randomly assigned in a 2:1 ratio to receive enfuvirtide plus an optimized background regimen of three to five antiretroviral drugs or such a regimen alone (control group). The primary efficacy end point was the change in the plasma HIV-1 RNA level from base line to week 24. Results A total of 501 patients underwent randomization, and 491 received at least one dose of study drug and had at least one measurement of plasma HIV-1 RNA after treatment began. The two groups were balanced in terms of the median base-line HIV-1...

Journal ArticleDOI
TL;DR: The Wilkinson Microwave Anisotropy Probe (WMAP) has mapped the full sky in Stokes I, Q, and U parameters at frequencies of 23, 33, 41, 61, and 94 GHz as mentioned in this paper.
Abstract: The Wilkinson Microwave Anisotropy Probe (WMAP) has mapped the full sky in Stokes I, Q, and U parameters at frequencies of 23, 33, 41, 61, and 94 GHz. We detect correlations between the temperature and polarization maps significant at more than 10 σ. The correlations are inconsistent with instrument noise and are significantly larger than the upper limits established for potential systematic errors. The correlations are present in all WMAP frequency bands with similar amplitude from 23 to 94 GHz and are consistent with a superposition of a cosmic microwave background (CMB) signal with a weak foreground. The fitted CMB component is robust against different data combinations and fitting techniques. On small angular scales, the temperature-polarization data for multipoles l > 20 agree well with the signal predicted solely from the temperature power spectra, with no additional free parameters. We detect excess power on large angular scales (θ > 10°) compared to predictions based on the temperature power spectra alone. The excess power is well described by reionization at redshift 11 < zr < 30 at 95% confidence, depending on the ionization history. A model-independent fit to reionization optical depth yields results consistent with the best-fit Λ-dominated cold dark matter model, with best-fit value τ = 0.17 ± 0.04 at 68% confidence, including systematic and foreground uncertainties. This value is larger than expected given the detection of a Gunn-Peterson trough in the absorption spectra of distant quasars and implies that the universe has a complex ionization history: WMAP has detected the signal from an early epoch of reionization.

Journal ArticleDOI
01 Nov 2003-Peptides
TL;DR: Structure-activity studies of these peptides reveal two main requirements for antimicrobial activity: a cationic charge and an induced amphipathic conformation. The role of membrane lipid composition, specifically non-bilayer lipids, in peptide activity is also discussed.

Journal ArticleDOI
TL;DR: Biochemical-genetic and genomic approaches in Arabidopsis thaliana promise to be particularly useful in identifying and characterizing gene products involved in wax biosynthesis, secretion and function, and the current review will, therefore, focus on Arabidopsis as a model for studying these processes.

Journal ArticleDOI
TL;DR: In this article, the authors examined the optimal monetary policy under commitment, focusing on the nature of price adjustment in determining policy and found that the optimal policy leads to a fixed exchange rate, even in the presence of country-specific shocks.
Abstract: This paper develops a welfare-based model of monetary policy in an open economy. We examine the optimal monetary policy under commitment, focusing on the nature of price adjustment in determining policy. We investigate the implications of these policies for exchange-rate flexibility. The traditional approach maintains that exchange-rate flexibility is desirable in the presence of real country-specific shocks that require adjustment in relative prices. However, in the light of empirical evidence on nominal price response to exchange-rate changes—specifically, that there appears to be a large degree of local-currency pricing (LCP) in industrialized countries—the expenditure-switching role played by nominal exchange rates may be exaggerated in the traditional literature. In the presence of LCP, we find that the optimal monetary policy leads to a fixed exchange rate, even in the presence of country-specific shocks. This is true whether monetary policy is chosen cooperatively or non-cooperatively among countries. To what extent does independent monetary policy in an open economy require flexibility of the nominal exchange rate? The modern case for flexible exchange rates goes back to Friedman (1953). Real country-specific productivity or demand shocks require adjustment of relative price levels between countries. If nominal prices adjusted quickly, Friedman argues, the choice of exchange-rate regime would be irrelevant because the relative price adjustment could be achieved by nominal price changes: If internal prices were as flexible as exchange rates, it would make little economic difference whether adjustments were brought about by changes in exchange rates or by equivalent changes in internal prices. But this condition is clearly not fulfilled. The exchange rate is potentially flexible in the absence of administrative action to freeze it. At least in the modern world, internal prices are highly inflexible.