
Showing papers by "University of California, San Diego" published in 2001


Book ChapterDOI
01 Jan 2001
TL;DR: In this article, it is shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation, and measures of causal lag and causal strength can then be constructed.
Abstract: There occurs on some occasions a difficulty in deciding the direction of causality between two related variables and also whether or not feedback is occurring. Testable definitions of causality and feedback are proposed and illustrated by use of simple two-variable models. The important problem of apparent instantaneous causality is discussed and it is suggested that the problem often arises due to slowness in recording information or because a sufficiently wide class of possible causal variables has not been used. It can be shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation. Measures of causal lag and causal strength can then be constructed. A generalization of this result with the partial cross spectrum is suggested. The object of this paper is to throw light on the relationships between certain classes of econometric models involving feedback and the functions arising in spectral analysis, particularly the cross spectrum and the partial cross spectrum. Causality and feedback are here defined in an explicit and testable fashion. It is shown that in the two-variable case the feedback mechanism can be broken down into two causal relations and that the cross spectrum can be considered as the sum of two cross spectra, each closely connected with one of the causations. The next three sections of the paper briefly introduce those aspects of spectral methods, model building, and causality which are required later. Section IV presents the results for the two-variable case and Section V generalizes these results for three variables.
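A minimal illustration of the testable, time-domain side of this definition (not the paper's cross-spectral decomposition): the grangercausalitytests routine from statsmodels applied to synthetic data in which one series feeds the other with a one-step lag. The series, lag order, and choice of library are assumptions made for the sketch, not material from the paper.

```python
# Minimal sketch: a time-domain Granger-causality check on two synthetic series,
# in the spirit of the testable definition above (not the paper's spectral method).
# Requires numpy and statsmodels; series names and lag order are illustrative choices.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.4 * y[t - 1] + 0.3 * x[t - 1] + rng.normal()  # x "causes" y with a one-step lag

# Test whether lagged x helps predict y beyond y's own past (null: no Granger causality).
data = np.column_stack([y, x])  # statsmodels tests whether column 2 Granger-causes column 1
results = grangercausalitytests(data, maxlag=2)
```

In this synthetic setup the test should reject the null that x does not Granger-cause y, while the same test with the columns swapped should not.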

11,896 citations


Journal ArticleDOI

9,849 citations


Journal ArticleDOI
06 Apr 2001-Science
TL;DR: These experiments directly confirm the predictions of Maxwell's equations that n is given by the negative square root of ε·μ for the frequencies where both the permittivity and the permeability are negative.
Abstract: We present experimental scattering data at microwave frequencies on a structured metamaterial that exhibits a frequency band where the effective index of refraction (n) is negative. The material consists of a two-dimensional array of repeated unit cells of copper strips and split ring resonators on interlocking strips of standard circuit board material. By measuring the scattering angle of the transmitted beam through a prism fabricated from this material, we determine the effective n, appropriate to Snell's law. These experiments directly confirm the predictions of Maxwell's equations that n is given by the negative square root of ε·μ for the frequencies where both the permittivity (ε) and the permeability (μ) are negative. Configurations of geometrical optical designs are now possible that could not be realized by positive index materials.
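A minimal sketch of the Snell's-law inversion the abstract describes, assuming the beam is normally incident on the first face of a wedge-shaped prism so that the effective index follows from the exit angle alone; the wedge and deflection angles below are illustrative placeholders, not the measured data.

```python
# Minimal sketch: recovering an effective refractive index from a prism measurement
# via Snell's law, assuming normal incidence on the prism's first face.
# The angles below are illustrative placeholders, not the experimental values.
import math

wedge_angle_deg = 18.0        # prism wedge angle (assumed)
refraction_angle_deg = -10.0  # measured exit angle; negative = deflection to the "wrong" side

# Snell's law at the exit face: n * sin(wedge) = sin(refraction angle)
n_eff = math.sin(math.radians(refraction_angle_deg)) / math.sin(math.radians(wedge_angle_deg))
print(f"effective index n = {n_eff:.2f}")  # a negative deflection angle yields n < 0
```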

8,477 citations


Journal ArticleDOI
TL;DR: The application of numerical methods is presented to enable the trivially parallel solution of the Poisson-Boltzmann equation for supramolecular structures that are orders of magnitude larger in size.
Abstract: Evaluation of the electrostatic properties of biomolecules has become a standard practice in molecular biophysics. Foremost among the models used to elucidate the electrostatic potential is the Poisson-Boltzmann equation; however, existing methods for solving this equation have limited the scope of accurate electrostatic calculations to relatively small biomolecular systems. Here we present the application of numerical methods to enable the trivially parallel solution of the Poisson-Boltzmann equation for supramolecular structures that are orders of magnitude larger in size. As a demonstration of this methodology, electrostatic potentials have been calculated for large microtubule and ribosome structures. The results point to the likely role of electrostatics in a variety of activities of these structures.
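As a toy illustration of the kind of boundary-value problem involved, and not the authors' adaptive, parallel multilevel method, the sketch below solves the one-dimensional linearized Poisson-Boltzmann (Debye-Hückel) equation by finite differences; the grid, domain, and parameters are arbitrary.

```python
# Toy illustration only: the 1D linearized Poisson-Boltzmann (Debye-Hueckel) equation,
#   d^2(phi)/dx^2 = kappa^2 * phi,   phi(0) = phi0, phi(L) = 0,
# solved by finite differences. This is not the paper's parallel multilevel method;
# it only shows the type of boundary-value problem involved. Units and values are arbitrary.
import numpy as np

N, L = 200, 10.0          # grid points and domain length (arbitrary)
kappa, phi0 = 1.0, 1.0    # inverse Debye length and surface potential (arbitrary)
h = L / (N - 1)

# Assemble the tridiagonal system for the interior points.
A = np.zeros((N - 2, N - 2))
b = np.zeros(N - 2)
for i in range(N - 2):
    A[i, i] = -2.0 - (kappa * h) ** 2
    if i > 0:
        A[i, i - 1] = 1.0
    if i < N - 3:
        A[i, i + 1] = 1.0
b[0] = -phi0              # boundary condition phi(0) = phi0 enters the first equation

phi_interior = np.linalg.solve(A, b)
phi = np.concatenate([[phi0], phi_interior, [0.0]])
print(phi[::40].round(3))  # sampled values decay roughly like phi0 * exp(-kappa * x)
```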

6,918 citations


Journal ArticleDOI
27 Jul 2001-Science
TL;DR: Paleoecological, archaeological, and historical data show that time lags of decades to centuries occurred between the onset of overfishing and consequent changes in ecological communities, because unfished species of similar trophic level assumed the ecological roles of over-fished species until they too were overfished or died of epidemic diseases related to overcrowding as mentioned in this paper.
Abstract: Ecological extinction caused by overfishing precedes all other pervasive human disturbance to coastal ecosystems, including pollution, degradation of water quality, and anthropogenic climate change. Historical abundances of large consumer species were fantastically large in comparison with recent observations. Paleoecological, archaeological, and historical data show that time lags of decades to centuries occurred between the onset of overfishing and consequent changes in ecological communities, because unfished species of similar trophic level assumed the ecological roles of overfished species until they too were overfished or died of epidemic diseases related to overcrowding. Retrospective data not only help to clarify underlying causes and rates of ecological change, but they also demonstrate achievable goals for restoration and management of coastal ecosystems that could not even be contemplated based on the limited perspective of recent observations alone.

5,411 citations


Journal ArticleDOI
01 Mar 2001-Nature
TL;DR: Recent studies have begun to shed light on the physiological functions of MAPK cascades in the control of gene expression, cell proliferation and programmed cell death.
Abstract: Mitogen-activated protein kinases (MAPKs) are important signal transducing enzymes, unique to eukaryotes, that are involved in many facets of cellular regulation. Initial research concentrated on defining the components and organization of MAPK signalling cascades, but recent studies have begun to shed light on the physiological functions of these cascades in the control of gene expression, cell proliferation and programmed cell death.

4,973 citations


Journal ArticleDOI
TL;DR: A group of experts on aging and MCI from around the world in the fields of neurology, psychiatry, geriatrics, neuropsychology, neuroimaging, neuropathology, clinical trials, and ethics was convened to summarize the current state of the field of MCI.
Abstract: The field of aging and dementia is focusing on the characterization of the earliest stages of cognitive impairment. Recent research has identified a transitional state between the cognitive changes of normal aging and Alzheimer's disease (AD), known as mild cognitive impairment (MCI). Mild cognitive impairment refers to the clinical condition between normal aging and AD in which persons experience memory loss to a greater extent than one would expect for age, yet they do not meet currently accepted criteria for clinically probable AD. When these persons are observed longitudinally, they progress to clinically probable AD at a considerably accelerated rate compared with healthy age-matched individuals. Consequently, this condition has been recognized as suitable for possible therapeutic intervention, and several multicenter international treatment trials are under way. Because this is a topic of intense interest, a group of experts on aging and MCI from around the world in the fields of neurology, psychiatry, geriatrics, neuropsychology, neuroimaging, neuropathology, clinical trials, and ethics was convened to summarize the current state of the field of MCI. Participants reviewed the world scientific literature on aging and MCI and summarized the various topics with respect to available evidence on MCI. Diagnostic criteria and clinical outcomes of these subjects are available in the literature. Mild cognitive impairment is believed to be a high-risk condition for the development of clinically probable AD. Heterogeneity in the use of the term was recognized, and subclassifications were suggested. While no treatments are recommended for MCI currently, clinical trials regarding potential therapies are under way. Recommendations concerning ethical issues in the diagnosis and the management of subjects with MCI were made.

4,424 citations


Journal ArticleDOI
07 Dec 2001-Science
TL;DR: Human activities are releasing tiny particles (aerosols) into the atmosphere that enhance scattering and absorption of solar radiation, which can lead to a weaker hydrological cycle, which connects directly to availability and quality of fresh water, a major environmental issue of the 21st century.
Abstract: Human activities are releasing tiny particles (aerosols) into the atmosphere. These human-made aerosols enhance scattering and absorption of solar radiation. They also produce brighter clouds that are less efficient at releasing precipitation. These in turn lead to large reductions in the amount of solar irradiance reaching Earth's surface, a corresponding increase in solar heating of the atmosphere, changes in the atmospheric temperature structure, suppression of rainfall, and less efficient removal of pollutants. These aerosol effects can lead to a weaker hydrological cycle, which connects directly to availability and quality of fresh water, a major environmental issue of the 21st century.

3,469 citations


Journal ArticleDOI
23 Feb 2001-Cell
TL;DR: Elevated levels of serum cholesterol are probably unique in being sufficient to drive the development of atherosclerosis in humans and experimental animals, even in the absence of other known risk factors; LDL is cleared through the hepatic LDL receptor pathway, as evidenced by the fact that lack of functional LDL receptors is responsible for the massive accumulation of LDL.

2,995 citations


Journal ArticleDOI
TL;DR: The idea underlying cointegration allows specification of models that capture part of such beliefs, at least for a particular type of variable that is frequently found to occur in macroeconomics.
Abstract: At the least sophisticated level of economic theory lies the belief that certain pairs of economic variables should not diverge from each other by too great an extent, at least in the long-run. Thus, such variables may drift apart in the short-run or according to seasonal factors, but if they continue to be too far apart in the long-run, then economic forces, such as a market mechanism or government intervention, will begin to bring them together again. Examples of such variables are interest rates on assets of different maturities, prices of a commodity in different parts of the country, income and expenditure by local government and the value of sales and production costs of an industry. Other possible examples would be prices and wages, imports and exports, market prices of substitute commodities, money supply and prices and spot and future prices of a commodity. In some cases an economic theory involving equilibrium concepts might suggest close relations in the long-run, possibly with the addition of yet further variables. However, in each case the correctness of the beliefs about long-term relatedness is an empirical question. The idea underlying cointegration allows specification of models that capture part of such beliefs, at least for a particular type of variable that is frequently found to occur in macroeconomics. Since a concept such as the long-run is a dynamic one, the natural area for these ideas is that of time-series theory and analysis. It is thus necessary to start by introducing some relevant time series models. Consider a single series x_t, measured at equal intervals of time. Time series theory starts by considering the generating mechanism for the series. This mechanism should be able to generate all of the statistical properties of the series, or at the very least the conditional mean, variance and temporal autocorrelations, that is the 'linear properties' of the series, conditional on past data. Some series appear to be 'stationary', which essentially implies that the linear properties exist and are time-invariant. Here we are concerned with the weaker but more technical ...
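A minimal empirical counterpart to this idea, under the assumption that a standard Engle-Granger style test (here the coint routine in statsmodels) is an acceptable stand-in for the formal treatment in the paper: two simulated I(1) series built around a shared random walk should test as cointegrated.

```python
# Minimal sketch: checking whether two simulated I(1) series share a common stochastic
# trend, using the Engle-Granger style test in statsmodels (an assumed, standard tool;
# not code from the paper). Series and parameters are synthetic.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
n = 500
trend = np.cumsum(rng.normal(size=n))      # a shared random-walk component
x = trend + rng.normal(scale=0.5, size=n)  # two series that wander but do not drift apart
y = 2.0 * trend + rng.normal(scale=0.5, size=n)

t_stat, p_value, _ = coint(x, y)
print(f"Engle-Granger t-statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
# A small p-value is evidence that x and y are cointegrated: a long-run relation ties them.
```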

2,457 citations


Journal ArticleDOI
19 Sep 2001-JAMA
TL;DR: The results demonstrate that underdiagnosis of PAD in primary care practice may be a barrier to effective secondary prevention of the high ischemic cardiovascular risk associated with PAD.
Abstract: Context: Peripheral arterial disease (PAD) is a manifestation of systemic atherosclerosis that is common and is associated with an increased risk of death and ischemic events, yet may be underdiagnosed in primary care practice. Objective: To assess the feasibility of detecting PAD in primary care clinics, patient and physician awareness of PAD, and intensity of risk factor treatment and use of antiplatelet therapies in primary care clinics. Design and Setting: The PAD Awareness, Risk, and Treatment: New Resources for Survival (PARTNERS) program, a multicenter, cross-sectional study conducted at 27 sites in 25 cities and 350 primary care practices throughout the United States in June-October 1999. Patients: A total of 6979 patients aged 70 years or older or aged 50 through 69 years with history of cigarette smoking or diabetes were evaluated by history and by measurement of the ankle-brachial index (ABI). PAD was considered present if the ABI was 0.90 or less, if it was documented in the medical record, or if there was a history of limb revascularization. Cardiovascular disease (CVD) was defined as a history of atherosclerotic coronary, cerebral, or abdominal aortic aneurysmal disease. Main Outcome Measures: Frequency of detection of PAD; physician and patient awareness of PAD diagnosis; treatment intensity in PAD patients compared with treatment of other forms of CVD and with patients without clinical evidence of atherosclerosis. Results: PAD was detected in 1865 patients (29%); 825 of these (44%) had PAD only, without evidence of CVD. Overall, 13% had PAD only, 16% had PAD and CVD, 24% had CVD only, and 47% had neither PAD nor CVD (the reference group). There were 457 patients (55%) with newly diagnosed PAD only and 366 (35%) with PAD and CVD who were newly diagnosed during the survey. Eighty-three percent of patients with prior PAD were aware of their diagnosis, but only 49% of physicians were aware of this diagnosis. Among patients with PAD, classic claudication was distinctly uncommon (11%). Patients with PAD had similar atherosclerosis risk factor profiles compared with those who had CVD. Smoking behavior was more frequently treated in patients with new (53%) and prior PAD (51%) only than in those with CVD only (35%; P < .001). Hypertension was treated less frequently in new (84%) and prior PAD (88%) only vs CVD only (95%; P < .001), and hyperlipidemia was treated less frequently in new (44%) and prior PAD (56%) only vs CVD only (73%; P < .001). Antiplatelet medications were prescribed less often in patients with new (33%) and prior PAD (54%) only vs CVD only (71%; P < .001). Treatment intensity for diabetes and use of hormone replacement therapy in women were similar across all groups. Conclusions: Prevalence of PAD in primary care practices is high, yet physician awareness of the PAD diagnosis is relatively low. A simple ABI measurement identified a large number of patients with previously unrecognized PAD. Atherosclerosis risk factors were very prevalent in PAD patients, but these patients received less intensive treatment for lipid disorders and hypertension and were prescribed antiplatelet therapy less frequently than were patients with CVD. These results demonstrate that underdiagnosis of PAD in primary care practice may be a barrier to effective secondary prevention of the high ischemic cardiovascular risk associated with PAD.
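The study's screening criterion reduces to a simple ratio and threshold, encoded directly in the sketch below (ABI = ankle systolic pressure / brachial systolic pressure, with PAD considered present when ABI is 0.90 or less). The example pressures are illustrative, and real ABI protocols involve measurement details not covered in the abstract.

```python
# Minimal sketch of the ankle-brachial index (ABI) criterion used in the study:
# ABI = ankle systolic pressure / brachial systolic pressure, with PAD considered
# present when ABI is 0.90 or less. The pressures below are illustrative values.
def ankle_brachial_index(ankle_systolic_mmhg: float, brachial_systolic_mmhg: float) -> float:
    """Return the ABI for one limb."""
    return ankle_systolic_mmhg / brachial_systolic_mmhg

def pad_by_abi(abi: float) -> bool:
    """Apply the study's threshold: ABI <= 0.90 indicates PAD."""
    return abi <= 0.90

abi = ankle_brachial_index(ankle_systolic_mmhg=100.0, brachial_systolic_mmhg=130.0)
print(f"ABI = {abi:.2f}, PAD by ABI criterion: {pad_by_abi(abi)}")
```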

Proceedings Article
04 Aug 2001
TL;DR: It is argued that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods, and the recommended way of applying one of these methods is to learn a classifier from the training set and then to compute optimal decisions explicitly using the probability estimates given by the classifier.
Abstract: This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier.

1 Making decisions based on a cost matrix

Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectation is computed using the conditional probability of each class given the example. Mathematically, let C(i, j) be the entry in a cost matrix giving the cost of predicting class i when the true class is j. If i = j the prediction is correct, while if i ≠ j the prediction is incorrect. The optimal prediction for an example x is the class i that minimizes

L(x, i) = Σ_j P(j | x) C(i, j)     (1)

Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For each i, L(x, i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P(j | x) of each class j being the true class of x. For an example x, making the prediction i means acting as if i is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to act as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is most likely legitimate.

1.1 Cost matrix properties

A cost matrix C always has the following structure when there are only two classes:

                    actual negative     actual positive
predict negative    C(0, 0) = c00       C(0, 1) = c01
predict positive    C(1, 0) = c10       C(1, 1) = c11

Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, while columns correspond to actual classes, i.e. row/column = i/j = predicted/actual. In our notation, the cost of a false positive is c10 while the cost of a false negative is c01. Conceptually, the cost of labeling an example incorrectly should always be greater than the cost of labeling it correctly. Mathematically, it should always be the case that c10 > c00 and c01 > c11. We call these conditions the "reasonableness" conditions. Suppose that the first reasonableness condition is violated, so c00 ≥ c10 but still c01 > c11. In this case the optimal policy is to label all examples positive. Similarly, if c10 > c00 but c11 ≥ c01, the optimal policy is to label all examples negative. We say that row m dominates row n in a cost matrix C if for all j, C(m, j) ≥ C(n, j). In this case the cost of predicting n is no greater than the cost of predicting m, regardless of what the true class is. So it is optimal never to predict m.
As a special case, the optimal prediction is always n if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions:
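A minimal sketch of the decision rule in equation (1): pick the class whose expected cost, under the classifier's probability estimates, is smallest. The cost matrix and probabilities below are illustrative, and the example mirrors the credit-card point above, where the optimal prediction is not the most probable class.

```python
# Minimal sketch of the decision rule above: predict the class i minimizing
# L(x, i) = sum_j P(j|x) * C(i, j). The cost matrix and probabilities are illustrative.
import numpy as np

# Rows = predicted class, columns = actual class (0 = negative, 1 = positive).
C = np.array([[0.0, 10.0],   # predicting negative: c00, c01 (a false negative costs 10)
              [1.0,  0.0]])  # predicting positive: c10, c11 (a false positive costs 1)

def optimal_prediction(p, cost_matrix):
    """p[j] is the estimated probability that the true class is j."""
    expected_costs = cost_matrix @ np.asarray(p)   # expected cost of each possible prediction
    return int(np.argmin(expected_costs)), expected_costs

p = [0.8, 0.2]                      # the example is most likely negative...
pred, costs = optimal_prediction(p, C)
print(pred, costs)                  # ...yet predicting positive (cost 0.8) beats negative (cost 2.0)
```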

Journal ArticleDOI
31 Aug 2001-Science
TL;DR: It is shown that high doses of salicylates reverse hyperglycemia, hyperinsulinemia, and dyslipidemia in obese rodents by sensitizing insulin signaling and identifies the IKKβ pathway as a target for insulin sensitization.
Abstract: We show that high doses of salicylates reverse hyperglycemia, hyperinsulinemia, and dyslipidemia in obese rodents by sensitizing insulin signaling. Activation or overexpression of the IkappaB kinase beta (IKKbeta) attenuated insulin signaling in cultured cells, whereas IKKbeta inhibition reversed insulin resistance. Thus, IKKbeta, rather than the cyclooxygenases, appears to be the relevant molecular target. Heterozygous deletion (Ikkbeta+/-) protected against the development of insulin resistance during high-fat feeding and in obese Lep(ob/ob) mice. These findings implicate an inflammatory process in the pathogenesis of insulin resistance in obesity and type 2 diabetes mellitus and identify the IKKbeta pathway as a target for insulin sensitization.

Journal ArticleDOI
TL;DR: In this paper, long-term measurements by the AERONET program of spectral aerosol optical depth, precipitable water, and derived Angstrom exponent were analyzed and compiled into an aerosol optical properties climatology.
Abstract: Long-term measurements by the AERONET program of spectral aerosol optical depth, precipitable water, and derived Angstrom exponent were analyzed and compiled into an aerosol optical properties climatology. Quality assured monthly means are presented and described for 9 primary sites and 21 additional multiyear sites with distinct aerosol regimes representing tropical biomass burning, boreal forests, midlatitude humid climates, midlatitude dry climates, oceanic sites, desert sites, and background sites. Seasonal trends for each of these nine sites are discussed and climatic averages presented.
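The "derived Angstrom exponent" mentioned here is, in its simplest two-wavelength form, a log-ratio of optical depths; the sketch below computes that standard formula. AERONET's own processing may fit over several wavelengths, so treat this as an approximation, and the optical depths and wavelengths shown are illustrative.

```python
# Minimal sketch: the standard two-wavelength Angstrom exponent,
#   alpha = -ln(tau1 / tau2) / ln(lambda1 / lambda2),
# the kind of quantity "derived Angstrom exponent" refers to (AERONET's own processing
# may fit over several wavelengths). Optical depths below are illustrative.
import math

def angstrom_exponent(tau1: float, lambda1_nm: float, tau2: float, lambda2_nm: float) -> float:
    return -math.log(tau1 / tau2) / math.log(lambda1_nm / lambda2_nm)

# Example: aerosol optical depth 0.30 at 440 nm and 0.15 at 870 nm.
alpha = angstrom_exponent(0.30, 440.0, 0.15, 870.0)
print(f"Angstrom exponent ~ {alpha:.2f}")  # larger values indicate smaller (fine-mode) particles
```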

Journal ArticleDOI
TL;DR: The limited longitudinal database indicates that the UHDRS may be useful for tracking changes in the clinical features of HD over time and there was an excellent degree of interrater reliability for the motor scores.
Abstract: The Unified Huntington's disease Rating Scale (UHDRS) was developed as a clinical rating scale to assess four domains of clinical performance and capacity in HD: motor function, cognitive function, behavioral abnormalities, and functional capacity. We assessed the internal consistency and the intercorrelations for the four domains and examined changes in ratings over time. We also performed an interrater reliability study of the motor assessment. We found there was a high degree of internal consistency within each of the domains of the UHDRS and that there were significant intercorrelations between the domains of the UHDRS, with the exception of the total behavioral score. There was an excellent degree of interrater reliability for the motor scores. Our limited longitudinal database indicates that the UHDRS may be useful for tracking changes in the clinical features of HD over time. The UHDRS assesses relevant clinical features of HD and appears to be appropriate for repeated administration during clinical studies.

01 Mar 2001
TL;DR: In this article, it is proposed that when a charge current circulates in a paramagnetic metal, a transverse spin imbalance will be generated, giving rise to a spin Hall voltage, in the absence of charge current and magnetic field.
Abstract: It is proposed that when a charge current circulates in a paramagnetic metal a transverse spin imbalance will be generated, giving rise to a "spin Hall voltage." Similarly, it is proposed that when a spin current circulates a transverse charge imbalance will be generated, giving rise to a Hall voltage, in the absence of charge current and magnetic field. Based on these principles we propose an experiment to generate and detect a spin current in a paramagnetic metal.

Journal ArticleDOI
TL;DR: Diagnostic criteria for dementia have improved since the 1994 practice parameter, and further research is needed to improve clinical definitions of dementia and its subtypes, as well as to determine the utility of various instruments of neuroimaging, biomarkers, and genetic testing in increasing diagnostic accuracy.
Abstract: Objective: To update the 1994 practice parameter for the diagnosis of dementia in the elderly. Background: The AAN previously published a practice parameter on dementia in 1994. New research and clinical developments warrant an update of some aspects of diagnosis. Methods: Studies published in English from 1985 through 1999 were identified that addressed four questions: 1) Are the current criteria for the diagnosis of dementia reliable? 2) Are the current diagnostic criteria able to establish a diagnosis for the prevalent dementias in the elderly? 3) Do laboratory tests improve the accuracy of the clinical diagnosis of dementing illness? 4) What comorbidities should be evaluated in elderly patients undergoing an initial assessment for dementia? Recommendations: Based on evidence in the literature, the following recommendations are made. 1) The DSM-III-R definition of dementia is reliable and should be used (Guideline). 2) The National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) or the Diagnostic and Statistical Manual, 3rd edition, revised (DSM-III-R) diagnostic criteria for AD and clinical criteria for Creutzfeldt-Jakob disease (CJD) have sufficient reliability and validity and should be used (Guideline). Diagnostic criteria for vascular dementia, dementia with Lewy bodies, and frontotemporal dementia may be of use in clinical practice (Option) but have imperfect reliability and validity. 3) Structural neuroimaging with either a noncontrast CT or MR scan in the initial evaluation of patients with dementia is appropriate. Because of insufficient data on validity, no other imaging procedure is recommended (Guideline). There are currently no genetic markers recommended for routine diagnostic purposes (Guideline). The CSF 14-3-3 protein is useful for confirming or rejecting the diagnosis of CJD (Guideline). 4) Screening for depression, B12 deficiency, and hypothyroidism should be performed (Guideline). Screening for syphilis in patients with dementia is not justified unless clinical suspicion for neurosyphilis is present (Guideline). Conclusions: Diagnostic criteria for dementia have improved since the 1994 practice parameter. Further research is needed to improve clinical definitions of dementia and its subtypes, as well as to determine the utility of various instruments of neuroimaging, biomarkers, and genetic testing in increasing diagnostic accuracy.

Journal ArticleDOI
30 Apr 2001-Oncogene
TL;DR: Amongst the Jun proteins, c-Jun is unique in its ability to positively regulate cell proliferation through the repression of tumor suppressor gene expression and function, and induction of cyclin D1 transcription.
Abstract: A plethora of physiological and pathological stimuli induce and activate a group of DNA binding proteins that form AP-1 dimers. These proteins include the Jun, Fos and ATF subgroups of transcription factors. Recent studies using cells and mice deficient in individual AP-1 proteins have begun to shed light on their physiological functions in the control of cell proliferation, neoplastic transformation and apoptosis. Above all such studies have identified some of the target genes that mediate the effects of AP-1 proteins on cell proliferation and death. There is evidence that AP-1 proteins, mostly those that belong to the Jun group, control cell life and death through their ability to regulate the expression and function of cell cycle regulators such as Cyclin D1, p53, p21(cip1/waf1), p19(ARF) and p16. Amongst the Jun proteins, c-Jun is unique in its ability to positively regulate cell proliferation through the repression of tumor suppressor gene expression and function, and induction of cyclin D1 transcription. These actions are antagonized by JunB, which upregulates tumor suppressor genes and represses cyclin D1. An especially important target for AP-1 effects on cell life and death is the tumor suppressor p53, whose expression as well as transcriptional activity, are modulated by AP-1 proteins.

Journal ArticleDOI
TL;DR: This article reviews the literature on prepulse inhibition (PPI) in humans; PPI occurs when a relatively weak sensory event (the prepulse), presented 30-500 ms before a strong startle-inducing stimulus, reduces the magnitude of the startle response.
Abstract: Rationale: Since the mid-1970s, cross-species translational studies of prepulse inhibition (PPI) have increased at an astounding pace as the value of this neurobiologically informative measure has been optimized. PPI occurs when a relatively weak sensory event (the prepulse) is presented 30–500 ms before a strong startle-inducing stimulus, and reduces the magnitude of the startle response. In humans, PPI occurs in a robust, predictable manner when the prepulse and startling stimuli occur in either the same or different modalities (acoustic, visual, or cutaneous). Objective: This review covers three areas of interest in human PPI studies. First, we review the normal influences on PPI related to the underlying construct of sensori- (prepulse) motor (startle reflex) gating. Second, we review PPI studies in psychopathological disorders that form a family of gating disorders. Third, we review the relatively limited but interesting and rapidly expanding literature on pharmacological influences on PPI in humans. Methods: All studies identified by a computerized literature search that addressed the three topics of this review were compiled and evaluated. The principal studies were summarized in appropriate tables. Results: The major influences on PPI as a measure of sensorimotor gating can be grouped into 11 domains. Most of these domains are similar across species, supporting the value of PPI studies in translational comparisons across species. The most prominent literature describing deficits in PPI in psychiatrically defined groups features schizophrenia-spectrum patients and their clinically unaffected relatives. These findings support the use of PPI as an endophenotype in genetic studies. Additional groups of psychopathologically disordered patients with neuropathology involving cortico-striato-pallido-pontine circuits exhibit poor gating of motor, sensory, or cognitive information and corresponding PPI deficits. These groups include patients with obsessive compulsive disorder, Tourette's syndrome, blepharospasm, temporal lobe epilepsy with psychosis, enuresis, and perhaps post-traumatic stress disorder (PTSD). Several pharmacological manipulations have been examined for their effects on PPI in healthy human subjects. In some cases, the alterations in PPI produced by these drugs in animals correspond to similar effects in humans. Specifically, dopamine agonists disrupt and nicotine increases PPI in at least some human studies. With some other compounds, however, the effects seen in humans appear to differ from those reported in animals. For example, the PPI-increasing effects of the glutamate antagonist ketamine and the serotonin releaser MDMA in humans are opposite to the PPI-disruptive effects of these compounds in rodents. Conclusions: Considerable evidence supports a high degree of homology between measures of PPI in rodents and humans, consistent with the use of PPI as a cross-species measure of sensorimotor gating. Multiple investigations of PPI using a variety of methods and parameters confirm that deficits in PPI are evident in schizophrenia-spectrum patients and in certain other disorders in which gating mechanisms are disturbed. In contrast to the extensive literature on clinical populations, much more work is required to clarify the degree of correspondence between pharmacological effects on PPI in healthy humans and those reported in animals.

Journal ArticleDOI
TL;DR: This review focuses on proteins that transduce the signals generated at TNF receptors to nuclear targets such as AP-1 and NF-kappaB, which are likely to be used by other members of the TNF family.

Journal ArticleDOI
TL;DR: In this paper, the Lagrangian was constructed for an effective theory of highly energetic quarks with energy Q, interacting with collinear and soft gluons, and the heavy to light currents were matched onto operators in the effective theory at one loop.
Abstract: We construct the Lagrangian for an effective theory of highly energetic quarks with energy Q, interacting with collinear and soft gluons. This theory has two low energy scales, the transverse momentum of the collinear particles, $p_\perp$, and the scale $p_\perp^2/Q$. The heavy to light currents are matched onto operators in the effective theory at one loop and the renormalization group equations for the corresponding Wilson coefficients are solved. This running is used to sum Sudakov logarithms in inclusive $B \to X_s \gamma$ and $B \to X_u \ell \bar{\nu}$ decays. We also show that the interactions with collinear gluons preserve the relations for the soft part of the form factors for heavy-to-light decays found by Charles et al. [Phys. Rev. D 60, 014001 (1999)], establishing these relations in the large energy limit of QCD.

Book ChapterDOI
01 Jan 2001
TL;DR: In this paper, the authors consider the relationship between causation and co-integration, and suggest that if a pair of I(1) series are cointegrated, there must be causation in at least one direction.
Abstract: The paper considers three separate but related topics. (i) What is the relationship between causation and co-integration? If a pair of I(1) series are co-integrated, there must be causation in at least one direction. An implication is that some tests of causation based on differenced series may have missed one source of causation. (ii) Is there a need for a definition of 'instantaneous causation' in a decision science? It is argued that no such definition is required. (iii) Can causality tests be used for policy evaluation? It is suggested that these tests are useful, but that they should be evaluated with care.

Journal ArticleDOI
08 Nov 2001-Nature
TL;DR: It is reported that the NSAIDs ibuprofen, indomethacin and sulindac sulphide preferentially decrease the highly amyloidogenic Aβ42 peptide (the 42-residue isoform of the amyloid-β peptide) produced from a variety of cultured cells by as much as 80%.
Abstract: Epidemiological studies have documented a reduced prevalence of Alzheimer's disease among users of nonsteroidal anti-inflammatory drugs (NSAIDs). It has been proposed that NSAIDs exert their beneficial effects in part by reducing neurotoxic inflammatory responses in the brain, although this mechanism has not been proved. Here we report that the NSAIDs ibuprofen, indomethacin and sulindac sulphide preferentially decrease the highly amyloidogenic Aβ42 peptide (the 42-residue isoform of the amyloid-β peptide) produced from a variety of cultured cells by as much as 80%. This effect was not seen in all NSAIDs and seems not to be mediated by inhibition of cyclooxygenase (COX) activity, the principal pharmacological target of NSAIDs. Furthermore, short-term administration of ibuprofen to mice that produce mutant β-amyloid precursor protein (APP) lowered their brain levels of Aβ42. In cultured cells, the decrease in Aβ42 secretion was accompanied by an increase in the Aβ(1–38) isoform, indicating that NSAIDs subtly alter γ-secretase activity without significantly perturbing other APP processing pathways or Notch cleavage. Our findings suggest that NSAIDs directly affect amyloid pathology in the brain by reducing Aβ42 peptide levels independently of COX activity and that this Aβ42-lowering activity could be optimized to selectively target the pathogenic Aβ42 species.

Journal ArticleDOI
TL;DR: Bosentan increases exercise capacity and improves haemodynamics in patients with pulmonary hypertension, suggesting that endothelin has an important role in pulmonary hypertension.

Journal ArticleDOI
TL;DR: A gene, called CIAS1, is expressed in peripheral blood leukocytes and encodes a protein with a pyrin domain, a nucleotide-binding site (NBS, NACHT subfamily) domain and a leucine-rich repeat (LRR) motif region, suggesting a role in the regulation of inflammation and apoptosis.
Abstract: Familial cold autoinflammatory syndrome (FCAS, MIM 120100), commonly known as familial cold urticaria (FCU), is an autosomal-dominant systemic inflammatory disease characterized by intermittent episodes of rash, arthralgia, fever and conjunctivitis after generalized exposure to cold. FCAS was previously mapped to a 10-cM region on chromosome 1q44 (refs. 5,6). Muckle-Wells syndrome (MWS; MIM 191900), which also maps to chromosome 1q44, is an autosomal-dominant periodic fever syndrome with a similar phenotype except that symptoms are not precipitated by cold exposure and that sensorineural hearing loss is frequently also present. To identify the genes for FCAS and MWS, we screened exons in the 1q44 region for mutations by direct sequencing of genomic DNA from affected individuals and controls. This resulted in the identification of four distinct mutations in a gene that segregated with the disorder in three families with FCAS and one family with MWS. This gene, called CIAS1, is expressed in peripheral blood leukocytes and encodes a protein with a pyrin domain, a nucleotide-binding site (NBS, NACHT subfamily) domain and a leucine-rich repeat (LRR) motif region, suggesting a role in the regulation of inflammation and apoptosis.

Proceedings Article
13 Aug 2001
TL;DR: This article presents a new technique, called "backscatter analysis," that provides a conservative estimate of worldwide denial-of-service activity; the authors believe it is the first to provide quantitative estimates of Internet-wide denial-of-service activity.
Abstract: In this paper, we seek to answer a simple question: "How prevalent are denial-of-service attacks in the Internet today?". Our motivation is to understand quantitatively the nature of the current threat as well as to enable longer-term analyses of trends and recurring patterns of attacks. We present a new technique, called "backscatter analysis", that provides an estimate of worldwide denial-of-service activity. We use this approach on three week-long datasets to assess the number, duration and focus of attacks, and to characterize their behavior. During this period, we observe more than 12,000 attacks against more than 5,000 distinct targets, ranging from well-known e-commerce companies such as Amazon and Hotmail to small foreign ISPs and dial-up connections. We believe that our work is the only publicly available data quantifying denial-of-service activity in the Internet.
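A minimal sketch of the extrapolation idea behind backscatter analysis, assuming (as the technique requires) that attackers spoof source addresses uniformly at random: a monitor covering n of the 2^32 IPv4 addresses sees roughly a fraction n/2^32 of a victim's backscatter, so observed counts can be scaled up. The monitored-prefix size and packet count below are illustrative, not the study's data.

```python
# Minimal sketch of the extrapolation behind backscatter analysis, assuming attackers
# spoof source addresses uniformly at random over the IPv4 space: a monitor covering
# n of the 2^32 addresses expects to see a fraction n / 2^32 of a victim's backscatter.
# The monitored-prefix size and observed count below are illustrative placeholders.

IPV4_SPACE = 2 ** 32

def estimated_attack_packets(observed_backscatter: int, monitored_addresses: int) -> float:
    """Scale observed backscatter packets up to an estimate of packets sent at the victim."""
    return observed_backscatter * IPV4_SPACE / monitored_addresses

monitored = 2 ** 24                      # e.g., a /8 network's worth of monitored addresses
observed = 500                           # backscatter packets attributed to one victim
print(f"estimated attack packets ~ {estimated_attack_packets(observed, monitored):,.0f}")
```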

Journal ArticleDOI
TL;DR: While the PPI model based on the effects of direct DA agonists is the most well-validated for the identification of known antipsychotic drugs, the isolation rearing model also appears to be sensitive to both typical and atypical antipsychotics, and the 5-HT PPI model is less generally sensitive to antipsychotic medications, but can provide insight into the contribution of serotonergic systems to the actions of newer antipsychotics that act upon multiple receptors.
Abstract: Rationale: Patients with schizophrenia exhibit deficits in an operational measure of sensorimotor gating: prepulse inhibition (PPI) of startle. Similar deficits in PPI are produced in rats by pharmacological or developmental manipulations. These experimentally induced PPI deficits in rats are clearly not animal models of schizophrenia per se, but appear to provide models of sensorimotor gating deficits in schizophrenia patients that have face, predictive, and construct validity. In rodents, disruptions in PPI of startle are produced by: stimulation of D2 dopamine (DA) receptors, produced by amphetamine or apomorphine; activation of serotonergic systems, produced by serotonin (5-HT) releasers or direct agonists at multiple serotonin receptors; and blockade of N-methyl-D-aspartate (NMDA) receptors, produced by drugs such as phencyclidine (PCP). Accordingly, dopaminergic, serotonergic, and glutamatergic models of disrupted PPI have evolved and have been applied to the identification of potential antipsychotic treatments. In addition, some developmental manipulations, such as isolation rearing, have provided non-pharmacological animal models of the PPI deficits seen in schizophrenia. Objective: This review summarizes and evaluates studies assessing the effects of systemic drug administrations on PPI in rats. Methods: Studies examining systemic drug effects on PPI in rats prior to January 15, 2001 were compiled and organized into six annotated appendices. Based on this catalog of studies, the specific advantages and disadvantages of each of the four main PPI models used in the study of antipsychotic drugs were critically evaluated. Results: Despite some notable inconsistencies, the literature provides strong support for significant disruptions in PPI in rats produced by DA agonists, 5-HT2 agonists, NMDA antagonists, and isolation rearing. Each of these models exhibits sensitivity to at least some antipsychotic medications. While the PPI model based on the effects of direct DA agonists is the most well-validated for the identification of known antipsychotics, the isolation rearing model also appears to be sensitive to both typical and atypical antipsychotics. The 5-HT PPI model is less generally sensitive to antipsychotic medications, but can provide insight into the contribution of serotonergic systems to the actions of newer antipsychotics that act upon multiple receptors. The deficits in PPI produced by NMDA antagonists appear to be more sensitive to clozapine-like atypical antipsychotics than to typical antipsychotics. Hence, despite some exceptions to this generalization, the NMDA PPI model might aid in the identification of novel or atypical antipsychotic medications. Conclusions: Studies of drug effects on PPI in rats have generated four distinctive models that have utility in the identification of antipsychotic medications. Because each of these models has specific advantages and disadvantages, the choice of model to be used depends upon the question being addressed. This review should help to guide such decisions.

Journal ArticleDOI
TL;DR: This work gives a generalized reduction, based on an algorithm that represents an arbitrary k-CNF formula as a disjunction of 2^(εn) k-CNF formulas that are sparse (that is, each disjunct has O(n) clauses), and shows that Circuit-SAT is SERF-complete for all NP-search problems.

Journal ArticleDOI
TL;DR: This work abandons the classical "overlap-layout-consensus" approach in favor of a new EULER algorithm that, for the first time, resolves the 20-year-old "repeat problem" in fragment assembly.
Abstract: For the last 20 years, fragment assembly in DNA sequencing followed the "overlap-layout-consensus" paradigm that is used in all currently available assembly tools. Although this approach proved useful in assembling clones, it faces difficulties in genomic shotgun assembly. We abandon the classical "overlap-layout-consensus" approach in favor of a new EULER algorithm that, for the first time, resolves the 20-year-old "repeat problem" in fragment assembly. Our main result is the reduction of the fragment assembly to a variation of the classical Eulerian path problem that allows one to generate accurate solutions of large-scale sequencing problems. EULER, in contrast to the Celera assembler, does not mask such repeats but uses them instead as a powerful fragment assembly tool.
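A toy sketch of the underlying idea, not the authors' EULER implementation: break reads into k-mers, build a graph on (k-1)-mers with one edge per distinct k-mer, and spell a reconstruction from an Eulerian path (Hierholzer's algorithm). Error correction, repeat multiplicities, and mate-pair handling, which are where the real work lies, are omitted; the reads, k, and helper name are invented for the example.

```python
# Toy sketch of the Eulerian-path formulation (not the authors' EULER code): break reads
# into k-mers, build a graph on (k-1)-mers with one edge per distinct k-mer, and spell a
# sequence from an Eulerian path. Error correction and repeat handling are omitted.
from collections import defaultdict

def eulerian_assembly(reads, k=4):
    kmers = {read[i:i + k] for read in reads for i in range(len(read) - k + 1)}
    graph = defaultdict(list)                      # (k-1)-mer -> successor (k-1)-mers
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for kmer in sorted(kmers):                     # sorted only for reproducibility
        u, v = kmer[:-1], kmer[1:]
        graph[u].append(v)
        out_deg[u] += 1
        in_deg[v] += 1

    # Start where out-degree exceeds in-degree (the path's source), else anywhere.
    start = next((n for n in graph if out_deg[n] - in_deg[n] == 1), next(iter(graph)))

    # Hierholzer's algorithm for an Eulerian path.
    stack, path = [start], []
    while stack:
        node = stack[-1]
        if graph[node]:
            stack.append(graph[node].pop())
        else:
            path.append(stack.pop())
    path.reverse()
    return path[0] + "".join(node[-1] for node in path[1:])

# Overlapping error-free reads from the toy string "ATGGCGTGCA" (illustrative only).
print(eulerian_assembly(["ATGGCGT", "GGCGTGC", "CGTGCA"], k=4))
```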

Journal ArticleDOI
01 Jan 2001
TL;DR: In this paper, two separate sets of forecasts of airline passenger data have been combined to form a composite set of forecasts, and different methods of deriving these weights have been examined.
Abstract: Two separate sets of forecasts of airline passenger data have been combined to form a composite set of forecasts. The main conclusion is that the composite set of forecasts can yield lower mean-square error than either of the original forecasts. Past errors of each of the original forecasts are used to determine the weights to attach to these two original forecasts in forming the combined forecasts, and different methods of deriving these weights are examined.
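A minimal sketch of one weighting rule of the kind the abstract describes: weights set from past squared errors (here, inverse mean-squared-error weights that ignore error correlation), then applied to a held-out period. All series are synthetic, and this is one simple scheme in the spirit of those examined, not a reproduction of the paper's analysis.

```python
# Minimal sketch of combining two forecasts with weights set from past errors, in the
# spirit of the abstract: inverse mean-squared-error weights that ignore error
# correlation (one simple scheme of the kind examined). All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
actual = 100 + np.cumsum(rng.normal(size=60))          # a synthetic "passenger" series
forecast_a = actual + rng.normal(scale=3.0, size=60)   # a noisier forecaster
forecast_b = actual + rng.normal(scale=1.5, size=60)   # a more accurate forecaster

train = slice(0, 40)                                   # past period used to set the weight
mse_a = np.mean((actual[train] - forecast_a[train]) ** 2)
mse_b = np.mean((actual[train] - forecast_b[train]) ** 2)
w = mse_b / (mse_a + mse_b)                            # weight on forecast_a

test = slice(40, 60)
combined = w * forecast_a[test] + (1 - w) * forecast_b[test]
for name, f in [("A", forecast_a[test]), ("B", forecast_b[test]), ("combined", combined)]:
    print(name, np.mean((actual[test] - f) ** 2).round(2))
# The combined forecast often has a lower mean-square error than either input alone.
```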