
Showing papers by "Centre national de la recherche scientifique" published in 2003


Journal ArticleDOI
TL;DR: This work has used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches.
Abstract: The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than that of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/. (Algorithm; computer simulations; maximum likelihood; phylogeny; rbcL; RDPII project.) The size of homologous sequence data sets has increased dramatically in recent years, and many of these data sets now involve several hundreds of taxa. Moreover, current probabilistic sequence evolution models (Swofford et al., 1996; Page and Holmes, 1998), notably those including rate variation among sites (Uzzell and Corbin, 1971; Jin and Nei, 1990; Yang, 1996), require an increasing number of calculations. Therefore, the speed of phylogeny reconstruction methods is becoming a significant requirement, and good compromises between speed and accuracy must be found. The maximum likelihood (ML) approach is especially accurate for building molecular phylogenies. Felsenstein (1981) brought this framework to nucleotide-based phylogenetic inference, and it was later also applied to amino acid sequences (Kishino et al., 1990). Several variants were proposed, most notably the Bayesian methods (Rannala and Yang, 1996; and see below) and the discrete Fourier analysis of Hendy et al. (1994). Numerous computer studies (Huelsenbeck and Hillis, 1993; Kuhner and Felsenstein, 1994; Huelsenbeck, 1995; Rosenberg and Kumar, 2001; Ranwez and Gascuel, 2002) have shown that ML programs can recover the correct tree from simulated data sets more frequently than other methods can. Another important advantage of the ML approach is the ability to compare different trees and evolutionary models within a statistical framework (see Whelan et al., 2001, for a review). However, like all optimality-criterion-based phylogenetic reconstruction approaches, ML is hampered by computational difficulties, making it impossible to obtain the optimal tree with certainty from even moderate data sets (Swofford et al., 1996). Therefore, all practical methods rely on heuristics that obtain near-optimal trees in reasonable computing time.
Moreover, the computation problem is especially difficult with ML, because the tree likelihood depends not only on the tree topology but also on numerical parameters, including branch lengths. Even computing the optimal values of these parameters on a single tree is not an easy task, particularly because of possible local optima (Chor et al., 2000). The usual heuristic method, implemented in the popular PHYLIP (Felsenstein, 1993) and PAUP* (Swofford, 1999) packages, is based on hill climbing. It combines stepwise insertion of taxa in a growing tree and topological rearrangement. For each possible insertion position and rearrangement, the branch lengths of the resulting tree are optimized and the tree likelihood is computed. When the rearrangement improves the current tree or when the insertion position is the best among all possible positions, the corresponding tree becomes the new current tree. Simple rearrangements are used during tree growing, namely "nearest neighbor interchanges" (see below), while more intense rearrangements can be used once all taxa have been inserted. The procedure stops when no rearrangement improves the current best tree. Despite significant decreases in computing times, notably in fastDNAml (Olsen et al., 1994), this heuristic becomes impracticable with several hundreds of taxa. This is mainly due to the two-level strategy, which separates branch-length and tree-topology optimization. Indeed, most calculations are done to optimize the branch lengths and evaluate the likelihood of trees that are finally rejected. New methods have thus been proposed. Strimmer and von Haeseler (1996) and others have assembled four-taxon (quartet) trees inferred by ML in order to reconstruct a complete tree. However, the results of this approach have not been very satisfactory to date (Ranwez and Gascuel, 2001). Ota and Li (2000, 2001) described
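The hill-climbing loop described above is easy to picture in code. Below is a toy Python sketch of the control flow only (greedily accepting likelihood-improving rearrangements until a local optimum is reached); the tuple-based "tree", the nni_neighbors move generator and the log_likelihood scorer are hypothetical stand-ins for real phylogenetic routines, not PHYML's implementation, and PHYML's distinguishing feature (re-fitting branch lengths simultaneously with each topology move) is not modeled here.

def hill_climb(tree, nni_neighbors, log_likelihood, max_iter=1000):
    # Greedy search: score every rearrangement of the current tree and
    # keep the best improving one; stop when no move raises the likelihood.
    best, best_score = tree, log_likelihood(tree)
    for _ in range(max_iter):
        improved = False
        for candidate in nni_neighbors(best):
            score = log_likelihood(candidate)
            if score > best_score:
                best, best_score, improved = candidate, score, True
        if not improved:
            break
    return best, best_score

# Toy demo: "trees" are tuples of branch lengths, moves nudge one length,
# and the "likelihood" is a made-up concave function, just to run the loop.
def nni_neighbors(t):
    for i in range(len(t)):
        for d in (-0.1, 0.1):
            yield t[:i] + (max(t[i] + d, 0.0),) + t[i + 1:]

log_likelihood = lambda t: -sum((x - 0.4) ** 2 for x in t)
print(hill_climb((1.0, 0.2, 0.7), nni_neighbors, log_likelihood))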

16,261 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the properties of the host galaxies of 22 623 narrow-line active galactic nuclei (AGN) with 0.02 < z < 0.3 selected from a complete sample of 122 808 galaxies from the Sloan Digital Sky Survey.
Abstract: We examine the properties of the host galaxies of 22 623 narrow-line active galactic nuclei (AGN) with 0.02 < z < 0.3 selected from a complete sample of 122 808 galaxies from the Sloan Digital Sky Survey. We focus on the luminosity of the [O III] λ5007 emission line as a tracer of the strength of activity in the nucleus. We study how AGN host properties compare with those of normal galaxies and how they depend on L[O III]. We find that AGN of all luminosities reside almost exclusively in massive galaxies and have distributions of sizes, stellar surface mass densities and concentrations that are similar to those of ordinary early-type galaxies in our sample. The host galaxies of low-luminosity AGN have stellar populations similar to normal early types. The hosts of high-luminosity AGN have much younger mean stellar ages. The young stars are not preferentially located near the nucleus of the galaxy, but are spread out over scales of at least several kiloparsecs. A significant fraction of high-luminosity AGN have strong Hδ absorption-line equivalent widths, indicating that they experienced a burst of star formation in the recent past. We have also examined the stellar populations of the host galaxies of a sample of broad-line AGN. We conclude that there is no significant difference in stellar content between type 2 Seyfert hosts and quasars (QSOs) with the same [O III] luminosity and redshift. This establishes that a young stellar population is a general property of AGN with high [O III] luminosities.

3,781 citations


Journal ArticleDOI
TL;DR: This work summarizes the Systems Biology Markup Language (SBML) Level 1, a free, open, XML-based format for representing biochemical reaction networks, a software-independent language for describing models common to research in many areas of computational biology.
Abstract: Motivation: Molecular biotechnology now makes it possible to build elaborate systems models, but the systems biology community needs information standards if models are to be shared, evaluated and developed cooperatively. Results: We summarize the Systems Biology Markup Language (SBML) Level 1, a free, open, XML-based format for representing biochemical reaction networks. SBML is a software-independent language for describing models common to research in many areas of computational biology, including cell signaling pathways, metabolic pathways, gene regulation, and others. Availability: The specification of SBML Level 1 is freely available from http://www.sbml.org/.
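To make the kind of XML encoding involved concrete, the Python sketch below emits a minimal reaction network (S1 converted to S2) in an SBML-Level-1-like layout. The element and attribute names are simplified from memory and should be checked against the specification at http://www.sbml.org/; this illustrates the document shape, it is not a validated SBML writer.

import xml.etree.ElementTree as ET

# Root element carrying the SBML level/version, then one model
sbml = ET.Element("sbml", level="1", version="2")
model = ET.SubElement(sbml, "model", name="toy_pathway")

# Declare the chemical species taking part in the network
species = ET.SubElement(model, "listOfSpecies")
for name in ("S1", "S2"):
    ET.SubElement(species, "species", name=name, compartment="cell")

# One irreversible reaction consuming S1 and producing S2
reactions = ET.SubElement(model, "listOfReactions")
rxn = ET.SubElement(reactions, "reaction", name="conversion", reversible="false")
ET.SubElement(ET.SubElement(rxn, "listOfReactants"), "speciesReference", species="S1")
ET.SubElement(ET.SubElement(rxn, "listOfProducts"), "speciesReference", species="S2")

print(ET.tostring(sbml, encoding="unicode"))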

3,205 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used Monte Carlo realizations of different star formation histories, including starbursts of varying strength and a range of metallicities, to constrain the mean stellar ages of galaxies and the fractional stellar mass formed in bursts over the past few Gyr.
Abstract: We develop a new method to constrain the star formation histories, dust attenuation and stellar masses of galaxies. It is based on two stellar absorption-line indices, the 4000-Å break strength and the Balmer absorption-line index HδA. Together, these indices allow us to constrain the mean stellar ages of galaxies and the fractional stellar mass formed in bursts over the past few Gyr. A comparison with broad-band photometry then yields estimates of dust attenuation and of stellar mass. We generate a large library of Monte Carlo realizations of different star formation histories, including starbursts of varying strength and a range of metallicities. We use this library to generate median likelihood estimates of burst mass fractions, dust attenuation strengths, stellar masses and stellar mass-to-light ratios for a sample of 122 808 galaxies drawn from the Sloan Digital Sky Survey. The typical 95 per cent confidence range in our estimated stellar masses is ±40 per cent. We study how the stellar mass-to-light ratios of galaxies vary as a function of absolute magnitude, concentration index and photometric passband and how dust attenuation varies as a function of absolute magnitude and 4000-Å break strength. We also calculate how the total stellar mass of the present Universe is distributed over galaxies as a function of their mass, size, concentration, colour, burst mass fraction and surface mass density. We find that most of the stellar mass in the local Universe resides in galaxies that have, to within a factor of approximately 2, stellar masses ∼5 × 10^10 M⊙, half-light radii ∼3 kpc and half-light surface mass densities ∼10^9 M⊙ kpc^−2. The distribution of Dn(4000) is strongly bimodal, showing a clear division between galaxies dominated by old stellar populations and galaxies with more recent star formation.
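The "median likelihood" machinery is generic Bayesian model-library fitting and can be sketched compactly. The toy Python below fabricates a Monte Carlo library, weights each model by its likelihood given the two observed indices (gaussian errors assumed), and reads parameter estimates off the weighted percentiles; all variable names, index values and error bars are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical model library: each entry has index values and a mass-to-light ratio
lib_d4000 = rng.uniform(1.0, 2.2, 5000)    # 4000-Å break strength of each model
lib_hdelta = rng.uniform(-2.0, 8.0, 5000)  # HδA index of each model
lib_ml = rng.uniform(0.5, 8.0, 5000)       # stellar M/L of each model (toy values)

obs_d4000, obs_hdelta = 1.8, 2.0           # observed indices for one galaxy
sig_d4000, sig_hdelta = 0.05, 0.5          # assumed measurement errors

# Likelihood weight of each library model given the two observed indices
chi2 = ((lib_d4000 - obs_d4000) / sig_d4000) ** 2 \
     + ((lib_hdelta - obs_hdelta) / sig_hdelta) ** 2
w = np.exp(-0.5 * chi2)

# Median-likelihood estimate = 50th percentile of the likelihood-weighted
# M/L distribution; the 2.5th/97.5th percentiles give a 95 per cent range.
order = np.argsort(lib_ml)
cdf = np.cumsum(w[order]) / np.sum(w)
ml_med, ml_lo, ml_hi = (lib_ml[order][np.searchsorted(cdf, q)] for q in (0.5, 0.025, 0.975))
print(ml_med, (ml_lo, ml_hi))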

2,407 citations


Journal ArticleDOI
TL;DR: In this article, new constraints on evolution parameters obtained from the Besançon model of population synthesis and analysis of optical and near-infrared star counts are presented, in agreement with Hipparcos results and the observed rotation curve.
Abstract: Since the Hipparcos mission and recent large-scale surveys in the optical and the near-infrared, new constraints have been obtained on the structure and evolution history of the Milky Way. The population synthesis approach is a useful tool to interpret such data sets and to test scenarios of evolution of the Galaxy. We present here new constraints on evolution parameters obtained from the Besançon model of population synthesis and analysis of optical and near-infrared star counts. The Galactic potential is computed self-consistently, in agreement with Hipparcos results and the observed rotation curve. Constraints are placed on the outer bulge structure, the warped and flared disc, the thick disc and the spheroid populations. The model is tuned to produce reliable predictions in the visible and the near-infrared in wide photometric bands from U to K. Finally, we describe applications such as photometric and astrometric simulations and a new classification tool based on a Bayesian probability estimator, which could be used in the framework of Virtual Observatories. As examples, samples of simulated star counts at different wavelengths and in different directions are also given.
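The Bayesian classification tool mentioned at the end presumably rests, in schematic form, on the standard posterior rule for assigning a star with observables $\mathbf{x}$ (magnitudes, colours, proper motions) to a population $k$ (thin disc, thick disc, spheroid, ...):

$$ P(k \mid \mathbf{x}) = \frac{P(\mathbf{x} \mid k)\,P(k)}{\sum_j P(\mathbf{x} \mid j)\,P(j)} $$

where the likelihoods $P(\mathbf{x} \mid k)$ and priors $P(k)$ are supplied by the synthetic model itself. This is the generic form of such an estimator, not the paper's exact formulation.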

2,259 citations


Journal ArticleDOI
TL;DR: It is suggested that a biopsy length of at least 25 mm is necessary to evaluate fibrosis accurately with a semiquantitative score, and that variability in the distribution of fibrosis within the liver is a major limitation when using more accurate methods such as automated image analysis.

2,207 citations


Journal ArticleDOI
01 Dec 2003-Ecology
TL;DR: In this paper, the authors examine the relationship between climate and biodiversity and conclude that the interaction between water and energy, either directly or indirectly, provides a strong explanation for globally extensive plant and animal diversity gradients, but for animals there also is a latitudinal shift in the relative importance of ambient energy vs. water moving from the poles to the equator.
Abstract: It is often claimed that we do not understand the forces driving the global diversity gradient. However, an extensive literature suggests that contemporary climate constrains terrestrial taxonomic richness over broad geographic extents. Here, we review the empirical literature to examine the nature and form of the relationship between climate and richness. Our goals were to document the support for the climatically based energy hypothesis, and within the constraints imposed by correlative analyses, to evaluate two versions of the hypothesis: the productivity and ambient energy hypotheses. Focusing on studies extending over 800 km, we found that measures of energy, water, or water-energy balance explain spatial variation in richness better than other climatic and non-climatic variables in 82 of 85 cases. Even when considered individually and in isolation, water/energy variables explain on average over 60% of the variation in the richness of a wide range of plant and animal groups. Further, water variables usually represent the strongest predictors in the tropics, subtropics, and warm temperate zones, whereas energy variables (for animals) or water-energy variables (for plants) dominate in high latitudes. We conclude that the interaction between water and energy, either directly or indirectly (via plant productivity), provides a strong explanation for globally extensive plant and animal diversity gradients, but for animals there also is a latitudinal shift in the relative importance of ambient energy vs. water moving from the poles to the equator. Although contemporary climate is not the only factor influencing species richness and may not explain the diversity pattern for all taxonomic groups, it is clear that understanding water-energy dynamics is critical to future biodiversity research. Analyses that do not include water-energy variables are missing a key component for explaining broad-scale patterns of diversity.

2,069 citations


Journal ArticleDOI
13 Jun 2003-Cell
TL;DR: A distinct heterochromatic structure that accumulates in senescent human fibroblasts is described, which is designated senescence-associated heterochromatic foci (SAHF) and is associated with the stable repression of E2F target genes.

2,055 citations


Journal ArticleDOI
30 Oct 2003-Neuron
TL;DR: In this paper, the authors performed an fMRI study in which participants both inhaled odorants producing a strong feeling of disgust and observed video clips showing the emotional facial expression of disgust; both conditions activated the same sites in the anterior insula and, to a lesser extent, in the anterior cingulate cortex.

1,904 citations


Journal ArticleDOI
TL;DR: It is found that, while numerous studies have examined the impacts of invasions on plant diversity and composition, less than 5% test whether these effects arise through competition, allelopathy, alteration of ecosystem variables or other processes.
Abstract: Although the impacts of exotic plant invasions on community structure and ecosystem processes are well appreciated, the pathways or mechanisms that underlie these impacts are poorly understood. Better exploration of these processes is essential to understanding why exotic plants impact only certain systems, and why only some invaders have large impacts. Here, we review over 150 studies to evaluate the mechanisms underlying the impacts of exotic plant invasions on plant and animal community structure, nutrient cycling, hydrology and fire regimes. We find that, while numerous studies have examined the impacts of invasions on plant diversity and composition, less than 5% test whether these effects arise through competition, allelopathy, alteration of ecosystem variables or other processes. Nonetheless, competition was often hypothesized, and nearly all studies competing native and alien plants against each other found strong competitive effects of exotic species. In contrast to studies of the impacts on plant community structure and higher trophic levels, research examining impacts on nitrogen cycling, hydrology and fire regimes is generally highly mechanistic, often motivated by specific invader traits. We encourage future studies that link impacts on community structure to ecosystem processes, and relate the controls over invasibility to the controls over impact.

1,634 citations


Journal ArticleDOI
TL;DR: In this article, the most important developments in multivariate ARCH-type modeling are surveyed, including model specifications, inference methods, and the main areas of application in financial econometrics.
Abstract: This paper surveys the most important developments in multivariate ARCH-type modelling. It reviews the model specifications, the inference methods, and the main areas of application of these models in financial econometrics.
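For readers unfamiliar with the model class, one widely used specification covered by such surveys is the BEKK(1,1) model of Engle and Kroner (1995), in which the conditional covariance matrix $H_t$ of the return innovations $\varepsilon_t$ evolves as

$$ H_t = C'C + A'\varepsilon_{t-1}\varepsilon_{t-1}'A + B'H_{t-1}B $$

with $C$ triangular and $A$, $B$ square parameter matrices; the quadratic forms keep $H_t$ positive semi-definite by construction, which is one of the central difficulties of multivariate ARCH modelling.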

Journal ArticleDOI
TL;DR: Overall, the analyses point to several important sources of variation in δ15N enrichment and suggest that the most important of them are the main biochemical form of nitrogen excretion and nutritional status.
Abstract: Measurements of δ15N of consumers are usually higher than those of their diet. This general pattern is widely used to make inferences about trophic relationships in ecological studies, although the underlying mechanisms causing the pattern are poorly understood. However, there can be substantial variation in consumer-diet δ15N enrichment within this general pattern. We conducted an extensive literature review, which yielded 134 estimates from controlled studies of consumer-diet δ15N enrichment, to test the significance of several potential sources of variation by means of meta-analyses. We found patterns related to processes of nitrogen assimilation and excretion. There was a significant effect of the main biochemical form of nitrogenous waste: ammonotelic organisms show lower δ15N enrichment than ureotelic or uricotelic organisms. There were no significant differences between animals feeding on plant food, animal food, or manufactured mixtures, but detritivores yielded significantly lower estimates of enrichment. δ15N enrichment was found to increase significantly with the C:N ratio of the diet, suggesting that a nitrogen-poor diet can have an effect similar to that already documented for fasting organisms. There were also differences among taxonomic classes: molluscs and crustaceans generally yielded lower δ15N enrichment. The lower δ15N enrichment might be related to the fact that molluscs and crustaceans excrete mainly ammonia, or to the fact that many were detritivores. Organisms inhabiting marine environments yielded significantly lower estimates of δ15N enrichment than organisms inhabiting terrestrial or freshwater environments, a pattern that was influenced by the number of marine, ammonotelic, crustaceans and molluscs. Overall, our analyses point to several important sources of variation in δ15N enrichment and suggest that the most important of them are the main biochemical form of nitrogen excretion and nutritional status. The variance of estimates of δ15N enrichment, as well as the fact that enrichment may be different in certain groups of organisms should be taken into account in statistical approaches for studying diet and trophic relationships.
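For reference, the quantities being compared follow the standard stable-isotope conventions:

$$ \delta^{15}\mathrm{N} = \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1\right) \times 1000\ \text{‰}, \qquad \Delta\delta^{15}\mathrm{N} = \delta^{15}\mathrm{N}_{\mathrm{consumer}} - \delta^{15}\mathrm{N}_{\mathrm{diet}} $$

where $R$ denotes the $^{15}\mathrm{N}/^{14}\mathrm{N}$ ratio (atmospheric N$_2$ being the nitrogen standard); the meta-analysis above concerns the distribution of the enrichment $\Delta\delta^{15}\mathrm{N}$ across the 134 controlled estimates.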

Journal ArticleDOI
19 Jun 2003-Nature
TL;DR: It is shown that magnetic exchange coupling induced at the interface between ferromagnetic and antiferromagnetic systems can provide an extra source of anisotropy, leading to magnetization stability.
Abstract: Interest in magnetic nanoparticles has increased in the past few years by virtue of their potential for applications in fields such as ultrahigh-density recording and medicine. Most applications rely on the magnetic order of the nanoparticles being stable with time. However, with decreasing particle size the magnetic anisotropy energy per particle responsible for holding the magnetic moment along certain directions becomes comparable to the thermal energy. When this happens, the thermal fluctuations induce random flipping of the magnetic moment with time, and the nanoparticles lose their stable magnetic order and become superparamagnetic. Thus, the demand for further miniaturization comes into conflict with the superparamagnetism caused by the reduction of the anisotropy energy per particle: this constitutes the so-called 'superparamagnetic limit' in recording media. Here we show that magnetic exchange coupling induced at the interface between ferromagnetic and antiferromagnetic systems can provide an extra source of anisotropy, leading to magnetization stability. We demonstrate this principle for ferromagnetic cobalt nanoparticles of about 4 nm in diameter that are embedded in either a paramagnetic or an antiferromagnetic matrix. Whereas the cobalt cores lose their magnetic moment at 10 K in the first system, they remain ferromagnetic up to about 290 K in the second. This behaviour is ascribed to the specific way ferromagnetic nanoparticles couple to an antiferromagnetic matrix.
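The "superparamagnetic limit" invoked here is conventionally quantified by the Néel-Arrhenius relaxation time of a single-domain particle,

$$ \tau = \tau_0 \exp\!\left(\frac{KV}{k_B T}\right) $$

where $K$ is the anisotropy energy density, $V$ the particle volume and $\tau_0$ (roughly $10^{-9}$ s) an attempt time: once the barrier $KV$ falls to within a few tens of $k_B T$, $\tau$ drops below measurement timescales and the moment fluctuates freely. Exchange coupling to the antiferromagnetic matrix effectively raises this barrier, which is why the embedded cobalt particles remain ferromagnetic to much higher temperatures.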

Journal ArticleDOI
06 Nov 2003-Nature
TL;DR: Analysis of this defence by molecular genetics has now provided a global picture of the mechanisms by which this insect senses infection, discriminates between various classes of microorganisms and induces the production of effector molecules, among which antimicrobial peptides are prominent.
Abstract: Drosophila mounts a potent host defence when challenged by various microorganisms. Analysis of this defence by molecular genetics has now provided a global picture of the mechanisms by which this insect senses infection, discriminates between various classes of microorganisms and induces the production of effector molecules, among which antimicrobial peptides are prominent. An unexpected result of these studies was the discovery that most of the genes involved in the Drosophila host defence are homologous or very similar to genes implicated in mammalian innate immune defences. Recent progress in research on Drosophila immune defence provides evidence for similarities and differences between Drosophila immune responses and mammalian innate immunity.

Journal ArticleDOI
TL;DR: It is shown that specific ‘silencing’ of PHD2 with short interfering RNAs is sufficient to stabilize and activate HIF‐1α in normoxia in all the human cells investigated, concluding that, in vivo, PHDs have distinct assigned functions.
Abstract: Hypoxia-inducible factor (HIF), a transcriptional complex conserved from Caenorhabditis elegans to vertebrates, plays a pivotal role in cellular adaptation to low oxygen availability. In normoxia, the HIF-α subunits are targeted for destruction by prolyl hydroxylation, a specific modification that provides recognition for the E3 ubiquitin ligase complex containing the von Hippel–Lindau tumour suppressor protein (pVHL). Three HIF prolyl-hydroxylases (PHD1, 2 and 3) were identified recently in mammals and shown to hydroxylate HIF-α subunits. Here we show that specific ‘silencing’ of PHD2 with short interfering RNAs is sufficient to stabilize and activate HIF-1α in normoxia in all the human cells investigated. ‘Silencing’ of PHD1 and PHD3 has no effect on the stability of HIF-1α either in normoxia or upon re-oxygenation of cells briefly exposed to hypoxia. We therefore conclude that, in vivo, PHDs have distinct assigned functions, PHD2 being the critical oxygen sensor setting the low steady-state levels of HIF-1α in normoxia. Interestingly, PHD2 is upregulated by hypoxia, providing an HIF-1-dependent auto-regulatory mechanism driven by the oxygen tension.

Journal ArticleDOI
11 Dec 2003-Nature
TL;DR: It is concluded that rising temperature since the mid-1980s has modified the plankton ecosystem in a way that reduces the survival of young cod.
Abstract: The Atlantic cod (Gadus morhua L.) has been overexploited in the North Sea since the late 1960s and great concern has been expressed about the decline in cod biomass and recruitment. Here we show that, in addition to the effects of overfishing, fluctuations in plankton have resulted in long-term changes in cod recruitment in the North Sea (bottom-up control). Survival of larval cod is shown to depend on three key biological parameters of their prey: the mean size of prey, seasonal timing and abundance. We suggest a mechanism, involving the match/mismatch hypothesis, by which variability in temperature affects larval cod survival and conclude that rising temperature since the mid-1980s has modified the plankton ecosystem in a way that reduces the survival of young cod.

Journal ArticleDOI
16 Jan 2003-Nature
TL;DR: This work proposes and experimentally demonstrate a quantum key distribution protocol based on the transmission of gaussian-modulated coherent states and shot-noise-limited homodyne detection, which is in principle secure for any value of the line transmission, against gaussian individual attacks based on entanglement and quantum memories.
Abstract: Quantum continuous variables are being explored as an alternative means to implement quantum key distribution, which is usually based on single photon counting. The former approach is potentially advantageous because it should enable higher key distribution rates. Here we propose and experimentally demonstrate a quantum key distribution protocol based on the transmission of gaussian-modulated coherent states (consisting of laser pulses containing a few hundred photons) and shot-noise-limited homodyne detection; squeezed or entangled beams are not required. Complete secret key extraction is achieved using a reverse reconciliation technique followed by privacy amplification. The reverse reconciliation technique is in principle secure for any value of the line transmission, against gaussian individual attacks based on entanglement and quantum memories. Our table-top experiment yields a net key transmission rate of about 1.7 megabits per second for a loss-free line, and 75 kilobits per second for a line with losses of 3.1 dB. We anticipate that the scheme should remain effective for lines with higher losses, particularly because the present limitations are essentially technical, so that significant margin for improvement is available on both the hardware and software.
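A toy numerical model of the quadrature correlations at the heart of this protocol fits in a few lines of Python. It simulates Alice's gaussian modulation and Bob's shot-noise-limited homodyne measurement in shot-noise units; reconciliation, privacy amplification, excess noise and the eavesdropper are deliberately left out, and the parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
v_mod = 10.0   # Alice's modulation variance, in shot-noise units (assumed value)
T = 0.5        # line transmission, i.e. about 3 dB of losses

# Alice draws the quadrature of each coherent state from a gaussian;
# Bob's homodyne result is the attenuated signal plus unit shot noise.
x_alice = rng.normal(0.0, np.sqrt(v_mod), n)
x_bob = np.sqrt(T) * x_alice + rng.normal(0.0, 1.0, n)

# Correlated gaussian data shared by Alice and Bob, from which the key is
# distilled; the second number is the Shannon limit I(A;B) in bits per pulse.
rho = np.corrcoef(x_alice, x_bob)[0, 1]
print(rho, 0.5 * np.log2(1.0 / (1.0 - rho**2)))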

Journal ArticleDOI
TL;DR: In this paper, a general review of the advances in wide-bandgap semiconductor photodetectors is presented, including SiC, diamond, III-nitrides and ZnS.
Abstract: Industries such as the automotive, aerospace and military sectors, as well as environmental and biological research, have promoted the development of ultraviolet (UV) photodetectors capable of operating at high temperatures and in hostile environments. UV-enhanced Si photodiodes are hence giving way to a new generation of UV detectors fabricated from wide-bandgap semiconductors, such as SiC, diamond, III-nitrides, ZnS, ZnO, or ZnSe. This paper provides a general review of the latest progress in wide-bandgap semiconductor photodetectors.

Journal ArticleDOI
TL;DR: Infectious pseudo-particles generated by displaying unmodified and functional HCV glycoproteins onto retroviral and lentiviral core particles may mimic the early infection steps of parental HCV and will be suitable for the development of much needed new antiviral therapies.
Abstract: The study of hepatitis C virus (HCV), a major cause of chronic liver disease, has been hampered by the lack of a cell culture system supporting its replication. Here, we have successfully generated infectious pseudo-particles that were assembled by displaying unmodified and functional HCV glycoproteins onto retroviral and lentiviral core particles. The presence of a green fluorescent protein marker gene packaged within these HCV pseudo-particles allowed reliable and fast determination of infectivity mediated by the HCV glycoproteins. Primary hepatocytes as well as hepato-carcinoma cells were found to be the major targets of infection in vitro. High infectivity of the pseudo-particles required both E1 and E2 HCV glycoproteins, and was neutralized by sera from HCV-infected patients and by some anti-E2 monoclonal antibodies. In addition, these pseudo-particles allowed investigation of the role of putative HCV receptors. Although our results tend to confirm their involvement, they provide evidence that neither LDLr nor CD81 is sufficient to mediate HCV cell entry. Altogether, these studies indicate that these pseudo-particles may mimic the early infection steps of parental HCV and will be suitable for the development of much needed new antiviral therapies.

Journal ArticleDOI
TL;DR: Results from different approaches in understanding the integrative properties of neocortical neurons in the intact brain are summarized to help understand the complex interplay between the active properties of dendrites and how they convey discrete synaptic inputs to the soma.
Abstract: Intracellular recordings in vivo have shown that neocortical neurons are subjected to an intense synaptic bombardment in intact networks and are in a 'high-conductance' state. In vitro studies have shed light on the complex interplay between the active properties of dendrites and how they convey discrete synaptic inputs to the soma. Computational models have attempted to tie these results together and predicted that high-conductance states profoundly alter the integrative properties of cortical neurons, providing them with a number of computational advantages. Here, we summarize results from these different approaches, with the aim of understanding the integrative properties of neocortical neurons in the intact brain.
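A schematic single-compartment version of such a "point-conductance" description makes the key effect explicit. With leak, excitatory and inhibitory conductances, the membrane potential obeys

$$ C_m \frac{dV}{dt} = -g_L (V - E_L) - g_e(t)\,(V - E_e) - g_i(t)\,(V - E_i) $$

so the effective time constant is $\tau_{\mathrm{eff}} = C_m / (g_L + g_e(t) + g_i(t))$. When synaptic bombardment makes $g_e + g_i$ several times larger than $g_L$, $\tau_{\mathrm{eff}}$ shrinks and the cell's resting potential, input resistance and responsiveness all change; this is the generic form of such models, not a specific equation from the review.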

Journal ArticleDOI
24 Jan 2003-Cell
TL;DR: It is demonstrated that GNOM localizes to endosomes and is required for their structural integrity and suggested that ARF-GEFs regulate specific endosomal trafficking pathways.

Journal ArticleDOI
TL;DR: The PaD method was found to be the most useful as it gave the most complete results, followed by the Profile method that gave the contribution profile of the input variables, and the classical stepwise methods gave the poorest results.
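The PaD (partial derivatives) idea can be sketched in a few lines: differentiate the trained network's output with respect to each input and aggregate the squared derivatives over the data set. The toy Python below uses a random, untrained one-hidden-layer network purely to show the computation; the aggregation follows the usual PaD recipe, but the paper's exact implementation may differ.

import numpy as np

rng = np.random.default_rng(2)

# Stand-in for a trained 3-input, 4-hidden, 1-output MLP: random weights
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=4), 0.0

def forward(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def partials(x):
    # d(output)/d(input_i) through the tanh hidden layer (chain rule)
    h = np.tanh(W1 @ x + b1)
    return (W2 * (1.0 - h**2)) @ W1

# PaD contribution: mean squared partial derivative of each input over the data
X = rng.normal(size=(200, 3))
ssd = np.mean([partials(x) ** 2 for x in X], axis=0)
print("relative contribution of each input:", ssd / ssd.sum())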

Journal ArticleDOI
TL;DR: Using peptide mapping to determine 'active' antigen recognition residues, molecular modeling, and a molecular dynamics trajectory analysis, a peptide mimic of an anti-CD4 antibody is developed, containing antigen contact residues from multiple CDRs.

Journal ArticleDOI
TL;DR: It is concluded that rather than using a single modelling technique to predict the distribution of several species, it would be more reliable to use a framework assessing different models for each species and selecting the most accurate one using both evaluation methods and expert knowledge.
Abstract: A new computation framework (BIOMOD: BIOdiversity MODelling) is presented, which aims to maximize the predictive accuracy of current species distributions and the reliability of future potential distributions using different types of statistical modelling methods. BIOMOD capitalizes on the different techniques used in static modelling to provide spatial predictions. It computes, for each species and in the same package, the four most widely used modelling techniques in species predictions, namely Generalized Linear Models (GLM), Generalized Additive Models (GAM), Classification and Regression Tree analysis (CART) and Artificial Neural Networks (ANN). BIOMOD was applied to 61 species of trees in Europe using climatic quantities as explanatory variables of current distributions. On average, all the different modelling methods yielded very good agreement between observed and predicted distributions. However, the relative performance of different techniques was idiosyncratic across species, suggesting that the most accurate model varies between species. The results of this evaluation also highlight that slight differences between current predictions from different modelling techniques are exacerbated in future projections. Therefore, it is difficult to assess the reliability of alternative projections without validation techniques or expert opinion. It is concluded that rather than using a single modelling technique to predict the distribution of several species, it would be more reliable to use a framework assessing different models for each species and selecting the most accurate one using both evaluation methods and expert knowledge.
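The per-species selection logic is straightforward to sketch with off-the-shelf tools. The Python fragment below fits three of the four technique families on fabricated presence/absence data and keeps the one with the best held-out AUC; a GAM is omitted because scikit-learn has no native implementation, and nothing here reproduces BIOMOD itself.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))  # toy climatic predictors for one species
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, 500) > 0).astype(int)  # presence/absence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "GLM": LogisticRegression(),
    "CART": DecisionTreeClassifier(max_depth=4, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
}
auc = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
       for name, m in models.items()}
print(auc, "-> retain", max(auc, key=auc.get), "for this species")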

Journal ArticleDOI
29 Jan 2003
TL;DR: The improvements, difficulties, and successes that have occurred with the synchronous languages since then are discussed.
Abstract: Twelve years ago, Proceedings of the IEEE devoted a special section to the synchronous languages. This paper discusses the improvements, difficulties, and successes that have occurred with the synchronous languages since then. Today, synchronous languages have been established as a technology of choice for modeling, specifying, validating, and implementing real-time embedded applications. The paradigm of synchrony has emerged as an engineer-friendly design method based on mathematically sound tools.
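The synchrony paradigm itself is compact: a program is a stream function executed in discrete ticks, with the outputs and the next state computed atomically from the current inputs and state. The Python sketch below mimics that execution model for a rising-edge detector; the semantics are only illustrative, since real synchronous languages such as Lustre or Esterel compile such specifications with formal guarantees on timing and determinism.

# One synchronous "node": emits True exactly when its boolean input rises.
def edge_detector():
    prev = False
    def step(x):
        nonlocal prev
        out = x and not prev  # output for this tick
        prev = x              # state update, visible only at the next tick
        return out
    return step

step = edge_detector()
print([step(x) for x in [False, True, True, False, True]])
# -> [False, True, False, False, True]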

Journal ArticleDOI
06 Nov 2003-Nature
TL;DR: The bovine carrier structure, together with earlier biochemical results, suggests that transport substrates bind to the bottom of the cavity and that translocation results from a transient transition from a ‘pit’ to a ‘channel’ conformation.
Abstract: ATP, the principal energy currency of the cell, fuels most biosynthetic reactions in the cytoplasm by its hydrolysis into ADP and inorganic phosphate. Because resynthesis of ATP occurs in the mitochondrial matrix, ATP is exported into the cytoplasm while ADP is imported into the matrix. The exchange is accomplished by a single protein, the ADP/ATP carrier. Here we have solved

Journal ArticleDOI
10 Jan 2003-Science
TL;DR: Functional analysis of the S140G mutant revealed a gain-of-function effect on the KCNQ1/KCNE1 and the KCNQ1/KCNE2 currents, which contrasts with the dominant negative or loss-of-function effects of the KCNQ1 mutations previously identified in patients with long QT syndrome.
Abstract: Atrial fibrillation (AF) is a common cardiac arrhythmia whose molecular etiology is poorly understood. We studied a family with hereditary persistent AF and identified the causative mutation (S140G) in the KCNQ1 (KvLQT1) gene on chromosome 11p15.5. The KCNQ1 gene encodes the pore-forming alpha subunit of the cardiac I(Ks) channel (KCNQ1/KCNE1), the KCNQ1/KCNE2 and the KCNQ1/KCNE3 potassium channels. Functional analysis of the S140G mutant revealed a gain-of-function effect on the KCNQ1/KCNE1 and the KCNQ1/KCNE2 currents, which contrasts with the dominant negative or loss-of-function effects of the KCNQ1 mutations previously identified in patients with long QT syndrome. Thus, the S140G mutation is likely to initiate and maintain AF by reducing action potential duration and effective refractory period in atrial myocytes.

Journal ArticleDOI
TL;DR: In this article, the authors present a complete description of the CHOOZ experiment, including the source and detector, the calibration methods and stability checks, the event reconstruction procedures and the Monte Carlo simulation.
Abstract: This final article about the CHOOZ experiment presents a complete description of the $\bar{\nu}_e$ source and detector, the calibration methods and stability checks, the event reconstruction procedures and the Monte Carlo simulation. The data analysis, systematic effects and the methods used to reach our conclusions are fully discussed. Some new remarks are presented on the deduction of the confidence limits and on the correct treatment of systematic errors.


Journal ArticleDOI
TL;DR: In this paper, the authors examine the efficiency of different incentive schemes for the development of renewable energy sources, both from a theoretical point of view, by comparing price-based approaches with quantity-based approaches, and from a practical point of view, by looking at concrete examples of how these different instruments have been implemented.