
Showing papers by "University of Lorraine" published in 2008



01 Jan 2008
TL;DR: The enzymatic polymerization of rutin catalyzed by laccase from Trametes versicolor was investigated under different operating conditions of temperature, pH, solvent, enzyme and substrate concentrations, and the resulting oligomers were characterized by a solubility 4200 times higher than that of rutin.
Abstract: The enzymatic polymerization of rutin catalyzed by laccase from Trametes versicolor was investigated under different operating conditions of temperature, pH, solvent, enzyme and substrate concentrations. The highest weight-average molecular mass (Mw) was about 3900 g/mol. The highest masses were obtained for the lowest pH and temperature set points. The production of oligomers was favored when using a cosolvent with a high dielectric constant. MALDI-TOF analyses showed the presence of rutin oligomers with a polymerization degree of up to 6, resulting from simple bridges between rutin units. 1H-NMR analyses showed the presence of C-C or C-O linkages in the structure of the oligomers, involving both the sugar and phenolic parts of rutin. The oligomers were characterized by a solubility 4200 times higher than that of rutin. A molecular modeling study of the hexamer indicated a dense network of H-bonds with water molecules. Fractions enriched with oligomers were obtained by tangential diafiltration. The antioxidant activity of the oligomers was shown to decrease with Mw.

20 citations


Posted Content
TL;DR: In this paper, the authors introduce and construct a state-dependent counting and persistent random walk for predicting insured claims based on the current and past period claims, calculate the probability generating function of the number of claims over time, and are thereby able to compute its moments.
Abstract: The purpose of this paper is to introduce and construct a state-dependent counting and persistent random walk. Persistence is embedded in a Markov chain for predicting insured claims based on the current and past period claims. We calculate, for such a process, the probability generating function of the number of claims over time and are therefore able to compute its moments. Further, given the claims severity probability distribution, we provide the generating function of the claims process as well as the mean and the variance of the claims that an insurance firm faces over a given period of time in such circumstances. A number of results and applications are then outlined (such as a Compound Claim Persistence Process).
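As a rough illustration of the claim-persistence mechanism described above, the sketch below simulates a two-state Markov claim process in Python and estimates the moments of the claim count and of the compound total by Monte Carlo. The transition probabilities and the exponential severity law are illustrative assumptions, not parameters taken from the paper (which works analytically through the probability generating function).

```python
# Monte Carlo sketch of a persistent (Markov) claim-count process.
# Illustrative only: the transition probabilities and the exponential
# severity law are assumptions, not values taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

p_claim_after_claim = 0.40      # P(claim in t | claim in t-1)
p_claim_after_none  = 0.10      # P(claim in t | no claim in t-1)
mean_severity       = 1_000.0   # exponential severity scale (assumed)
T, n_paths          = 12, 100_000

counts = np.zeros(n_paths)
totals = np.zeros(n_paths)
state  = np.zeros(n_paths, dtype=bool)          # no claim initially

for _ in range(T):
    p = np.where(state, p_claim_after_claim, p_claim_after_none)
    claim = rng.random(n_paths) < p             # persistence: p depends on last period's state
    counts += claim
    totals += np.where(claim, rng.exponential(mean_severity, n_paths), 0.0)
    state = claim

print("E[N], Var[N]:", counts.mean(), counts.var())   # moments of the claim count
print("E[S], Var[S]:", totals.mean(), totals.var())   # moments of the compound claim total
```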

14 citations


Journal ArticleDOI
TL;DR: During later stages of adhesion, bacteria in heterogeneous cultures likely experience a lower electrostatic repulsion from already adhering bacteria than bacteria in homogeneous cultures, thus allowing a closer proximity of the bacteria with respect to each other, which ultimately leads to increased adhesion after 4 h.

13 citations


Journal ArticleDOI
TL;DR: It was found that the main contributions responsible for stabilization of the dicationic systems are the P and ES energies; in the monocationic systems, the CT stabilization is equally important.
Abstract: The interactions of a dioxadithia crown ether ligand with Li+, Na+, K+, Mg2+, Ca2+, and Zn2+ cations were investigated using density functional theory (DFT) modeling. The modeling was undertaken to...

13 citations


Proceedings ArticleDOI
01 Jan 2008
TL;DR: In this paper, French deverbal nouns in -OIR denoting entities are studied from the standpoint of constructional morphology, in order to determine whether the interpretation of these nouns is a predictable function of the nominalization rule and whether it depends on the properties of the base verb.
Abstract: This article presents a study of French deverbal nouns in -OIR that denote entities (lavoir, baignoire, tiroir), approached from the point of view of their formation by constructional morphology. The aim is to determine whether the interpretation of these nouns is a predictable function of the nominalization rule and whether it depends on the properties of the base verb, as has been observed for other nominalizations, or whether other parameters come into play. The study is based on an original large-scale corpus gathering all the deverbal nouns in -OIR from the TLF, supplemented by an equally substantial corpus of neologisms collected on the Web. The analysis is conducted on two fronts whose interaction is assessed: identifying the interpretation of the deverbal nouns in -OIR, and bringing to light the aspectual, argumental and syntactic properties of the base verbs. The results obtained at the end of the research show that, although these deverbal nouns display central and peripheral interpretive properties like other nominalizations, they differ from them by the absence of any observable regularity between the argument structure of the base verb and the interpretation of the derivative. Conversely, we show that the formation rule of these nouns involves two constraints: (1) the derivative in -OIR is identified with a participant in the scenario implied by the base verbal predicate (the place or the instrument) that is not provided for in the verb's argument structure: it can be recognized as a syntactic adjunct; (2) the semantic type of this adjunct is correlated with the transitivity of the base verb: transitive verbs favor modification by an instrumental adjunct, whereas intransitive verbs preferentially serve as bases for -OIR nouns that are locative modifiers.

12 citations


Journal ArticleDOI
TL;DR: In this paper, a new definition of "noncrystallographic symmetry" is proposed, which fully complies with that of "crystallographic symmetry" in Volume A of the International Tables for Crystallography.
Abstract: The definition of "noncrystallographic symmetry" given in Volume B of the International Tables for Crystallography actually corresponds to the concept of "local symmetry". A new definition of "noncrystallographic symmetry" is proposed, which fully complies with that of "crystallographic symmetry" in Volume A of the International Tables for Crystallography. The concept of "noncrystallographic symmetry" is, quite obviously, directly related to that of "crystallographic symmetry", so that once a definition of the latter is given, that of the former is obtained spontaneously. This is probably the reason why no explicit definition of "noncrystallographic symmetry" is given in Volume A of the International Tables for Crystallography. Even the concept of noncrystallographic point groups, presented in Section 10.1.4, is introduced without an explicit definition but only as the groups differing from the crystallographic point groups presented in Sections 10.1.2 and 10.1.3. As a matter of fact, one should not even need an explicit definition of "noncrystallographic symmetry" once that of "crystallographic symmetry" is given. Unfortunately, in the literature, and especially in the structural biology literature, the term "noncrystallographic symmetry" is used in a manner in striking contrast with what is directly implied by the definition of "crystallographic symmetry". This contradiction is so fundamental that it causes serious misunderstandings and misinterpretations. We will show that the use of "noncrystallographic symmetry" in structural biology is inconsistent with the accepted definition of "crystallographic symmetry" and suggest that an alternative terminology should be used. To understand the problem let us start with the definition of "crystallographic symmetry operations" given in (1), Section 8.1.5: A motion is called a crystallographic symmetry operation if a crystal pattern exists for which it is a symmetry operation. A motion is an isometry, a transformation keeping angles and distances unchanged, i.e. a transformation without deformation. A crystal pattern is the extension of a crystal structure to a periodic arrangement of whatever object, concrete or abstract, constitutes the structure. The atoms forming a crystal structure represent a special case of a crystal pattern. This definition applies to the n-dimensional Euclidean space E^n and can be expressed in a quantitative way with the aid of group theory. With respect to a basis of E^n, the symmetry operations of the space group of the crystal pattern are represented by (n + 1) x (n + 1) augmented matrices where the top-left n x n block represents the linear part of the operation (the part that leaves the origin fixed) and the additional column represents the vector part of the operation (the part which gives the translation component of the symmetry operation). The above definition of a crystallographic symmetry operation implies that with respect to a suitable basis of E^n (namely a primitive basis of the periodic pattern) the linear part of the matrix representing the operation is an integral matrix. In E^2 and E^3 this results in the well-known crystallographic restriction according to which only rotations (direct or inverse) of order 1, 2, 3, 4 and 6 are compatible with the existence of a crystal pattern. This restriction is extended to include also operations of order 5, 8, 10 and 12 for the four-dimensional space (2).
By an obvious contraposition, one is led to the definition of a noncrystallographic symmetry operation as a motion for which no crystal pattern exists allowing this motion as a symmetry operation. In particular, in E^2 and E^3, a noncrystallographic symmetry operation is a motion whose linear part is different from rotations (direct or inverse) of order 1, 2, 3, 4 and 6, and in E^4 different also
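A minimal numerical illustration of the crystallographic restriction invoked above: in E^2 (and E^3), a rotation by 2*pi/n can be represented by an integral matrix in a lattice basis only if its trace, 2*cos(2*pi/n), is an integer, which singles out the orders 1, 2, 3, 4 and 6. The short check below is a generic sketch, not code from the paper, and the two-dimensional trace criterion does not cover the four-dimensional extension to orders 5, 8, 10 and 12.

```python
# Sketch of the crystallographic restriction in E^2: a rotation of order n is a
# crystallographic operation only if its matrix can be made integral in a
# lattice basis, which forces its trace 2*cos(2*pi/n) to be an integer.
import math

def is_crystallographic(n: int, tol: float = 1e-9) -> bool:
    trace = 2.0 * math.cos(2.0 * math.pi / n)
    return abs(trace - round(trace)) < tol

print([n for n in range(1, 13) if is_crystallographic(n)])
# -> [1, 2, 3, 4, 6]; any other order (5, 7, 8, ...) is noncrystallographic in E^2 and E^3.
```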

10 citations


Journal ArticleDOI
TL;DR: The two new crown ethers presented in this study were synthesized in order to investigate two important features of ionophores, namely metal cation complexation and interfacial properties, and the way in which they interrelate.
Abstract: The two new crown ethers presented in this study were synthesized in order to investigate two important features of ionophores, namely metal cation complexation and interfacial properties, and the way in which they interrelate. The two derivatives were conceived as analogs of membrane phospholipids with respect to their amphiphilicity and geometry. They contain a hydrophilic 1,1'-dioxo-3,3'-dithio-14-crown ether headgroup and bear two myristoyl or stearoyl lateral chains. The length of the myristoyl and stearoyl derivatives in an extended conformation is comparable with the thickness of the individual leaflets of cell membranes. The membrane-related and complexation properties of the two crown ether derivatives were studied in monomolecular films spread on pure water and on aqueous solutions of mono-, di-, and trivalent metal salts. The properties of the monolayers are described quantitatively using thermodynamic models. The compression isotherms of the monolayers formed on different subphases show a clear-cut differentiation of the monovalent and di- or trivalent cations with both ligands. This differentiation was interpreted in terms of conformational changes occurring in the crown ether derivatives upon complexation. Molecular modeling indicates that the mono- and divalent cations are coordinated differently by the ligands, yielding complexes with different conformations. The differences of the conformations of the mono- and di- or trivalent cation complexes may be important from the point of view of the interactions with lipid membranes and the biological activity of these potential ionophores.

8 citations


Journal ArticleDOI
15 Jun 2008
TL;DR: In this paper, the authors analyse the qualification processes at work in the call-centre activity of customer relations by telephone, from the angle of the people, the work and the employment, and discuss the meaning of qualification in the context of this emerging activity, which is highly exposed to competition and flexibility.
Abstract: The customer-relations activity carried out by telephone in call centres is often presented as a simple, repetitive activity, organized in a way close to the Taylorist model and to deskilled work, highly controlled, requiring little or no skill and, ultimately, poorly paid. The article analyses the qualification processes at work in this activity from the angle of the people, the work and the employment, and discusses the meaning of qualification in the context of this emerging activity, which is highly exposed to competition and flexibility. It shows that this qualification is still barely constructed, either socially or institutionally. Regulation of the sector appears to be a twofold necessity, both social and economic.

8 citations


Book
01 Jan 2008
TL;DR: This collective volume gathers thirteen original chapters by researchers in Information and Communication Sciences and bears witness to the sensitive nature of several scientific issues that the Société de l'Information has brought to light.
Abstract: Despite the well-rehearsed political and economic discourse accompanying the techno-informational profusion from which the Information Society emerges, unprecedented situations appear, questions arise and complexities are revealed, both at the level of individuals who are expected to maintain a satisfactory level of activity in this changing society and at the level of the societal macro-structure, which must find a new stability. In view of the problems identified by many researchers, relating to the archiving of structured or unstructured digital data, accessibility and sharing, intellectual property, the digital document, information retrieval, the modes of interaction with technical systems, digital and informational skills, the qualification and relevance of information, informational profiles, etc., the political planning of the Information Society can only arouse circumspection, or even embarrassment, from a scientific point of view. This collective volume, which gathers thirteen original chapters by researchers in Information and Communication Sciences, bears witness to the sensitive nature of a number of scientific issues presented here, some of which are old and others more recently formulated, and which the new digital era has brought to light.

8 citations


Journal ArticleDOI
01 Feb 2008
TL;DR: In this article, several specific distributions of the magnetic field are determined; these distributions allow gravity to be compensated by means of axisymmetric coils (solenoids) and are useful for different kinds of microgravity experiments.
Abstract: It is important for space research to study the behavior of fluids such as liquid oxygen and liquid hydrogen under weightless conditions (microgravity). In addition, since 1991 some magnetic ground-based stations have made it possible to compensate gravity and reproduce space conditions. Magnetic devices allow low-cost microgravity experiments of unlimited duration. The goal of these techniques is to reach conditions (residual acceleration of the studied fluid) as good as or better than those obtained during parabolic flights. In this paper, several specific distributions of the magnetic field are determined. These distributions allow gravity to be compensated by means of axisymmetric coils (solenoids). This paper introduces several distributions of the residual forces useful for different kinds of microgravity experiments.
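For reference, the standard magnetic gravity-compensation condition underlying such devices can be written as follows. This is a textbook background relation, stated here under the usual assumption of a fluid with small, uniform volume susceptibility chi; it is not a formula quoted from the paper, which goes further by designing field distributions that minimize the residual acceleration over the sample volume.

```latex
% Background relation (not a result quoted from the paper): for a fluid of
% small volume susceptibility \chi, the magnetic body force per unit volume is
%   f_m = (\chi / 2\mu_0) \nabla B^2 ,
% so exact compensation of gravity along the vertical axis z requires
\[
  \frac{\chi}{2\mu_0}\,\frac{\partial B^2}{\partial z} = \rho\, g ,
  \qquad
  \text{residual acceleration: } \;
  \mathbf{a}_r = \mathbf{g} + \frac{\chi}{2\mu_0 \rho}\,\nabla B^2 .
\]
```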

Journal ArticleDOI
TL;DR: In this article, the influence of self-serving bias on pretrial negotiations is analyzed and it is shown that the effect is mitigated by the litigants' confidence in their own ability to predict the verdict.
Abstract: For contemporary legal theory, law is essentially an interpretative and hermeneutic practice (Ackerman (1991), Horwitz (1992)). A straightforward consequence is that legal disputes between parties are motivated by their divergent interpretations regarding what the law says about their case. This point of view fits well with the growing evidence showing that litigants' cognitive performance displays an optimistic or self-serving bias (Babcock and Loewenstein (1997)). This paper provides a theoretical analysis of the influence of such a cognitive bias on pretrial negotiations. However, we also consider that this effect is mitigated by the litigants' confidence in their own ability to predict the verdict; we model this issue by assuming that litigants are risk averse in the sense of Yaari (1987), i.e. they display a kind of (rational) probability distortion which is also well documented in experimental economics. In a model à la Bebchuk (1984), we show that the consequences of the self-serving bias are partially consistent with the "optimistic model", but that the parties' risk aversion has more ambiguous and less predictable effects. These results help to explain why beliefs about the outcome of the trial are not sufficient in themselves to understand litigants' behavior. As suggested by legal theory, the confidence the parties have in their beliefs is probably more important. JEL classification: D81, K42.
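To make the role of the self-serving bias concrete, the sketch below implements the textbook divergent-expectations ("optimistic") settlement model: each party values trial at its own perceived winning probability, and a settlement range exists only if those valuations overlap. It is a deliberately simplified stand-in; the Yaari-type probability distortion and risk-aversion effects analysed in the paper are not modelled, and all figures are hypothetical.

```python
# Minimal sketch of the textbook "optimistic model" of settlement, used only to
# illustrate how a self-serving bias can destroy the settlement range.
# All numbers are illustrative assumptions, not taken from the paper.

def settlement_range(judgment, p_plaintiff, p_defendant, cost_p, cost_d):
    """Return (lowest offer plaintiff accepts, highest offer defendant makes)."""
    low = p_plaintiff * judgment - cost_p     # plaintiff's reservation value
    high = p_defendant * judgment + cost_d    # defendant's reservation value
    return low, high

# Unbiased parties (common prior 0.5): a settlement range exists.
print(settlement_range(100_000, 0.5, 0.5, 10_000, 10_000))   # (40000, 60000)

# Self-serving bias (plaintiff believes 0.7, defendant 0.4): range is empty -> trial.
low, high = settlement_range(100_000, 0.7, 0.4, 10_000, 10_000)
print(low, high, "settle" if low <= high else "trial")        # 60000 50000 trial
```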

01 Jan 2008
TL;DR: In this article, the authors analyse the three levels of sustainability (economic, environmental and social) for each component of the small ruminant dairy supply chain (producers, processors, distributors and consumers), provide a better understanding of this chain's strengths and weaknesses, and present recommendations for improvement.
Abstract: Small ruminant production systems in the Middle Eastern region in general, and specifically in Lebanon, are facing a wide array of problems such as feed shortages, grazing and labour expenses, low productivity and poor management of the organic matter. In order to understand these problems and propose adequate solutions, it is important to consider the farming systems as part of a larger supply chain, including other interrelated components. The aim of this study is to analyse the three levels of sustainability (economic, environmental and social) for each component of the small ruminant dairy supply chain (producers, processors, distributors and consumers), provide a better understanding of this chain's strengths and weaknesses, and present recommendations for improvement. To reach this objective, a series of surveys covering specific sustainability parameters was carried out over different Lebanese regions for every component. The surveys, which included over 129 producers, 15 processors and 83 distributors, led to the calculation of 12 sustainability indicators. Principal component analysis and analysis of variance provided a profiling of the distribution of individuals according to their sustainability situation. The survey also covered 250 consumers, and a multiple component analysis showed the effect of the determinants of their purchasing behaviour and their conception of small ruminant farming sustainability. The results showed a need for further investments to improve product diversification and technological advances. Most of the electrical power is generated by thermal conversion technology and all the packaging materials used are non-recyclable, which poses an important threat to the supply chain on the environmental level. Convenience plays an important role in the consumers' purchasing behaviour. A sustainability-based labelling scheme would provide consumers with a guarantee for the development of the supply chain's sustainability.
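The profiling step mentioned above (principal component analysis on the indicator scores) could look roughly like the following sketch; the 129 x 12 data matrix is randomly generated here and the indicator values are placeholders, not the survey data from the study.

```python
# Sketch of the profiling step: principal component analysis on a
# respondents x indicators matrix. The random data are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(129, 12))              # 129 producers x 12 sustainability indicators

X_std = StandardScaler().fit_transform(X)   # put indicators on comparable scales
pca = PCA(n_components=3).fit(X_std)

print("explained variance ratio:", pca.explained_variance_ratio_)
scores = pca.transform(X_std)               # coordinates used to profile the producers
```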

Journal ArticleDOI
TL;DR: In this article, the authors defined a structured and functionalized support for future biomedical applications (model of low-density bioarray) by using stereolithography process with a special SU-8 photoresist and the reproducibility of the method was studied by analyzing the surface profile of the support.
Abstract: The main objective of this research was to define a structured and functionalized support for future biomedical applications (model of “low-density bioarray”). The experiments were carried out by using stereolithography process with a special SU-8 photoresist and the reproducibility of the method was studied by analyzing the surface profile of the support. Finally, a matrix of regular controlled sized wells was fabricated. Chemical reactions leading to covalent grafting were run to demonstrate that the inner surface of the wells remains still reactive after polymerization. The grafting of fluorophores with carboxylic functions activated by N-hydroxysuccinimide was studied as function of time, in order to determine the best reactions, conditions. Then, the grafting of two distinct fluorescent probes was led simultaneously inside the wells, showing the possibility of spatial localization of diverse reactions on the same support. The covalent and localized bindings were confirmed by fluorescence spectroscopy and microscopy analyses.

Posted Content
TL;DR: In this article, the authors extend the analysis of insurance contract design to the case of low-probability events, when there is a probability mass on the "no accident-zero loss" event.
Abstract: This paper extends the analysis of insurance contract design to the case of "low-probability events", when there is a probability mass on the "no accident-zero loss" event. The optimality of the deductible clause is discussed at both the theoretical and empirical levels.
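For reference, the deductible clause whose optimality is discussed above has the classic Arrow-type form below; this is standard background notation, with the low-probability feature of the paper appearing as a probability mass at the zero-loss event.

```latex
% Background notation (classic Arrow-type deductible contract, not a formula
% quoted from the paper): the indemnity pays losses in excess of a deductible d,
\[
  I_d(x) = \max(x - d,\, 0), \qquad x \ge 0,
\]
% with a probability mass on the "no accident" event, \Pr(x = 0) > 0, in the
% low-probability setting considered by the paper.
```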

Posted Content
TL;DR: In this article, the authors investigate the incidence of one component of this asymmetric predictive power, which has been exemplified in experimental economics, and compare the predictions of this model regarding the influence of individual priors with those in the literature.
Abstract: There is evidence that asymmetric information does exist between litigants: not in a way supporting Bebchuk (1984)'s assumption that defendants' degree of fault is private information, but more likely as a result of the parties' capacity to predict the outcome at trial (Osborne, 1999). In this paper, we investigate the incidence of one component of this asymmetric predictive power, which has been exemplified in experimental economics. We assume that litigants assess their priors on the plaintiff's prevailing rate at trial in a way consistent with the self-serving bias, which is the source of the asymmetric information. We compare the predictions of this model regarding the influence of individual priors with those in the literature. Finally, we analyse the influence of another source of probability distortion, i.e. risk aversion in the sense of Yaari (1987).


Book ChapterDOI
TL;DR: In this article, powder (and more rarely single-crystal) diffraction is used to provide the first structural information about zeolites; the resulting structure may then be used as a starting point for molecular dynamics/Monte Carlo studies of (guest-host) systems, which in turn give new insight into the structural hypotheses assumed during the refinement of the diffraction data.
Abstract: Powder, and more rarely single-crystal, diffraction provides the first structural information about zeolites. However, because it is unable to take account of local defects or disorder, diffraction has to be supplemented by ancillary information, e.g. from local spectroscopies. The resulting structure may be used subsequently as a starting point for molecular dynamics/Monte Carlo studies of (guest-host) systems, which in turn give new insight into the structural hypotheses assumed during the refinement of the diffraction data.

Posted Content
TL;DR: In this article, a description of pseudo-euclidean Jordan K-algebras is given in terms of double extensions and generalized double extensions; from one of these algebras, a twelve-dimensional Lie algebra is then constructed by the TKK construction.
Abstract: A Jordan algebra J is said to be pseudo-euclidean if J is endowed with an associative non-degenerate symmetric bilinear form B; B is said to be an associative scalar product on J. First, we provide a description of pseudo-euclidean Jordan K-algebras in terms of double extensions and generalized double extensions. In particular, we use this description to construct all pseudo-euclidean Jordan algebras of dimension less than or equal to 5. Then, from one of these algebras, we construct a twelve-dimensional Lie algebra by the "TKK" construction. Second, a description of symplectic pseudo-euclidean Jordan algebras is provided, and finally we describe a particular class of these algebras, namely the class of symplectic Jordan-Manin algebras. In addition to these descriptions, the paper shows that these last two classes are identical and provides further information on the structure of pseudo-euclidean Jordan algebras.
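In symbols, the associativity condition used in the abstract reads as follows (standard definition, restated here for convenience):

```latex
% Restatement of the definition used in the abstract: a Jordan algebra
% (J, \circ) is pseudo-euclidean when it carries a nondegenerate symmetric
% bilinear form B that is associative, i.e.
\[
  B(x \circ y,\, z) \;=\; B(x,\, y \circ z)
  \qquad \text{for all } x, y, z \in J .
\]
```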

Journal ArticleDOI
TL;DR: In this paper, the authors re-examine the function of credit rating agencies on the bond markets and propose a new model to analyze the relationship between rating announcements and bond spreads, based on unit root tests.
Abstract: The relation that may exist between rating announcements and bond spreads is unclear, despite the fact that many event studies have been dedicated to that problem. Several of them cast doubt upon the utility and the function of credit rating agencies on the bond markets, which are usually supposed to transmit information about issuer default risk. This paper aims to re-examine the function of credit rating agencies on the bond markets. The first part of the paper presents the model, while the second part is dedicated to an empirical study, relying on a new methodology and covering the rating actions of Moody's in the euro zone during the last ten years. The theoretical model re-examines the function of the rating agencies in an oligopolistic environment, considering the existence of informed and uninformed investors on the market and taking into account the different strategies of the agencies to acquire or maintain their credibility, to optimize their investigation effort, and to communicate about rating revisions. In the empirical part of the paper, a new methodology relying on unit root tests, especially Perron (1997), rather than classical methodologies relying on CARs, is implemented. It allows the rating events to be split into two main categories: non-informative and informative events, the latter characterized by a significant and long-lasting change in the relative spread surrounding the period of the rating change, which shows up as a structural break in the series. Thus, the main hypotheses expressed in the theoretical model may be validated by the empirical analysis; this confirms the different functions of the agency and a role depending on the market segment where the agency operates.
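As a rough illustration of the break-detection idea, the sketch below scans candidate break dates for a level shift in a simulated relative-spread series and keeps the most significant one (a Quandt-type sup-t search). It is a simplified stand-in, not an implementation of the Perron (1997) unit-root test with an endogenous break used in the paper, and the data are simulated.

```python
# Simplified stand-in for the break-detection idea: scan candidate break dates
# for a level shift in the relative-spread series and keep the most significant
# one. This is NOT Perron's (1997) test; the series is simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, true_break = 200, 120
spread = np.concatenate([rng.normal(1.0, 0.1, true_break),
                         rng.normal(1.4, 0.1, n - true_break)])   # shift at the rating event

best = (0.0, None)
for tb in range(20, n - 20):                       # trim the ends of the sample
    dummy = (np.arange(n) >= tb).astype(float)     # level-shift dummy
    X = sm.add_constant(dummy)
    res = sm.OLS(spread, X).fit()
    t_stat = abs(res.tvalues[1])                   # significance of the shift
    if t_stat > best[0]:
        best = (t_stat, tb)

print("estimated break date:", best[1], " sup |t| =", round(best[0], 1))
```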

Posted Content
TL;DR: In this article, the authors model a criminal organization as an agency in which the Principal and the Agent have different sensitivities towards the risk of arrest and punishment, and at the same time have different skills with respect to general organization tasks, crime realization and detection-avoidance activities (i.e. activities that reduce the probability of detection).
Abstract: In this paper, we model a criminal organization as an agency in which the Principal and the Agent have different sensitivities towards the risk of arrest and punishment, and at the same time have different skills with respect to general organization tasks, crime realization and detection-avoidance activities (i.e. activities that reduce the probability of detection). In this setup, we first compare two regimes of exclusive sanctions (either the sanctions are borne by the Principal/beneficiary of the crime, or they are borne by the Agent/perpetrator of the crime), and we analyze the comparative efficiency of the various instruments at the disposal of public authorities to prevent cooperation in criminal activities (frequency of control and level of monetary penalties). Finally, we study a case with joint liability.


01 Jan 2008
TL;DR: In this article, the authors provide a complete framework to aggregate different quantile and expectile models in order to reach more diversified value-at-risk (VaR) and expected shortfall (ES) measures, thereby limiting model risk.
Abstract: The objective of this paper is to provide a complete framework for aggregating different quantile and expectile models in order to reach more diversified VaR and ES measures and thereby limit model risk. Following Taylor (2008a and 2008b) and Gourieroux and Jasiak (2008), we present a new strategy which provides estimates of both Value-at-Risk (VaR) and Expected Shortfall (ES). This approach involves the aggregation of quantile and expectile models and the use of Asymmetric Least Squares (ALS) regression, which corresponds to the least squares analogue of quantile regression. We also introduce a new class of models for conditional VaR and ES modelling: Dynamic AutoRegressive Expectiles (DARE). We first briefly review the main literature on VaR and ES estimation, and we then explain how expectiles can be used to estimate VaR and ES in order to introduce the DARE approach. We finally present recent tests used to compare these different models (Candelon et al., 2008), also applied to the DARE approach.
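The asymmetric least squares (ALS) idea behind expectiles can be illustrated with a static example: the tau-expectile minimizes an asymmetrically weighted squared loss and can be computed by iteratively re-weighted means, as in the sketch below. This is a generic illustration of the loss mechanics, not the conditional DARE model of the paper; the return series is simulated.

```python
# Sketch of Asymmetric Least Squares (ALS): compute the tau-expectile of a
# return series by iteratively re-weighted means. Static illustration only;
# the conditional (dynamic) DARE model of the paper is not implemented here.
import numpy as np

def expectile(y, tau, n_iter=100, tol=1e-10):
    m = y.mean()
    for _ in range(n_iter):
        w = np.where(y >= m, tau, 1.0 - tau)   # asymmetric weights of the squared loss
        m_new = np.sum(w * y) / np.sum(w)      # weighted-mean fixed-point update
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(3)
returns = rng.standard_t(df=5, size=10_000) * 0.01   # fat-tailed daily returns (simulated)
print("5% expectile :", expectile(returns, 0.05))
print("5% quantile  :", np.quantile(returns, 0.05))  # empirical VaR, for comparison
```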

14 Mar 2008
TL;DR: In this article, the author defines the individualization of criminal law as the adaptation of the rules of criminal law as a whole to the criminal phenomenon and to those who are its actors, and shows how it helps to explain the ratio legis of increasingly specialized offences targeting or protecting certain categories of offenders or victims, the multiplication of the actors of repression, and the ever-growing number of sanctions that can be pronounced by the judge.
Abstract: The legislative disorder of recent years, often rightly criticized, can be seen as the expression of an emerging movement: the individualization of criminal law. Defined as the adaptation of the rules of criminal law as a whole to the criminal phenomenon and to those who are its actors, it makes it possible to better understand the ratio legis of increasingly specialized offences targeting or protecting certain categories of offenders or victims, the multiplication of the actors of repression, and the ever-growing number of sanctions that can be pronounced by the judge, up to the advent of the 'sanction-réparation'. The reality of this phenomenon is not in question, but it raises technical difficulties and casts doubt on the new meaning attributed to criminal repression.

Posted Content
TL;DR: In this paper, the authors focus on how the story writers mentally represent their acts and apply the method of the status to the logic of creation; all of the life stories are first examined to determine the mental representations favouring creation.
Abstract: The research carried out is based on 330 life stories. It focuses on how the story writers mentally represent their acts. It therefore first highlights mental representation, a concept that has been used for the last 120 years in the management sciences as a key element of decision-making and action processes. The study then summarises the method of the status, which brings out the representations one can have of a single situation. The method of the status is finally applied to the logic of creation: all of the life stories are first examined to determine the mental representations favouring creation. The life stories of the creators are then examined to discover the mental representations damaging the logic of creation once it is under way.

31 Mar 2008
TL;DR: In this paper, a stochastic automaton is implemented to evaluate the dynamic reliability and availability of a hybrid system; two variables are defined for the automaton, one to describe the behaviour of the process variable and the other to describe time and thus manage the stochastic part of the system.
Abstract: In addition to the natural hybrid character (continuous + discrete events) of a dynamic system, the stochastic character of the system, imposed by component failures or by uncertainties in our knowledge of the system, must also be taken into account. For this reason, we have implemented a stochastic automaton in order to evaluate the dynamic reliability and availability of a hybrid system, since this evaluation is only possible by simulation. The stochastic automaton takes into account the different operating modes of the system and the transitions between them on deterministic and stochastic events. The former are produced by threshold crossings of the continuous variable, the latter by a random generator according to their probability laws. We defined two variables for the automaton, one to describe the behaviour of the process variable and the other to describe time and thus manage the stochastic part of the system. The automaton gives us access to the dependability measures (reliability, availability), which are obtained statistically over a large number of simulations (Monte Carlo method). KEYWORDS: stochastic automata, dynamic reliability, availability, simulation of hybrid dynamic systems.

1. INTRODUCTION. An important characteristic of many industrial systems is that their behaviour, for example the response to a disturbance, changes over time because of the interactions between the components of the system or with the environment (Siu, 1994). Each given behaviour of the system is defined by its own physical laws; the transition from one behaviour to another may be due to several causes: human intervention, the action of the control device acting under the influence of the physical variables that describe the state of the system (detection of an alarm), a discontinuity inherent to the system (a diode in a circuit, intermittent coupling), or a component failure (in which case the system may itself be in a failed state). In addition to the hybrid character (continuous + discrete events), the stochastic character of the system, imposed by component failures or by uncertainties in our knowledge of the system, must therefore be taken into account. It is therefore very important, at present, to be able to evaluate the reliability of hybrid dynamic systems. Dynamic reliability must take into account the interactions between the dynamic, deterministic behaviour of a system and the stochastic behaviour of its components. The mathematical complexity of its computation leads us to resort to simulation. For this reason, in this paper, we present a finite-state stochastic automaton. We implemented it to evaluate the reliability and availability of a hybrid dynamic system, in order to take into account the interactions between the components and the process variable, as well as the interaction with the failures of the components of the system. In addition, we present the results of the modelling and simulation of this system, driven by the stochastic automaton. To obtain the dependability measures (reliability and availability), we applied a Monte Carlo simulation to the model.
We considered the MTTF (mean time to failure), i.e. the mean operating time of a component before the first failure (Villemeur, 1988), in order to give an estimate of the system reliability in terms of time. The importance of the implementation of the stochastic automaton lies in the fact that it takes into account the different operating modes of the system and the transitions between them on deterministic and stochastic events, the former being produced by threshold crossings of the continuous variables and the latter by a random generator according to their probability laws. To this end, we defined two variables for the automaton, one to describe the physical behaviour of the system and the other to describe the time, which
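The kind of Monte Carlo evaluation described above can be sketched as follows: a continuous process variable evolves according to the current discrete mode, a component failure occurs at an exponentially distributed random time (stochastic transition), and system failure is declared when the variable crosses a threshold (deterministic transition). The single-component structure, rates and thresholds are illustrative assumptions, not the case study of the paper.

```python
# Monte Carlo sketch of a hybrid (continuous + discrete, stochastic) reliability
# evaluation: rates, thresholds and the one-component structure are assumptions.
import numpy as np

rng = np.random.default_rng(4)

def one_history(t_max=1000.0, dt=0.1, lam=1e-3, level0=5.0, drift=0.02, limit=10.0):
    """Return the time at which the level threshold is crossed (system failure)."""
    t_fail_component = rng.exponential(1.0 / lam)   # stochastic transition (component failure)
    level, t = level0, 0.0
    while t < t_max:
        controlled = t < t_fail_component           # discrete mode of the automaton
        level += (0.0 if controlled else drift) * dt  # continuous dynamics of the current mode
        if level >= limit:                          # deterministic event: threshold crossing
            return t
        t += dt
    return np.inf                                   # no system failure within the mission time

n, t_mission = 5000, 1000.0
failure_times = np.array([one_history() for _ in range(n)])
reliability = np.mean(failure_times > t_mission)            # R(t_mission)
observed = failure_times[np.isfinite(failure_times)]
mttf = observed.mean()                                       # crude MTTF over observed failures
print("R(t_mission) =", reliability, "  MTTF estimate =", round(mttf, 1))
```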

27 May 2008
TL;DR: In this paper, the author offers a synthesis of studies on the question of television programming in the French terrestrial network.
Abstract: Programming occupies an important place in television activity, because it must foster as well as possible the encounter between programmes and audiences. Whereas its role was initially confined to ensuring complementarity between the different public channels, programming is now an integral part of the channels' strategy in the war they wage against one another. The function of programmer thus takes on its full importance in the organizational chart of television channels, even if this activity is often the result of consultation between the channel's different departments. The intense competition between channels and their desire to know their audiences better have also placed an increasingly sophisticated arsenal of tools at the programmers' disposal. The author offers a synthesis of studies on the question of television programming in the French terrestrial network.

Journal Article
TL;DR: In this paper, a physical model of a fractured chalk block is compared with numerical modelling, which illustrates the difficult process involved in comparing an in situ rock mass with the results of numerical models; the experiment encourages improvements to the comparison process, which allows the representativity of the geomodel to be repeatedly analysed (Which discontinuities are mechanically important?) and the mechanical model to be qualified (From which comparisons? On how many points? On which measurements?).
Abstract: A physical model of a fractured chalk block is compared with numerical modelling. This comparison illustrates the difficult process involved in comparing an in situ rock mass with the results of numerical models. After presenting the physical model and the procedure of its centrifugation, different geometrical and mechanical models run with the RESOBLOK and 3DEC software are tackled. The rupture mechanisms observed are quite well represented, although dimension problems appear in the RESOBLOK modelling. This experiment encourages us to improve the comparison process, which allows the representativity of the geomodel to be repeatedly analysed (Which discontinuities are mechanically important?) and the mechanical model to be qualified (From which comparisons? On how many points? On which measurements?).

Posted ContentDOI
TL;DR: The authors assess the meaning of the controversies about the French and British central banks' solidarity over the bimetallic period (1850-1870) and highlight how historical case studies can become the instrument of a distorted economic view.
Abstract: In this article, we assess the meaning of the controversies about the solidarity between the French and British central banks over the bimetallic period (1850-1870). Our purpose is to highlight how historical case studies can become the instrument of a distorted economic view. In the mainstream literature, the argument based on the correlation of discount rates is turned into a story of rivalry between the two issuing institutions. This view omits the reading of bimetallism as a coordinated discount-rate policy of the French and British central banks, a reading that supports bimetallism as a self-equilibrating system.

01 Jan 2008
TL;DR: The discretization in these cut-cells should be designed such that: (a) the overall accuracy of the method is not severely diminished and (b) the high computational efficiency of the structured solver is preserved.
Abstract: Much attention has recently been devoted to the extension of Cartesian grid flow solvers to complex geometries by immersed boundary (IB) methods (see [4, 6] for recent reviews). In these methods, the irregular boundary is not aligned with the computational grid, and the treatment of the cells which are cut by the boundary remains an important issue. Indeed, the discretization in these cut-cells should be designed such that: (a) the overall accuracy of the method is not severely diminished and (b) the high computational efficiency of the structured solver is preserved.
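A small sketch of the cut-cell bookkeeping mentioned above: for a circular boundary immersed in a Cartesian grid, estimate each cell's fluid area fraction by sub-sampling and flag the cut cells; cells with a very small fluid fraction are the ones whose discretization must be treated carefully to satisfy requirements (a) and (b). The geometry, resolution and sampling approach are illustrative, not taken from the paper.

```python
# Sketch of cut-cell bookkeeping for an immersed boundary on a Cartesian grid:
# estimate, by sub-cell sampling, the fluid area fraction of each cell cut by a
# circular boundary. Small cut cells typically need special treatment (e.g.
# merging with neighbours) to preserve accuracy and a stable time step.
import numpy as np

nx = ny = 32                    # Cartesian grid on [0, 1]^2
xc, yc, r = 0.5, 0.5, 0.3       # immersed circular boundary (solid inside)
ns = 8                          # sub-samples per direction in each cell

h = 1.0 / nx
s = (np.arange(ns) + 0.5) / ns  # sub-sample offsets within a cell
frac = np.zeros((nx, ny))
for i in range(nx):
    for j in range(ny):
        xs = (i + s)[:, None] * h
        ys = (j + s)[None, :] * h
        fluid = (xs - xc) ** 2 + (ys - yc) ** 2 > r ** 2
        frac[i, j] = fluid.mean()          # fluid area fraction of cell (i, j)

cut = (frac > 0.0) & (frac < 1.0)          # cells actually cut by the boundary
print("cut cells:", cut.sum(), " smallest fluid fraction:", frac[cut].min())
```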