
Showing papers by "École normale supérieure de Cachan" published in 2002


Journal ArticleDOI
TL;DR: An overview of the CASL design is given, and all the main concepts and constructs of CASL are briefly explained and illustrated -- the reader is referred to the CASL Language Summary for further details.

233 citations


Journal ArticleDOI
TL;DR: In this paper, the Navier-Stokes equations with periodic boundary conditions perturbed by a space-time white noise were studied and a stationary martingale solution was constructed.

222 citations


Journal ArticleDOI
TL;DR: For constant linear systems, integral reconstructors and generalized proportional-integral controllers are introduced, which make it possible to bypass the derivative term in classic PID controllers and, more generally, the usual asymptotic observers.
Abstract: For constant linear systems we introduce integral reconstructors and generalized proportional-integral controllers, which make it possible to bypass the derivative term in classic PID controllers and, more generally, the usual asymptotic observers. Our approach, which is mainly of algebraic flavour, is based on the module-theoretic framework for linear systems and on operational calculus in Mikusinski's setting. Several examples are discussed.

213 citations
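
The reconstructor idea can be illustrated on the simplest possible plant. The sketch below regulates a double integrator without ever measuring or differentiating the velocity: the velocity estimate is just the running integral of the input, and an extra integral of the output compensates the unknown initial velocity. The plant, gains and Euler discretization are hypothetical choices of ours; none of the paper's module-theoretic or operational-calculus machinery is reproduced.

```python
# Minimal sketch of a generalized proportional-integral (GPI) controller on a
# double integrator y'' = u.  Hypothetical gains; illustration only.
import numpy as np

dt, T = 1e-3, 20.0
k2, k1, k0 = 3.0, 3.0, 1.0          # closed-loop poles at s = -1 (triple root)

y, v = 1.0, -0.5                    # plant state; the velocity v is never measured
v_hat = 0.0                         # integral reconstructor: v_hat(t) = integral of u
z = 0.0                             # integral of the measured output y

for _ in range(int(T / dt)):
    # GPI law: the reconstructor replaces the derivative term of a classical PID;
    # the integral of y compensates the bias due to the unknown initial velocity.
    u = -k2 * v_hat - k1 * y - k0 * z
    y, v = y + dt * v, v + dt * u   # plant (explicit Euler)
    v_hat += dt * u                 # controller integrators
    z += dt * y

print(f"final output y = {y:.2e}")  # y is driven to 0 without differentiating it
```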


Journal ArticleDOI
TL;DR: The application of the eXtended finite element method (X-FEM) to thermal problems with moving heat sources and phase boundaries is presented, with particular attention to the ability of the method to capture the highly localized, transient solution in the vicinity of a heat source or material interface.
Abstract: The application of the eXtended finite element method (X-FEM) to thermal problems with moving heat sources and phase boundaries is presented. Of particular interest is the ability of the method to capture the highly localized, transient solution in the vicinity of a heat source or material interface. This is effected through the use of a time-dependent basis formed from the union of traditional shape functions with a set of evolving enrichment functions. The enrichment is constructed through the partition of unity framework, so that the system of equations remains sparse and the resulting approximation is conforming. In this manner, local solutions and arbitrary discontinuities that cannot be represented by the standard shape functions are captured with the enrichment functions. A standard time-projection algorithm is employed to account for the time-dependence of the enrichment, and an iterative strategy is adopted to satisfy local interface conditions. The separation of the approximation into classical shape functions that remain fixed in time and the evolving enrichment leads to a very efficient solution strategy. The robustness and utility of the method is demonstrated with several benchmark problems involving moving heat sources and phase transformations.

141 citations
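
The partition-of-unity enrichment on which the abstract relies is easiest to see in one dimension. The toy sketch below shows that enriching standard hat functions with a shifted distance-to-interface function lets a coarse mesh capture a kink that the standard basis cannot. The mesh, interface position and kinked target field are invented for the illustration; the paper's thermal formulation, time projection and interface iteration are not reproduced.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 6)           # coarse mesh; interface not on a node
x_g   = 0.47                               # interface position (hypothetical)
x     = np.linspace(0.0, 1.0, 801)

# standard piecewise-linear shape functions N_i(x), built by interpolating deltas
N = np.stack([np.interp(x, nodes, np.eye(len(nodes))[i]) for i in range(len(nodes))])

# partition-of-unity enrichment: psi(x) = |x - x_g|, shifted to vanish at the nodes
psi, psi_nodes = np.abs(x - x_g), np.abs(nodes - x_g)
E = np.stack([N[i] * (psi - psi_nodes[i]) for i in range(len(nodes))])

# target field with a kink at the interface (e.g. temperature across a material change)
u = np.where(x < x_g, 2.0 * x, 2.0 * x_g + 0.5 * (x - x_g))

def best_fit_error(basis):
    """Best least-squares approximation error of u in the span of the basis rows."""
    coeffs, *_ = np.linalg.lstsq(basis.T, u, rcond=None)
    return np.max(np.abs(basis.T @ coeffs - u))

print("standard FE basis error:", best_fit_error(N))                  # kink is missed
print("enriched basis error   :", best_fit_error(np.vstack([N, E])))  # ~ machine precision
```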


Journal ArticleDOI
TL;DR: This paper generalizes the explicit/implicit time-integration algorithms pioneered by Belytschko and Hughes, and the FETI domain decomposition methods, to the case where the same Newmark scheme, but with different β and γ coefficients and different time steps, is specified in each subdomain.

137 citations
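
For readers unfamiliar with the Newmark family, the sketch below shows a plain single-degree-of-freedom Newmark integrator in which β, γ and the time step are explicit inputs; these are exactly the quantities the paper allows to differ from one subdomain to another. The subdomain coupling itself (the FETI-based gluing that is the paper's contribution) is not reproduced, and the oscillator data are hypothetical.

```python
import numpy as np

def newmark(m, c, k, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Single-DOF Newmark integrator for m*a + c*v + k*u = f(t).  beta, gamma and dt
    are the per-subdomain quantities that the paper allows to differ."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m
    s = m + gamma * dt * c + beta * dt**2 * k           # effective "mass"
    for n in range(1, n_steps + 1):
        u_pred = u + dt * v + (0.5 - beta) * dt**2 * a  # predictors
        v_pred = v + (1.0 - gamma) * dt * a
        a = (f(n * dt) - c * v_pred - k * u_pred) / s   # balance enforced at t_{n+1}
        u = u_pred + beta * dt**2 * a                   # correctors
        v = v_pred + gamma * dt * a
    return u, v

# undamped oscillator, exact solution cos(t): the average-acceleration choice
# (beta, gamma) = (0.25, 0.5) is unconditionally stable and conserves energy;
# other admissible choices trade accuracy for numerical dissipation.
u10, _ = newmark(m=1.0, c=0.0, k=1.0, f=lambda t: 0.0, u0=1.0, v0=0.0,
                 dt=0.01, n_steps=1000)
print("u(10) ≈", u10, "  exact:", np.cos(10.0))
```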


Journal ArticleDOI
TL;DR: In this paper, the authors present a model aimed at describing three failure modes of a concrete structure subjected to an explosion, based on visco-plasticity and rate-dependent damage, in which a homogenization method is used in order to include the variation of the material porosity due to compaction.
Abstract: In a concrete structure subjected to an explosion, for example a concrete slab, the material is subjected to various states of stress which lead to several modes of rupture. Close to the explosive, a state of strong hydrostatic compression is observed; this state of stress produces an irreversible compaction of the material. Away from the zone of explosion, confinement decreases and the material undergoes compression under a state of stress that is only slightly triaxial. Finally, the compression wave can be reflected at a free surface and become a tensile wave which, by interaction with the compression wave, produces scabbing. We present, in this paper, a model aimed at describing these three failure modes. It is based on visco-plasticity and rate-dependent damage, in which a homogenization method is used in order to include the variation of the material porosity due to compaction. The model predictions are compared with several experiments performed on the same concrete. Computations of split Hopkinson tests on confined concrete, a tensile test with scabbing, and an explosion on a concrete slab are presented.

120 citations


Journal ArticleDOI
TL;DR: In this paper, the authors re-examine geometrically exact models for structures, such as beams, shells or solids with independent rotation field, with respect to invariance under superposed rigid body motion.

103 citations



Journal ArticleDOI
TL;DR: It is proposed that the dimeric core recognizes specific motifs, with the second interface being critical for their correct head-to-tail assembly, which supports the view that the same interacting interfaces are also involved on the DNA.

95 citations


Journal ArticleDOI
TL;DR: In this paper, a rank-dependent preference functional is obtained in the von Neumann and Morgenstern expected-utility set-up when the independence axiom is weakened to stochastic dominance and a probability trade-off consistency condition.
Abstract: This paper uses “revealed probability trade-offs” to provide a natural foundation for probability weighting in the famous von Neumann and Morgenstern axiomatic set-up for expected utility. In particular, it shows that a rank-dependent preference functional is obtained in this set-up when the independence axiom is weakened to stochastic dominance and a probability trade-off consistency condition. In contrast with the existing axiomatizations of rank-dependent utility, the resulting axioms allow for complete flexibility regarding the outcome space. Consequently, a parameter-free test/elicitation of rank-dependent utility becomes possible. The probability-oriented approach of this paper also provides theoretical foundations for probabilistic attitudes towards risk. It is shown that the preference conditions that characterize the shape of the probability weighting function can be derived from simple probability trade-off conditions.

92 citations
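
A small numerical example makes the rank-dependent functional concrete. The utility and weighting functions below are standard textbook choices (square-root utility, power weighting), not the paper's, whose contribution is the axiomatic foundation rather than any particular parametric form.

```python
import numpy as np

def rdu(outcomes, probs, u, w):
    """Rank-dependent utility: rank outcomes from best to worst; the decision weight
    of each outcome is the increment of w applied to the cumulative probability."""
    order = np.argsort(outcomes)[::-1]                  # best outcome first
    x, p = np.asarray(outcomes, float)[order], np.asarray(probs, float)[order]
    cum = np.cumsum(p)
    weights = w(cum) - w(np.concatenate(([0.0], cum[:-1])))
    return float(np.sum(weights * u(x)))

u = lambda x: np.sqrt(x)          # concave utility (hypothetical choice)
w = lambda p: p ** 0.7            # probability weighting function (hypothetical choice)

lottery = ([100.0, 0.0], [0.05, 0.95])                  # 5% chance of winning 100
print("expected utility      :", rdu(*lottery, u=u, w=lambda p: p))  # identity weighting
print("rank-dependent utility:", rdu(*lottery, u=u, w=w))  # the rare gain is overweighted
```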


Journal ArticleDOI
TL;DR: In this paper, an extension of an energy-conserving time-integration scheme is presented which introduces the desirable properties of controllable energy decay and numerical dissipation of the high-frequency contribution to the total response.

Journal ArticleDOI
01 Jan 2002
Abstract: We first review some recent and current research works that have contributed to very significant progress on the theoretical foundation and numerical implementation of shell models over the last several years. We then discuss theoretical formulations of a shell model accounting for through-the-thickness stretching, which allows for large deformations and the direct use of 3D constitutive equations. Three different possibilities for implementing this model within the framework of the finite element method are examined, one leading to 7 nodal parameters and the remaining two to 6 nodal parameters. Comparisons are performed between the 7-parameter shell model with no simplification of kinematic terms and the 7-parameter shell model which exploits the usual simplifications of the Green-Lagrange strain measures. Comparisons are also presented of two different ways of implementing the incompatible mode method for reducing the number of shell model parameters to 6: one implementation uses an additive decomposition of the strains and the other an additive decomposition of the deformation gradient. Several numerical examples are given to illustrate the performance of the shell elements developed herein.

Journal ArticleDOI
TL;DR: In this article, a damage model that can be used over the whole range of loadings (from quasi-static to dynamic ones) is developed; qualitative and quantitative validations are given by using a real-time visualization configuration to analyze the degradation kinetics during impact and a moiré technique to measure the strains in a ceramic tile during impact.
Abstract: Modeling the dynamic fragmentation of brittle materials usually implies choosing between a discrete description of the number of fragments and a continuum approach with damage variables. A damage model that can be used over the whole range of loadings (from quasi-static to dynamic ones) is developed. The deterministic or probabilistic nature of fragmentation is discussed. Qualitative and quantitative validations are given by using a real-time visualization configuration for analyzing the degradation kinetics during impact and a moiré technique to measure the strains in a ceramic tile during impact. Finally, a closed-form solution for the change in the number of broken defects with the applied stress gives a way of optimizing the microstructure of ceramics for armor applications.

Journal ArticleDOI
TL;DR: The Mandel parameter Q(T) is measured over 4 orders of magnitude of observation time scale T by recording every photocount; sub-Poissonian statistics is clearly observed, and the probability of two-photon events is 10 times smaller than for Poissonian pulses.
Abstract: We studied intensity fluctuations of a single photon source relying on the pulsed excitation of the fluorescence of a single molecule at room temperature. We directly measured the Mandel parameter Q(T) over 4 orders of magnitude of observation time scale T by recording every photocount. On the time scale of a few excitation periods, sub-Poissonian statistics is clearly observed and the probability of two-photon events is 10 times smaller than for Poissonian pulses. At longer times, blinking in the fluorescence, due to the molecular triplet state, produces an excess of noise.
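
The quantity reported above has a simple operational definition: Q(T) = Var(n_T)/⟨n_T⟩ − 1, where n_T is the number of counts in a window of duration T, so Q = 0 for Poissonian light and Q < 0 for sub-Poissonian light. The sketch below computes it from a record of arrival times; the data are a synthetic Poisson process (so Q ≈ 0 at every scale), shown only to illustrate the bookkeeping, not the molecule's statistics.

```python
import numpy as np

def mandel_q(arrival_times, T, t_total):
    """Mandel parameter Q(T) = Var(n)/<n> - 1, with n the photocount in windows of
    duration T.  Q = 0 for Poissonian light, Q < 0 for sub-Poissonian light."""
    edges = np.arange(0.0, t_total + T, T)
    counts, _ = np.histogram(arrival_times, bins=edges)
    return counts.var() / counts.mean() - 1.0

# synthetic Poissonian photocount record (illustration of the estimator only)
rng = np.random.default_rng(0)
t_total, rate = 100.0, 200.0
times = np.sort(rng.uniform(0.0, t_total, rng.poisson(rate * t_total)))
for T in (0.01, 0.1, 1.0):
    print(f"Q({T}) ≈ {mandel_q(times, T, t_total):+.3f}")   # all close to 0 here
```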

Journal ArticleDOI
TL;DR: An experimental procedure is proposed, together with a mathematical proof that the algorithmic procedure is intrinsic, i.e. does not depend asymptotically upon the quantization mesh used for the topographic map, and a proof of its contrast invariance.
Abstract: Most image analysis algorithms are defined for the grey level channel, particularly when geometric information is looked for in the digital image. We propose an experimental procedure in order to decide whether this attitude is sound or not. We test the hypothesis that the essential geometric content of an image is contained in its level lines. The set of all level lines, or topographic map, is a complete contrast-invariant image description: it yields a line structure by far more complete than any edge description, since we can fully reconstruct the image from it, up to a local contrast change. We then design an algorithm constraining the color channels of a given image to have the same geometry (i.e. the same level lines) as the grey level. If the assumption that the essential geometrical information is contained in the grey level is sound, then this algorithm should not alter the colors of the image or its visual aspect. We display several experiments confirming this hypothesis. Conversely, we also show the effect of imposing the color of an image onto the topographic map of another one: it results, in a striking way, in the dominance of the grey level and the fading of a color deprived of its geometry. We finally give a mathematical proof that the algorithmic procedure is intrinsic, i.e. does not depend asymptotically upon the quantization mesh used for the topographic map. We also prove its contrast invariance.
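
Two of the claims above, that the topographic map is a complete image description and that it is contrast invariant, can be checked in a few lines on a synthetic image. The sketch below is only the naive threshold-decomposition version of these facts; it does not reproduce the paper's algorithm for transporting the grey-level geometry to the color channels.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.integers(0, 256, size=(64, 64))        # synthetic 8-bit "image"

levels = range(256)
upper_sets = [u >= lam for lam in levels]      # level-set data behind the topographic map

# completeness: the image is recovered from its upper level sets (at each pixel,
# keep the largest level whose upper set still contains the pixel)
recon = np.zeros_like(u)
for lam, X in zip(levels, upper_sets):
    recon = np.where(X, lam, recon)
print("exact reconstruction:", bool(np.all(recon == u)))                 # True

# contrast invariance: a strictly increasing contrast change g relabels the level
# sets but does not change them as sets of pixels
g = lambda v: np.sqrt(np.asarray(v, dtype=float))
print("level sets unchanged by g:",
      all(np.array_equal(u >= lam, g(u) >= g(lam)) for lam in levels))   # True
```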

Journal ArticleDOI
TL;DR: In this paper, the precise structure of second shape derivatives, that is, derivatives of functions whose argument is a variable domain, is derived for Fréchet derivatives in adequate Banach spaces; the starting point is a functional-analytic statement of the fact that small regular perturbations of a given regular domain may be uniquely represented through normal deformations of the boundary of this domain.
Abstract: In this paper, we describe the precise structure of second "shape derivatives", that is, derivatives of functions whose argument is a variable subset of \( \mathbb{R}^N \). This is done for Fréchet derivatives in adequate Banach spaces. Besides the structure itself, interest lies in the way it is derived: the starting point is a "functional analytic" statement of the well-known fact that small regular perturbations of a given regular domain may be "uniquely" represented through normal deformations of the boundary of this domain. The approach involves the implicit function theorem in a convenient functional space. A consequence of this "normal representation" property is that any shape functional may be described through a functional depending on functions defined only on the boundary of the given domain. Differentiating this representation twice leads to the structure theorem. We recover the fact that, at critical shapes, the second derivative around the given domain depends only on the normal component of the deformation vector field at its boundary. Some examples are explicitly computed.
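
Schematically, the structure result described above can be written as follows; the notation (the boundary parametrization by a normal graph h, the induced functional j, and the remainder Z) is ours and is only meant to convey the shape of the statement, not to quote the paper.

```latex
% Shapes near a regular domain \Omega are parametrized by normal graphs of the boundary:
%   \partial\Omega_h = \{\, x + h(x)\, n(x) \;:\; x \in \partial\Omega \,\},
% so any shape functional J induces j(h) := J(\Omega_h) on scalar boundary functions h.
% For deformation fields \theta, \xi one then has, schematically,
\[
  J'(\Omega)(\theta) = j'(0)\,(\theta\cdot n), \qquad
  J''(\Omega)(\theta,\xi) = j''(0)\,(\theta\cdot n,\;\xi\cdot n) + j'(0)\,\bigl(Z(\theta,\xi)\bigr),
\]
% where Z(\theta,\xi) collects boundary terms.  At a critical shape j'(0) = 0, so the
% second derivative depends only on the normal components \theta\cdot n and \xi\cdot n,
% which is the fact recovered at the end of the abstract.
```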

Journal ArticleDOI
TL;DR: In this article, the authors used lattice measurements of the cubic austenite and the monoclinic martensite cells to determine the nature of the phase transformation, i.e. an exact interface between the parent phase and an untwinned martensite variant.
Abstract: Biaxial proportional loading tests such as tension (compression)–internal pressure and bi-compression tests are performed on Cu-Zn-Al and Cu-Al-Be shape memory polycrystals. These tests lead to the experimental determination of the initial surface of phase transformation (austenite→martensite) in the principal stress space (σ1,σ2). A first “micro–macro” modeling is performed as follows. Lattice measurements of the cubic austenite and the monoclinic martensite cells are used to determine the “nature” of the phase transformation, i.e. an exact interface between the parent phase and an untwinned martensite variant. The yield surface is obtained by a simple (Sachs constant stress) averaging procedure assuming random texture. A second modeling, performed in the context of the thermodynamics of irreversible processes, consists of a phenomenological approach at the scale of the polycrystal. These two models fit the experimental phase transformation surface well.

Journal ArticleDOI
TL;DR: Using the Kohonen neural network, the electrostatic potentials on the molecular surfaces of 14 styrylquinoline derivatives were drawn as comparative two-dimensional maps and used to predict the activity of 10 new compounds before the experimental data were known; the measurements performed later turned out to agree with the predictions.
Abstract: Using the Kohonen neural network, the electrostatic potentials on the molecular surfaces of 14 styrylquinoline derivatives were drawn as comparative two-dimensional maps and compared with their known human immunodeficiency virus (HIV)-1 replication blocking potency in cells. A feature of the potential map was discovered to be related to the HIV-1 blocking activity and was used to unmask the activity of five further analogues, previously described but whose cytotoxicity precluded an estimation of their activity, and to predict the activity of 10 new compounds for which experimental data were not yet available. The measurements performed later turned out to agree with the predictions.
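
For readers who have not used a Kohonen network, the sketch below is a minimal self-organizing map: high-dimensional descriptors (here random stand-ins, not electrostatic potentials) are projected onto a 2-D grid so that similar inputs land on nearby cells, which is what makes the comparative two-dimensional maps of the abstract possible. All parameters are hypothetical and no chemistry is reproduced.

```python
import numpy as np

def train_som(data, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen self-organizing map (online training, Gaussian neighbourhood)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        d2 = ((weights - x) ** 2).sum(axis=2)              # distance to every cell
        bi, bj = np.unravel_index(np.argmin(d2), d2.shape)  # best-matching unit
        frac = t / n_iter                                   # decaying rate and radius
        lr, sigma = lr0 * (1.0 - frac), sigma0 * (1.0 - frac) + 0.5
        hood = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * sigma ** 2))
        weights += lr * hood[:, :, None] * (x - weights)
    return weights

# two synthetic clusters of 8-dimensional descriptors end up in different map regions
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.3, (50, 8)), rng.normal(2.0, 0.3, (50, 8))])
som = train_som(data)
bmu = lambda x: np.unravel_index(np.argmin(((som - x) ** 2).sum(axis=2)), som.shape[:2])
print("a cluster-A sample maps to cell", bmu(data[0]), "a cluster-B sample to", bmu(data[-1]))
```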

Journal ArticleDOI
Abstract: We investigate the influence of a random perturbation of white noise type on the finite time blow up of solutions of a focusing supercritical nonlinear Schrödinger equation. We prove that, contrary to the deterministic case, any initial datum gives rise to a solution which develops singularities. Moreover, the singularities appear immediately. We use a stochastic generalization of the variance identity and a control argument.

Journal ArticleDOI
TL;DR: In this article, a Korn-like inequality for a vector field in a bounded open set of, satisfying a tangency boundary condition was shown to hold true if and only if the domain is not axisymmetric.
Abstract: We state and prove a Korn-like inequality for a vector field in a bounded open set of , satisfying a tangency boundary condition. This inequality, which is crucial in our study of the trend towards equilibrium for dilute gases, holds true if and only if the domain is not axisymmetric. We give quantitative, explicit estimates on how the departure from axisymmetry affects the constants; a Monge–Kantorovich minimization problem naturally arises in this process. Variants in the axisymmetric case are briefly discussed.
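
For context, the kind of inequality at stake can be written schematically as follows; the notation is ours, and the exact norms and constants of the paper are not reproduced.

```latex
% For a vector field u tangent to the boundary (u . n = 0 on the boundary), the
% symmetric part of the gradient controls the antisymmetric part:
\[
  \int_\Omega \bigl|\nabla^{\mathrm{sym}} u\bigr|^2 \, dx
  \;\ge\; K(\Omega) \int_\Omega \bigl|\nabla^{\mathrm{anti}} u\bigr|^2 \, dx,
  \qquad
  \nabla^{\mathrm{sym}} u = \tfrac12\bigl(\nabla u + \nabla u^{T}\bigr).
\]
% Such a constant K(\Omega) > 0 cannot exist for an axisymmetric domain: the rigid
% rotation about the symmetry axis is tangent to the boundary and has
% \nabla^{\mathrm{sym}} u = 0 while \nabla^{\mathrm{anti}} u \neq 0, which is why
% non-axisymmetry appears as the necessary and sufficient condition above.
```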

Journal ArticleDOI
TL;DR: In this paper, nonlinear waves in a channel flow forced by a bottom obstruction are considered; a weakly nonlinear analysis identifies two important values of the Froude number, the ratio of the upstream uniform velocity to the critical speed of shallow water waves.
Abstract: Nonlinear waves in a forced channel flow are considered. The forcing is due to a bottom obstruction. The study is restricted to steady flows. A weakly nonlinear analysis shows that for a given obstruction there are two important values of the Froude number, which is the ratio of the upstream uniform velocity to the critical speed of shallow water waves, FC>1 and FL<1, which organize the possible steady solutions: (i) the classical subcritical flows with a train of waves downstream; (ii) the hydraulic falls; (iii) when F>FC, there are two symmetric solitary waves sustained over the site of forcing, and at F=FC the two solitary waves merge into one; (iv) when F>FC, there is also a one-parameter family of solutions matching the upstream (supercritical) uniform flow with a cnoidal wave downstream; (v) for a particular value of F>FC, the downstream wave can be eliminated and the solution becomes a reversed hydraulic fall (it is the same as solution (ii), except that the flow is reversed!). Flows of type (iv), including the hydraulic fall (v) as a special case, are computed here using the full Euler equations. The problem is solved numerically by a boundary-integral-equation method due to Forbes and Schwartz. It is confirmed that there is a three-parameter family of solutions with a train of waves downstream. The three parameters can be chosen as the Froude number, the obstruction size and the wavelength of the downstream waves. This three-parameter family differs from the classical two-parameter family of subcritical flows (i) but includes as a particular case the hydraulic falls (ii) or equivalently (v) computed by Forbes.
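
For reference, the Froude number used throughout the abstract is the standard one (U the upstream uniform velocity, H the upstream depth, g the gravitational acceleration):

```latex
\[
  F = \frac{U}{\sqrt{gH}},
\]
% F < 1 is subcritical (the flow is slower than long shallow-water waves) and
% F > 1 is supercritical; the two thresholds of the weakly nonlinear analysis
% satisfy F_L < 1 < F_C and bracket the near-critical regime.
```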

Journal ArticleDOI
TL;DR: In this paper, an eight-node brick element is proposed that adjusts to the physical situation automatically, in the sense that the stabilizing forces are chosen to be proportional to the mean tangent modulus of the material across the thickness.

Journal ArticleDOI
TL;DR: In this article, the authors investigate what drives and what impedes Net customers' interaction with a marketer's websites and, more specifically, with the decision support interface systems (DSISs) employed in these sites to assist users when shopping or searching for high-involvement consumer goods.
Abstract: The understanding of consumer interaction with online EC Websites is one of the big current challenges for online marketers. The present paper investigates what drives and what impedes Net customers' interaction with a marketer's websites and, more specifically, with decision support interface systems (DSISs) employed in these sites to assist users when shopping or searching for high-involvement consumer goods. DSISs are considered as predecessors of, and partially already integrators of, the more elaborate agent systems that may be used in future EC environments. It is shown that the design of today's DSISs is sub-optimal for both marketers and consumers, because it fails to motivate user interaction. One reason for this might be that little effort has been made to transfer insights from consumer behavior to the design of EC user interfaces. The current paper aims to address this gap by proposing a number of new DSIS design principles, all of which are based on insights from traditional marketing search theory and perceived-risk theory. The main contribution of this paper is that it proposes a design approach for EC Websites that intuitively “makes the user model available to the user” [Shneiderman and Maes, 36].

Journal ArticleDOI
TL;DR: In this paper, the authors consider the effect of monetary union in a model with a significant role for financial market imperfections and introduce a financial accelerator into a stochastic general equilibrium macro model of a two country economy.
Abstract: In this paper, we consider the effect of a monetary union in a model with a significant role for financial market imperfections. We do so by introducing a financial accelerator into a stochastic general equilibrium macro model of a two-country economy. We show that financial market imperfections introduce important cross-country transmission mechanisms to asymmetric shocks to supply and demand. Within this framework, we study the likely costs and benefits of monetary union. We also consider the effects of cross-country heterogeneity in financial markets. Both the presence of financial frictions and the use of a single currency have significant impacts on the international propagation of exogenous shocks. The introduction of asymmetries in the financial contract widens the differences in cyclical behavior of national economies in a monetary union, but financial integration compensates for the loss of policy instruments.

Journal ArticleDOI
TL;DR: In this paper, a general method for coupling non-matching linear finite element meshes in transient dynamic analysis is described, based on Schur's dual formulation whose main advantage is to provide equilibrium as well as kinematic continuity throughout the interface.
Abstract: This paper describes a general method for coupling non-matching linear finite element meshes in transient dynamic analysis. We propose a method based on Schur's dual formulation whose main advantage is to provide equilibrium as well as kinematic continuity throughout the interface. The essence of our work lies in the particular discretization of the space of Lagrange multipliers and in the validation of the method through two- and three-dimensional static calculations as well as two-dimensional dynamic calculations. An example is also presented and the results are compared to those of the mortar method. Copyright © 2002 John Wiley & Sons, Ltd.
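
The dual (Lagrange-multiplier) coupling idea can be shown on the smallest possible example: two 1D bar subdomains glued at one interface node. Everything below (mesh sizes, load, unit stiffness) is hypothetical, the meshes actually match at the interface, and the paper's real subject, the choice of the multiplier space for non-matching meshes, is not reproduced; the sketch only shows the saddle-point system in which the multiplier carries the interface force.

```python
import numpy as np

def bar_stiffness(n_el, length):
    """Assembled stiffness of a 1D bar (unit EA) discretized with linear elements."""
    h = length / n_el
    K = np.zeros((n_el + 1, n_el + 1))
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += ke
    return K

n1, n2, F = 4, 7, 2.0                            # two subdomains of a clamped bar, end load F
K1 = bar_stiffness(n1, 1.0)[1:, 1:]              # drop the clamped dof at x = 0
K2 = bar_stiffness(n2, 1.0)                      # floating subdomain (no Dirichlet condition)
B1 = np.zeros((1, n1)); B1[0, -1] = 1.0          # + interface dof of subdomain 1
B2 = np.zeros((1, n2 + 1)); B2[0, 0] = -1.0      # - interface dof of subdomain 2

# dual Schur / saddle-point system: stiffness blocks plus the continuity constraint
A = np.block([[K1, np.zeros((n1, n2 + 1)), B1.T],
              [np.zeros((n2 + 1, n1)), K2, B2.T],
              [B1, B2, np.zeros((1, 1))]])
rhs = np.zeros(n1 + n2 + 2)
rhs[n1 + n2] = F                                 # load at the free end of subdomain 2

sol = np.linalg.solve(A, rhs)
print("tip displacement:", sol[n1 + n2], "  exact:", 2.0 * F)   # u(x) = F*x with EA = 1
print("interface force carried by the multiplier:", sol[-1])
```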

Journal ArticleDOI
TL;DR: It is observed that modified coumarin dimers containing a hydrophobic moiety on the linker display potent inhibitory activities against HIV-1 integrase.
Abstract: A systematic series of chemically modified coumarin dimers has been synthesized and tested for their inhibitory activity against HIV-1 integrase. We observed that modified coumarin dimers containing a hydrophobic moiety on the linker display potent inhibitory activities.

Journal ArticleDOI
TL;DR: An in-plane parallel arrangement and an enhancement of NLO activity are observed upon coordination of heteroditopic dipoles containing a phosphole ring to a square-planar d8-palladium centre.

Proceedings ArticleDOI
13 May 2002
TL;DR: The problem of establishing lower bounds for the estimation of deterministic parameters is revisited by means of a constrained optimization problem; the various lower bounds (Cramér-Rao, Barankin, Bhattacharyya) are easily obtained as the result of an optimization in which the bias of the estimator is imposed as a constraint.
Abstract: We have revisited and solved the problem of establishing lower bounds for the estimation of deterministic parameters by means of a constrained optimization problem. We show that these various bounds (Cramér-Rao, Barankin, Bhattacharyya) can be easily obtained as the result of an optimization in which the bias of the estimator is imposed as a constraint. Simulation results are presented for spectral analysis.
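
As a reminder of what such bounds assert, the snippet below checks the Cramér-Rao bound numerically in the simplest case, estimating the mean of Gaussian samples of known variance, where the sample mean attains the bound. The constrained-optimization derivation that unifies the Cramér-Rao, Bhattacharyya and Barankin bounds in the paper is not reproduced.

```python
import numpy as np

# Cramér-Rao bound for the mean of N i.i.d. Gaussian samples with known variance
# sigma^2: every unbiased estimator has variance >= sigma^2 / N, and the sample
# mean attains it.  Monte-Carlo check of this single, classical special case.
rng = np.random.default_rng(0)
mu, sigma, N, trials = 1.0, 2.0, 50, 20000

estimates = rng.normal(mu, sigma, size=(trials, N)).mean(axis=1)
print("Cramér-Rao bound       :", sigma**2 / N)
print("variance of sample mean:", estimates.var())   # ≈ the bound: the estimator is efficient
```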

Journal ArticleDOI
TL;DR: This work discusses the relations of Masnou's filter with other classes of connected operators introduced in the literature and displays some experiments showing the main properties of the filters under study.
Abstract: Motivated by operators simplifying the topographic map of a function, we study the theoretical properties of two kinds of “grain” filters. The first category, discovered by L. Vincent, defines grains as connected components of level sets and removes those of small area. This category is composed of two filters, the maxima filter and the minima filter. However, they do not commute. The second kind of filter, introduced by Masnou, works on “shapes”, which are based on connected components of level sets. This filter has the additional property that it acts in the same manner on upper and lower level sets, that is, it commutes with an inversion of contrast. We discuss the relations of Masnou's filter with other classes of connected operators introduced in the literature. We display some experiments to show the main properties of the filters discussed above and compare them.
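
The maxima filter described above admits a very direct, if inefficient, implementation by threshold decomposition: remove, at every grey level, the connected components of the upper level set whose area is below a threshold. The sketch below does exactly that on a synthetic image; it does not reproduce Masnou's shape-based, self-dual filter.

```python
import numpy as np
from scipy import ndimage

def maxima_grain_filter(u, min_area):
    """Area opening by threshold decomposition: connected components of every upper
    level set {u >= lam} with area < min_area are removed.  O(#grey levels), which
    is fine for a small 8-bit example."""
    out = np.full(u.shape, u.min(), dtype=u.dtype)
    for lam in np.unique(u):
        labels, n = ndimage.label(u >= lam)
        areas = ndimage.sum(np.ones_like(u), labels, index=np.arange(1, n + 1))
        keep = np.isin(labels, np.flatnonzero(areas >= min_area) + 1)
        out = np.where(keep, lam, out)               # keep only the large "grains"
    return out

# synthetic image: a large bright square survives, a 2x2 bright speck is removed
u = np.zeros((32, 32), dtype=np.uint8)
u[4:20, 4:20] = 200
u[26:28, 26:28] = 250
filtered = maxima_grain_filter(u, min_area=16)
print("speck  before/after:", u[26, 26], filtered[26, 26])    # 250 -> 0
print("square before/after:", u[10, 10], filtered[10, 10])    # 200 -> 200
```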

Journal ArticleDOI
TL;DR: The absolutely minimizing Lipschitz extension (AMLE) model is singled out as the simplest interpolation method satisfying a set of natural requirements, and a maximum principle is proven, which guarantees that the interpolation does not introduce the unnatural oscillations that are a major problem with many classical methods.
Abstract: Interpolation of digital elevation models becomes necessary in many situations, for instance when constructing them from contour lines (available, e.g., from non-digital cartography), or from disparity maps based on pairs of stereoscopic views, which often leaves large areas where point correspondences cannot be found reliably. The absolutely minimizing Lipschitz extension (AMLE) model is singled out as the simplest interpolation method satisfying a set of natural requirements. In particular, a maximum principle is proven, which guarantees that the interpolation does not introduce the unnatural oscillations that are a major problem with many classical methods. The authors then discuss the links between the AMLE and other existing methods. In particular, they show its relation with the geodesic distance transformation. They also relate the AMLE to the thin-plate method, which can be obtained by a prolongation of the axiomatic arguments leading to the AMLE and addresses the major disadvantage of the AMLE model, namely its inability to interpolate slopes as it does values. Nevertheless, in order to interpolate slopes, they have to give up the maximum principle and authorize the appearance of oscillations. They also discuss the possible link between the AMLE and Kriging methods, which are the most widely used in the geoscience literature.
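
A crude numerical illustration of AMLE-style interpolation: on the discrete grid, the infinity-Laplace equation underlying the AMLE can be approximated by repeatedly replacing each unknown pixel by the midrange (average of min and max) of its 4 neighbours, the known data staying fixed. The sketch below, with invented "contour line" data, is only meant to show the maximum principle at work (values stay between the data bounds); it is not the authors' implementation and ignores all efficiency concerns.

```python
import numpy as np

def amle_interpolate(values, known, n_iter=5000):
    """Fill unknown pixels of a height map by iterating u <- (min+max)/2 of the
    4-neighbours, a simple discretization of the infinity-Laplace equation."""
    u = values.astype(float).copy()
    u[~known] = values[known].mean()                  # rough initial fill
    for _ in range(n_iter):
        up    = np.pad(u, ((1, 0), (0, 0)), mode="edge")[:-1, :]
        down  = np.pad(u, ((0, 1), (0, 0)), mode="edge")[1:, :]
        left  = np.pad(u, ((0, 0), (1, 0)), mode="edge")[:, :-1]
        right = np.pad(u, ((0, 0), (0, 1)), mode="edge")[:, 1:]
        nbrs = np.stack([up, down, left, right])
        midrange = 0.5 * (nbrs.min(axis=0) + nbrs.max(axis=0))
        u = np.where(known, u, midrange)              # data points stay fixed
    return u

# toy "contour line" data: two known level curves, at heights 0 (left edge) and 1
# (right edge); the interpolant ramps monotonically between them, no over/undershoot
h, w = 40, 40
known = np.zeros((h, w), dtype=bool)
vals = np.zeros((h, w))
known[:, 0], vals[:, 0] = True, 0.0
known[:, -1], vals[:, -1] = True, 1.0
dem = amle_interpolate(vals, known)
print("height at the centre ≈", round(dem[h // 2, w // 2], 3))   # about 0.5
print("within data bounds:", bool(dem.min() >= 0.0 and dem.max() <= 1.0))
```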