
Showing papers by "Paris Dauphine University published in 2001"


Journal ArticleDOI
TL;DR: This paper shows, for sample ALE schemes, that satisfying the corresponding discrete geometric conservation law (DGCL) is a necessary and sufficient condition for a numerical scheme to preserve the nonlinear stability of its fixed-grid counterpart.

357 citations


Journal ArticleDOI
01 Aug 2001
TL;DR: This paper introduces two generalisations of rough set theory for incomplete information tables: the first uses a non-symmetric similarity relation to formalise the absent-value semantics, the second is based on valued tolerance relations; it is shown that the valued tolerance approach yields more informative approximations and decision rules.
Abstract: Rough set theory, based on the original definition of the indiscernibility relation, is not useful for analysing incomplete information tables where some values of attributes are unknown. In this paper we distinguish two different semantics for incomplete information: the "missing value" semantics and the "absent value" semantics. The already known approaches, e.g. those based on tolerance relations, deal with the missing-value case. We introduce two generalisations of rough set theory to handle these situations. The first generalisation introduces the use of a non-symmetric similarity relation in order to formalise the idea of absent-value semantics. The second proposal is based on the use of valued tolerance relations. A logical analysis and computational experiments show that the valued tolerance approach makes it possible to obtain more informative approximations and decision rules than the approach based on the simple tolerance relation.
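
A minimal sketch of the valued-tolerance idea (illustrative only: the uniform-probability scoring below is a common modelling choice, not necessarily the paper's exact formula, and all names are invented):

```python
# Sketch: valued tolerance relation on an incomplete information table.
# '*' marks an unknown attribute value. Unknown values are treated as
# uniformly distributed over the attribute's domain (illustrative choice).

def valued_tolerance(x, y, domains):
    """Degree in [0, 1] to which objects x and y may be indiscernible."""
    score = 1.0
    for a, (vx, vy) in enumerate(zip(x, y)):
        n = len(domains[a])
        if vx == '*' or vy == '*':
            score *= 1.0 / n      # P(the unknown value matches the other)
        elif vx != vy:
            return 0.0            # both known and different: incompatible
    return score

table = [('high', '*', 'yes'),
         ('high', 'low', 'yes'),
         ('low', 'low', '*')]
domains = [{'high', 'low'}, {'high', 'low'}, {'yes', 'no'}]

for i in range(len(table)):
    for j in range(i + 1, len(table)):
        print(i, j, valued_tolerance(table[i], table[j], domains))
```

Graded scores like these are what let the resulting approximations and decision rules carry a credibility degree instead of a crisp yes/no membership.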

354 citations


Journal ArticleDOI
TL;DR: Results showed that this tool is able to infer weights that restore the assignment examples in a stable way, and that it is able to identify "inconsistencies" in the assignment examples.

230 citations


Journal ArticleDOI
TL;DR: In this paper, a simple dynamic model dealing with the management of a marine renewable resource is presented; instead of studying the ecological and economic interactions in terms of equilibrium or optimal control, the authors pay particular attention to the viability of the system or, symmetrically, to crisis situations.

204 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyse the geographical agglomeration of corporate research and development activities through a disaggregated sectoral approach, focusing on those interfaces that are critical for the organisation of innovation-related activities, as well as on the degree of complexity of the knowledge base being mobilised.

127 citations


Journal ArticleDOI
TL;DR: In this paper, the existence of the thermodynamic limit for the energy per unit volume is proved for the reduced Hartree-Fock model, and a periodic problem associated to the Hartree-Fock model is defined and shown to be well-posed.
Abstract: We continue here our study [10–13] of the thermodynamic limit for various models of Quantum Chemistry, this time focusing on the Hartree–Fock type models. For the reduced Hartree–Fock models, we prove the existence of the thermodynamic limit for the energy per unit volume. We also define a periodic problem associated to the Hartree–Fock model, and prove that it is well-posed.
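
Schematically, the statement proved for the reduced model has the following shape (a generic rendering with illustrative notation, not the paper's exact theorem):

```latex
% E^{rHF}_\Lambda: reduced Hartree-Fock ground-state energy of the crystal
% restricted to a box \Lambda; |\Lambda|: its volume.
\lim_{|\Lambda| \to \infty} \frac{E^{\mathrm{rHF}}_{\Lambda}}{|\Lambda|}
  \;=\; \bar E^{\mathrm{rHF}}_{\mathrm{per}},
% where \bar E^{rHF}_{per} is the energy of the associated periodic problem.
```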

124 citations


Journal ArticleDOI
TL;DR: This paper provides a characterization of viability kernels and capture basins of a target viable in a constrained subset as a unique closed subset between the target and the constrained subset satisfying tangential conditions or, by duality, normal conditions.
Abstract: This paper provides a characterization of viability kernels and capture basins of a target viable in a constrained subset as a unique closed subset between the target and the constrained subset satisfying tangential conditions or, by duality, normal conditions. It is based on a method devised by Helene Frankowska for characterizing the value function of an optimal control problem as generalized (contingent or viscosity) solutions to Hamilton--Jacobi equations. These abstract results, interesting by themselves, can be applied to epigraphs of functions or graphs of maps and happen to be very efficient for solving other problems, such as stopping time problems, dynamical games, boundary-value problems for systems of partial differential equations, and impulse and hybrid control systems, which are the topics of other companion papers.
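
For reference, the objects being characterized are the standard ones of viability theory (schematic definitions; F is the set-valued dynamics, K the constraint set, C the target):

```latex
\mathrm{Viab}_K(F) \;=\; \bigl\{\, x_0 \in K \;:\; \exists\, x(\cdot)
   \text{ solving } x' \in F(x),\; x(0) = x_0,\;
   x(t) \in K \;\forall t \ge 0 \,\bigr\},
\qquad
\mathrm{Capt}_K(C) \;=\; \bigl\{\, x_0 \in K \;:\; \exists\, x(\cdot),\;
   \exists\, T \ge 0,\; x(T) \in C,\; x(t) \in K \text{ on } [0,T] \,\bigr\}.
% Tangential condition (roughly): the characterized set is the largest closed
% D between C and K with F(x) \cap T_D(x) \neq \emptyset on D \setminus C,
% where T_D(x) denotes the contingent cone to D at x.
```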

106 citations


Journal ArticleDOI
TL;DR: A new segmentation method is presented, based on new transformations the authors introduced in mathematical morphology; it relies on the search for a new class of regional maxima components of the image that correspond to the regions inside the drusen.
Abstract: Segmentation of bright blobs in an image is an important problem in computer vision and particularly in biomedical imaging. In retinal angiography, segmentation of drusen, a yellowish deposit located on the retina, is a serious challenge in proper diagnosis and prevention of further complications. Drusen extraction using classic segmentation methods does not lead to good results. We present a new segmentation method based on new transformations we introduced in mathematical morphology. It is based on the search for a new class of regional maxima components of the image. These maxima correspond to the regions inside the drusen. We present experimental results for drusen extraction using images containing examples having different types and shapes of drusen. We also apply our segmentation technique to two important cases of dynamic sequences of drusen images. The first case is for tracking the average gray level of a particular drusen in a sequence of angiographic images during a fluorescein exam. The second case is for registration and matching of two angiographic images from widely spaced exams in order to characterize the evolution of drusen.
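
The paper's transformations are its own, but the family they extend (regional and h-maxima extraction by morphological reconstruction) can be sketched with off-the-shelf tools; this generic h-maxima example is illustrative, not the authors' operator:

```python
# Generic h-maxima extraction: keep only regional maxima whose height
# above their surroundings is at least h.
import numpy as np
from skimage.morphology import h_maxima

img = np.zeros((64, 64))
img[10:16, 10:16] = 0.9        # strong bright blob (height 0.9)
img[40:46, 40:46] = 0.3        # weak bright blob (height 0.3)

mask = h_maxima(img, 0.5)      # binary mask of maxima with height >= 0.5
print(int(mask.sum()))         # 36: only the 6x6 strong blob survives
```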

96 citations


Journal ArticleDOI
TL;DR: In this article, the short-term effect of store-level promotions (weekly flyers, radio and outdoor advertising) on grocery store choice is investigated, and the effect of individual variables (involvement toward shopping, attitude toward the purchase of products on promotion, search for promotional information) is clearly validated.

89 citations


Journal ArticleDOI
TL;DR: In this paper, the uniqueness of mild solutions and very weak solutions of the Navier-Stokes equations is proved in C([0,T); L^N(Ω)), where Ω is the whole space R^N, a regular domain of R^N or the torus T^N, with ...
Abstract: We prove the uniqueness of mild solutions and very weak solutions of the Navier-Stokes equations in C([0,T); L^N(Ω)), where Ω is the whole space R^N, a regular domain of R^N or the torus T^N, with ...
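
For context, the system in question in its standard incompressible form, with the critical function-space setting stated in the abstract:

```latex
\partial_t u - \Delta u + (u \cdot \nabla) u + \nabla p = 0,
\qquad \nabla \cdot u = 0 \quad \text{in } \Omega \times (0,T),
% uniqueness holding in the critical class
u \in C\bigl([0,T);\, L^N(\Omega)\bigr), \qquad
\Omega \in \bigl\{\mathbb{R}^N,\ \text{a regular domain of } \mathbb{R}^N,\ \mathbb{T}^N\bigr\}.
```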

86 citations


Journal ArticleDOI
TL;DR: The capability of the approach to close contours is illustrated with examples on various images of sets of edge points of shapes with missing contours.
Abstract: We address the problem of finding a set of contour curves in an image. We consider the problem of perceptual grouping and contour completion, where the data is a set of points in the image. A new method to find complete curves from a set of contours or edge points is presented. Our approach is based on a previous work on finding contours as minimal paths between two end points using the fast marching algorithm (L. D. Cohen and R. Kimmel, International Journal of Computer Vision, Vol. 24, No. 1, pp. 57–78, 1997). Given a set of key points, we find the pairs of points that have to be linked and the paths that join them. We use the saddle points of the minimal action map. The paths are obtained by backpropagation from the saddle points to both points of each pair. In a second part, we propose a scheme that does not need key points for initialization. A set of key points is automatically selected from a larger set of admissible points. At the same time, saddle points between pairs of key points are extracted. Next, paths are drawn on the image and give the minimal paths between selected pairs of points. The set of minimal paths completes the initial set of contours and allows us to close them. We illustrate the capability of our approach to close contours with examples on various images of sets of edge points of shapes with missing contours.
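
The minimal-path step can be sketched compactly if Dijkstra's algorithm on a pixel grid is substituted for the fast marching solver (a deliberate simplification: fast marching solves the continuous eikonal equation, Dijkstra a discrete analogue; function names and the toy cost map are invented):

```python
# Minimal action map by Dijkstra on a 4-connected grid, plus backtracking.
import heapq
import numpy as np

def minimal_action(cost, seed):
    """Distance map U from seed, edge weight = cost of the entered pixel."""
    U = np.full(cost.shape, np.inf)
    U[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > U[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < cost.shape[0] and 0 <= nj < cost.shape[1]:
                nd = d + cost[ni, nj]
                if nd < U[ni, nj]:
                    U[ni, nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return U

def backtrack(U, end):
    """Steepest descent on U from end back to the seed: the minimal path."""
    path = [end]
    while U[path[-1]] > 0:
        i, j = path[-1]
        nbrs = [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < U.shape[0] and 0 <= j + dj < U.shape[1]]
        path.append(min(nbrs, key=lambda q: U[q]))
    return path

cost = np.ones((20, 20))
cost[10, :] = 0.1                      # a low-cost "contour" along row 10
U = minimal_action(cost, (10, 0))
print(backtrack(U, (10, 19))[:5])      # the path hugs row 10
```

In the paper the same two ingredients appear in continuous form: the minimal action map is computed by fast marching, and the saddle points of that map decide which pairs of key points get linked.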

Journal ArticleDOI
TL;DR: An extensive experimental study on several well-known data sets was performed, comparing two different approaches: the popular rough-set-based rule induction algorithm LEM2, which generates classification rules, and the authors' own algorithm Explore, designed specifically for the discovery perspective.
Abstract: This paper discusses induction of decision rules from data tables representing information about a set of objects described by a set of attributes. If the input data contains inconsistencies, rough sets theory can be used to handle them. The most popular perspectives of rule induction are classification and knowledge discovery. The evaluation of decision rules is quite different depending on the perspective. Criteria for evaluating the quality of a set of rules are presented and discussed. The degree of conflict and the possibility of achieving a satisfying compromise between criteria relevant to classification and criteria relevant to discovery are then analyzed. For this purpose, we performed an extensive experimental study on several well-known data sets where we compared two different approaches: (1) the popular rough set based rule induction algorithm LEM2 generating classification rules, (2) our own algorithm Explore, designed specifically for the discovery perspective.

Journal ArticleDOI
TL;DR: This paper provides a mathematical framework for iterated translation-invariant wavelet shrinkage, and shows that with orthogonal wavelets it is equivalent to gradient descent in L_2(I) along the semi-norm for the Besov space B^1_1(L_1(I)), which, in turn, can be interpreted as a new nonlinear wavelet-based image smoothing scale space.
Abstract: Coifman and Donoho (1995) suggested translation-invariant wavelet shrinkage as a way to remove noise from images. Basically, their technique applies wavelet shrinkage to a two-dimensional (2-D) version of the semi-discrete wavelet representation of Mallat and Zhong (1992). Coifman and Donoho also showed how the method could be implemented in O(N log N) operations, where there are N pixels. In this paper, we provide a mathematical framework for iterated translation-invariant wavelet shrinkage, and show, using a theorem of Kato and Masuda (1978), that with orthogonal wavelets it is equivalent to gradient descent in L_2(I) along the semi-norm for the Besov space B^1_1(L_1(I)), which, in turn, can be interpreted as a new nonlinear wavelet-based image smoothing scale space. Unlike many other scale spaces, the characterization is not in terms of a nonlinear partial differential equation.
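
A single shrinkage step of the translation-invariant kind can be sketched with PyWavelets' stationary wavelet transform (a plain SWT stand-in, not the semi-discrete Mallat and Zhong representation used in the paper; wavelet, level and threshold are arbitrary choices):

```python
# One translation-invariant soft-shrinkage step via the stationary WT.
import numpy as np
import pywt

def ti_shrink(img, wavelet='haar', level=2, t=0.1):
    coeffs = pywt.swt2(img, wavelet, level=level)
    shrunk = [(cA, tuple(pywt.threshold(d, t, mode='soft') for d in det))
              for cA, det in coeffs]          # shrink detail bands only
    return pywt.iswt2(shrunk, wavelet)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

once = ti_shrink(noisy)
# Iterating this step is what the paper interprets as gradient descent
# along the Besov semi-norm, i.e. a nonlinear smoothing scale space.
print(round(float(np.abs(noisy - clean).mean()), 3),
      round(float(np.abs(once - clean).mean()), 3))
```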

Journal ArticleDOI
TL;DR: This work addresses the theoretical problems of optical flow estimation and image registration in a multi-scale framework in any dimension, introducing a local rigidity hypothesis on the unknown deformation and deducing a new natural motion constraint equation (MCE) at each scale using the Dirichlet low-pass operator.
Abstract: We address the theoretical problems of optical flow estimation and image registration in a multi-scale framework in any dimension. Much work has been done based on the minimization of a distance between a first image and a second image after applying a deformation or motion field. Usually, no justification is given for the convergence of the algorithm used. We start by showing, in the translation case, that convergence to the global minimum is made easier by applying a low-pass filter to the images, hence making the energy "convex enough". In order to keep convergence to the global minimum in the general case, we introduce a local rigidity hypothesis on the unknown deformation. We then deduce a new natural motion constraint equation (MCE) at each scale using the Dirichlet low-pass operator. This transforms the problem into solving the energy minimization in a finite-dimensional subspace of approximation obtained through Fourier decomposition. This allows us to derive sufficient conditions for convergence of a new multi-scale and iterative motion estimation/registration scheme towards a global minimum of the usual nonlinear energy, rather than a local minimum as in all previous methods. Although some of the sufficient conditions cannot always be fulfilled, because of the absence of the necessary a priori knowledge on the motion, we use an implicit approach. We illustrate our method by showing results on synthetic and real examples in dimension 1 (signal matching, stereo) and 2 (motion, registration, morphing), including large-deformation experiments.
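
The convexifying role of the low-pass filter is easiest to see in the 1-D translation case mentioned above. In this toy sketch, Gaussian smoothing stands in for the paper's Dirichlet low-pass operator and a standard Gauss-Newton update replaces its scheme; all parameters are illustrative:

```python
# Coarse-to-fine 1-D shift estimation: smooth strongly, solve, refine.
import numpy as np
from scipy.ndimage import gaussian_filter1d, shift as nd_shift

def estimate_shift(f, g, sigmas=(16, 8, 4, 2, 1), iters=20):
    """Minimize ||f(. - a) - g||^2 over the shift a, coarse to fine."""
    a = 0.0
    for sigma in sigmas:
        fs = gaussian_filter1d(f, sigma)
        gs = gaussian_filter1d(g, sigma)
        dfs = np.gradient(fs)
        for _ in range(iters):
            r = nd_shift(fs, a, order=1) - gs     # residual fs(x-a) - gs(x)
            dwarp = nd_shift(dfs, a, order=1)     # fs'(x-a)
            a += np.sum(dwarp * r) / (np.sum(dwarp * dwarp) + 1e-12)
    return a

x = np.linspace(0, 1, 512)
f = np.exp(-(x - 0.5) ** 2 / 0.02) * np.sin(12 * np.pi * x)
g = nd_shift(f, 7.3, order=1)          # ground-truth shift: 7.3 pixels
print(round(estimate_shift(f, g), 2))  # should land close to 7.3
```

Heavy smoothing first puts the energy in a near-unimodal ("convex enough") regime; the finer scales then refine the estimate without leaving its basin.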

Journal ArticleDOI
TL;DR: In this article, the existence of at least two nontrivial homoclinic orbits for a class of second order autonomous Hamiltonian systems was obtained by a new variational method based on the relative category.
Abstract: In this paper, we obtain the existence of at least two nontrivial homoclinic orbits for a class of second order autonomous Hamiltonian systems. This multiplicity result is obtained by a new variational method based on the relative category: to overcome the lack of compactness of the problem, we first solve perturbed nonautonomous problems and study the limit of the solutions as the nonautonomous perturbation goes to 0. This method allows us to remove some assumptions on the potential used in the work of Ambrosetti and Coti-Zelati.

Posted Content
01 Jan 2001
TL;DR: In this paper, the authors analyse the various definitions of models, propose a synthesis of the functions a model can handle, define the concept of "implementation", and progressively shift from a traditional "design then implementation" standpoint to a more general theory of model design/implementation, seen as a cross-construction process between the model and the organisation in which it is implemented.
Abstract: “Why are so many models designed and so few used” is a question often discussed within the Operational Research (OR) community. The formulation of the question seems simple, but the concepts and theories that must be mobilised to give it an answer are far more sophisticated. Would there be a selection process from “many models designed” to “few models used” and, if so, which particular properties do the “happy few” have? This paper first analyses the various definitions of “models” presented in the OR literature and proposes a synthesis of the functions a model can handle. Then, the concept of “implementation” is defined, and we progressively shift from a traditional “design then implementation” standpoint to a more general theory of model design/implementation, seen as a cross-construction process between the model and the organisation in which it is implemented. The organisation is, consequently, considered not as a simple context but as an active component in the design of models. This leads logically to the proposal of six models of model implementation: the technocratic model, the political model, the managerial model, the self-learning model, the conquest model and the experimental model.

Journal ArticleDOI
TL;DR: This paper first analyses the various definitions of “models” presented in the OR literature and proposes a synthesis of the functions a model can handle, then progressively shifts from a traditional “design then implementation” standpoint to a more general theory of model design/implementation.

Journal ArticleDOI
TL;DR: In this article, tax credit policy in the French bio-fuel industry producing ethanol and esters is determined using mathematical programming and multiple-criteria procedures; the best compromise solution corresponds to tax exemptions of about 2 FF/l [FF: French Franc (1 euro equivalent to 6.559 FF)] for ester and 3 FF/l for ethanol.
Abstract: Decision making to determine government support policy for the agro-energy industry can be assisted by mathematical programming and multiple-criteria procedures. In this case study, tax credit policy in the French bio-fuel industry producing ethanol and esters is determined. Micro-economic models simulate the agricultural sector and the bio-fuel industry through multi-level mixed integer linear programming. Aggregate supply of energy crops at the national level is estimated using a staircase model of 450 individual farm sub-models specialising in arable cropping. The government acts as a leader, since bio-fuel chains depend on subsidies. The model provides rational responses of the industry, taking into account the energy crops’ supply, to any public policy scheme (unitary tax exemptions for bio-fuels subject to budgetary constraints), as well as the performance of each response regarding total greenhouse gas (GHG) emissions, budgetary expenditure and agents’ surpluses. Budgetary, environmental and social concerns will affect policy decisions, and a multi-criteria optimisation module projects the decision maker’s aims onto the closest feasible compromise solutions. When public expenditure is the first priority, the best compromise solution corresponds to tax exemptions of about 2 FF/l [FF: French Franc (1 euro equivalent to 6.559 FF)] for ester and 3 FF/l for ethanol (current tax exemptions amount to 2.30 FF/l for ester and 3.30 FF/l for ethanol). On the other hand, a priority on the reduction of GHG emissions requires an increase of the ester volume produced at the expense of ethanol production (2.30 FF/l for both the ester and ethanol chains proposed by the model).
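
The flavour of the optimisation can be conveyed by a toy single-level stand-in (the paper's model is a multi-level mixed-integer program over 450 farm sub-models; every number below is made up except the exemption levels quoted above):

```python
# Toy policy LP: choose subsidised ester/ethanol volumes to maximise GHG
# savings under a budget cap on total tax exemptions.
from scipy.optimize import linprog

exemption = [2.0, 3.0]     # FF per litre (ester, ethanol), cf. the abstract
ghg_saving = [2.2, 1.5]    # kg CO2-eq saved per litre (invented)
budget = 900.0             # million FF available for exemptions (invented)
capacity = [300.0, 250.0]  # chain capacities, million litres (invented)

res = linprog(c=[-s for s in ghg_saving],      # maximise total savings
              A_ub=[exemption], b_ub=[budget],
              bounds=list(zip([0.0, 0.0], capacity)))
print(res.x, -res.fun)     # -> volumes [300., 100.], savings 810.0
```

Consistent with the abstract, a GHG-driven objective pushes the ester chain to capacity first, since it saves more per subsidised franc.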

Journal ArticleDOI
TL;DR: In this article, the authors deal with the numerical analysis of some algorithms for the simulation of the interaction between a fluid and a structure in the case where the deformation of the structure induces an evolution in the fluid domain.
Abstract: This paper deals with the numerical analysis of some algorithms for the simulation of the interaction between a fluid and a structure in the case where the deformation of the structure induces an evolution of the fluid domain. This is a highly nonlinear problem; time-decoupling algorithms for it are numerous and not completely understood. We study three of them for a one-dimensional representative problem. We prove that the considered algorithms are stable, and that one of them is convergent.
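
To make "time decoupling" concrete, here is a toy partitioned (staggered) scheme on a piston-like analogue: the "fluid" is reduced to a single chamber pressure and the "structure" to a spring-mass, each advanced in turn with the other frozen. This illustrates the algorithmic idea only; it is not the model problem analysed in the paper:

```python
# Staggered time stepping for a toy coupled system:
#   fluid:      dp/dt = -c v          (pressure responds to piston velocity)
#   structure:  m dv/dt = -k x + A p  (spring-mass loaded by pressure)
#   kinematics: dx/dt = v
# The exact dynamics conserve E = m v^2/2 + k x^2/2 + (A/c) p^2/2,
# so the discrete energy drift measures the cost of the splitting.
m, k, A, c = 1.0, 4.0, 0.5, 2.0
dt, n = 0.01, 2000
x, v, p = 1.0, 0.0, 0.0
E = lambda: 0.5 * m * v * v + 0.5 * k * x * x + 0.5 * (A / c) * p * p
E0 = E()
for _ in range(n):
    p -= dt * c * v                    # 1) advance fluid, structure frozen
    v += dt * (-k * x + A * p) / m     # 2) advance structure, fluid frozen
    x += dt * v                        #    (semi-implicit update of x)
print(round(E0, 4), round(E(), 4))     # drift quantifies the splitting error
```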

Proceedings ArticleDOI
13 Jul 2001
TL;DR: A new method to find complete curves from a set of contours or edge points is presented, based on a previous work on finding contours as minimal paths between two end points using the fast marching algorithm.
Abstract: We present a new approach for finding a set of contour curves in an image. We consider the problem of perceptual grouping and contour completion, where the data is a set of points in the image. A new method to find complete curves from a set of contours or edge points is presented. Our approach is based on a previous work on finding contours as minimal paths between two end points using the fast marching algorithm (Cohen et al., 1997). Given a set of key points, we find the pairs of points that have to be linked. The paths that join them complete the initial set of contours and allow us to close them. In a second part, we propose a scheme that does not need key points for initialization. Key points are automatically selected from a larger set of admissible points. We illustrate the capability of our approach to close contours with synthetic examples.

Book ChapterDOI
TL;DR: The aim of the MAS is 1) to diagnose problems in the bus lines and 2) to detect inconsistencies in the positioning data sent by buses to the central operator.
Abstract: In this paper, a multi-agent system (MAS) for bus transportation management is presented. The aim of our MAS is 1) to diagnose problems in the bus lines (bus delays, buses running ahead of schedule, ...) and 2) to detect inconsistencies in the positioning data sent by buses to the central operator. Our MAS behaves as a Multi-Agent Decision Support System (MADSS) used by human regulators in order to manage bus lines. In our model, buses and stops are modeled as autonomous agents that cooperate to detect faults (disturbances) in the transportation network. An original interaction model called ESAC (Environment as Active Communication Support) was designed to allow non-intentional as well as direct communication. The system was implemented using ILOG RULES and was tested on data coming from the Brussels bus transportation network (STIB).
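
A minimal sketch of the stop-agent diagnosis idea (all class names, thresholds and data are invented; the real system is rule-based, built on ILOG RULES with the ESAC interaction model):

```python
# Stop agents compare scheduled and observed passage times and flag faults.
from dataclasses import dataclass

@dataclass
class StopAgent:
    stop_id: str
    tolerance: float = 2.0    # minutes before a deviation counts as a fault

    def diagnose(self, scheduled: float, observed: float) -> str:
        delta = observed - scheduled
        if delta > self.tolerance:
            return f"{self.stop_id}: bus delayed by {delta:.1f} min"
        if delta < -self.tolerance:
            return f"{self.stop_id}: bus ahead by {-delta:.1f} min"
        return f"{self.stop_id}: on time"

# Cooperation between agents exposes inconsistent positioning data:
# the same bus cannot plausibly be late at one stop and early at the next.
s17, s18 = StopAgent("Stop-17"), StopAgent("Stop-18")
print(s17.diagnose(scheduled=10.0, observed=15.2))
print(s18.diagnose(scheduled=14.0, observed=10.1))
```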

Journal ArticleDOI
TL;DR: In this article, it is shown that if the sequence of reinitialized states of a run converges to some state, then the run converges to a cadenced run starting from this state, and that, under convexity assumptions, a cadenced run does exist.
Abstract: Impulse differential inclusions, and in particular, hybrid control systems, are defined by a differential inclusion (or a control system) and a reset map. A run of an impulse differential inclusion is defined by a sequence of cadences, of reinitialized states and of motives describing the evolution along a given cadence between two distinct consecutive impulse times, the value of a motive at the end of a cadence being reset as the next reinitialized state of the next cadence. A cadenced run is then defined by a constant cadence, initial state and motive, where the value at the end of the cadence is reset at the same reinitialized state. It plays the role of a ‘discontinuous’ periodic solution of a differential inclusion. We prove that if the sequence of reinitialized states of a run converges to some state, then the run converges to a cadenced run starting from this state, and that, under convexity assumptions, a cadenced run does exist. Copyright © 2001 John Wiley & Sons, Ltd.
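
In schematic notation (simplified from the paper's), a run and a cadenced run look as follows:

```latex
% Run: cadences \tau_k, motives x_k(\cdot), reset map R.
\dot x_k(t) \in F\bigl(x_k(t)\bigr) \ \text{on } [0, \tau_k],
\qquad
x_{k+1}(0) \in R\bigl(x_k(\tau_k)\bigr).
% Cadenced run: constant cadence and a motive reset onto itself,
\tau_k \equiv \tau, \qquad x(0) \in R\bigl(x(\tau)\bigr),
% i.e. a 'discontinuous periodic solution' of the inclusion.
```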

Journal ArticleDOI
TL;DR: This work allows us to understand some of the problems generated by the application of the standards to COTS evaluations, and to propose new principles for evaluating software quality that should be considered in an evolution of the standards.
Abstract: Industrial evaluations of COTS software have largely used the quality models provided by the international standards. But the context and objectives of COTS evaluations are fundamentally different from those primarily defined by the standards. Several key issues are often forgotten: (1) the existence of several evaluators and several quality models sharing common factors, criteria and measures, (2) the purpose of the evaluation model, (3) measures of different types, and (4) the recursive nature of the model, since each node is an evaluation model itself. We had the occasion to study the results of real standard-based COTS evaluations. Faced with the difficulty of exploiting them, we experimented with the use of a multi-criteria methodology. This work allowed us to understand some of the problems generated by the application of the standards to COTS evaluations, and to propose new principles for evaluating software quality that should be considered in an evolution of the standards. This paper reports our experiment.

Journal ArticleDOI
TL;DR: The model illustrates the extent to which non-impairing sleep medications could reduce the burden posed by motor vehicle accidents, and it can serve as the basis for similar investigations.
Abstract: Background: Although various prescription drugs may be equally effective in promoting sleep, some may lead to substantial impairment in psychomotor functioning and an increased risk of motor vehicle accidents. Objective: To develop a general model to evaluate the potential effects of sleep medications on motor vehicle accidents and costs, and apply the model to the French setting. Methods: Impairment in driving performance, as evaluated by randomised controlled open-road studies using the standard deviation of a vehicle’s lateral position (SDLP), a measure of weaving, was expressed in terms of equivalent blood alcohol (ethanol) concentration (BAC). Epidemiological data were then used to relate BAC to the excess risk of motor vehicle accidents. Although a non-impairing medication would not increase risk, a medication that produces mild impairment in driving performance (a change of 2.5 cm in SDLP, equivalent to 0.05% BAC) would increase motor vehicle accident risk by 25%. One that leads to moderate impairment (an SDLP change of 4.5 cm, equivalent to 0.08% BAC) would roughly double this risk, and a severely impairing medication (an SDLP change of 7 cm, equivalent to 0.12% BAC) would result in up to a 5-fold increase in motor vehicle accident risk. For application to the French setting, a hypothetical cohort of 100 000 adult drivers with insomnia was assumed to be treated for 14 days either with zaleplon 10 mg, a new sleep medication that has been shown not to significantly impair driving performance, or zopiclone 7.5 mg, which has been shown to cause moderate impairment. Results: Compared with zaleplon, use of zopiclone over 14 days in France would be expected to result in 503 excess accidents per 100 000 drivers at an additional cost of 158 French francs (31 US dollars) per person (1996 values). Conclusions: Our model illustrates the extent to which non-impairing sleep medications could reduce the burden posed by motor vehicle accidents. Our model is designed to be general, and thus can serve as the basis for similar investigations.
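
The model's headline arithmetic can be reproduced schematically; the baseline 14-day accident probability below is a placeholder, not the study's calibrated French input:

```python
# Excess accidents = cohort x baseline risk x (relative risk - 1).
def excess_accidents(cohort, baseline_risk, relative_risk):
    return cohort * baseline_risk * (relative_risk - 1.0)

cohort = 100_000       # drivers with insomnia treated for 14 days
baseline = 0.005       # assumed 14-day accident probability (placeholder)
rr_moderate = 2.0      # moderate impairment roughly doubles risk (abstract)

print(excess_accidents(cohort, baseline, rr_moderate))   # -> 500.0
# With this placeholder baseline the result is of the same order as the
# 503 excess accidents reported for zopiclone versus zaleplon.
```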

Journal Article
TL;DR: The various therapeutic approaches used and their respective costs were explored; the predominance of compression therapy did not preclude the use of a variety of other therapeutic methods depending on the clinical and demographic situation of the patient.
Abstract: Objectives: The purpose of this study was to better ascertain how French physicians manage venous ulcers of the lower limbs. We explored the various therapeutic approaches used and their respective costs. Particular attention was focused on dressing prescriptions. Material and methods: A prospective medicoeconomic study was conducted. Eight hundred general practitioners and specialists throughout France were included and followed two patients each, one with a "new ulcer" (less than two weeks) and another with a "longstanding ulcer" (more than six weeks). Patients were followed to healing or for up to six months. An observation chart was completed at each visit. Data collected were characteristics of the ulcer at inclusion, assessment of the clinical course, and the nature and the volume of medical care prescribed. Corresponding costs (total cost for society) were calculated on the basis of 1996 public prices for drugs and the French national health insurance quotations for ambulatory care. For hospital care, cost was calculated from the cost of stay for homogeneous patient groups. Results: Files established for 1,098 patients by 652 physicians could be assessed. Elderly female patients predominated in this population (mean age 72 years, 74% women). The length of the ulcer at inclusion was significantly correlated with its duration: 2.82 cm for new ulcers (52.6% of the cases) versus 5.03 cm for longstanding ulcers (47.3%). The mean number of consultations for all patients was 4.8 over a 29-day period. The mean cost resulting from these consultations was 5,827 FF per patient: 48% for care, 33% for drugs, 16% for hospitalizations, and 3% for work lay-off. Cure was achieved in 77% of the cases within a mean of 3 months. Older ulcers were significantly associated with longer treatment (117 days for longstanding ulcers versus 80 days for new ulcers), a lower cure rate (67% versus 86%) and higher cost (7,078 FF versus 4,669 FF). Dividing care methods according to whether or not cleansing was combined with compression showed that compression was prescribed in 76% of the cases at the inclusion consultation. This predominance of compression therapy did not preclude use of a variety of other therapeutic methods depending on the clinical and demographic situation of the patient. Cost varied accordingly, with a mean ranging from 3,160 FF to 6,697 FF depending on the therapeutic approach. The study also focused on the type and amount of dressings used. No dressings were prescribed for 56 patients in this series; it can be hypothesized that these patients already had dressings. Different indicators show that the absence of prescriptions for dressings concerned less severe and less costly ulcers (4,130 FF versus 5,918 FF for those with dressing prescriptions). Among the 1,042 patients for whom dressings were prescribed, 35% were for occlusive dressings, 29% for ointment dressings and 24% for both occlusive and ointment dressings. The type was not specified in 55% of the cases. Mean cost for these different categories ranged from 4,921 to 7,019 FF.

01 Jan 2001
TL;DR: The existence of a calibration, given a minimizer of the Mumford-Shah functional, remains an open problem; this paper introduces a general framework for the study of this problem.
Abstract: G. Alberti, G. Bouchitte and G. Dal Maso [The calibration method for the Mumford-Shah functional, C. R. Acad. Sci. Paris 329, Serie I (1999) 249--254] recently found sufficient conditions for the minimizers of the (nonconvex) Mumford-Shah functional. Their method consists in an extension of the calibration method (that is used for the characterization of minimal surfaces), adapted to this functional. The existence of a calibration, given a minimizer of the functional, remains an open problem. We introduce in this paper a general framework for the study of this problem. We first observe that, roughly, the minimization of any functional of a scalar function can be achieved by minimizing a convex functional, in higher dimension. Although this principle is in general too vague, in some situations, including the Mumford-Shah case in dimension one, it can be made more precise and leads to the conclusion that for every minimizer, the calibration exists -- although, still, in a very weak (asymptotical) sense.
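
The "convex functional in higher dimension" can be written in the standard calibration format (a schematic statement of the general lifting idea, not the paper's precise framework):

```latex
% 1_u: characteristic function of the subgraph \{(x,t) : t < u(x)\}.
F(u) \;=\; \sup_{\varphi \in K} \int_{\Omega \times \mathbb{R}}
           \varphi \cdot D\mathbf{1}_u ,
% so minimizing F in u becomes minimizing a functional that is linear in
% 1_u; a calibration is a divergence-free field \varphi \in K attaining
% the supremum at a given u, which certifies that u is a minimizer.
```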

Book ChapterDOI
28 Mar 2001
TL;DR: This paper characterizes the viability property of a closed subset of paths under an impulse path-dependent differential inclusion, using the Viability Theorems for path-dependent differential inclusions.
Abstract: Path-dependent impulse differential inclusions, and in particular, path-dependent hybrid control systems, are defined by a path-dependent differential inclusion (or path-dependent control system, i.e., a differential inclusion or control system with memory) and a path-dependent reset map. In this paper, we characterize the viability property of a closed subset of paths under an impulse path-dependent differential inclusion using the Viability Theorems for path-dependent differential inclusions. Actually, one of the characterizations of the Characterization Theorem is valid for any general impulse evolutionary system, which we shall define in this paper.

Book ChapterDOI
28 Mar 2001
TL;DR: The behavior of the run of an impulse differential inclusion, and, in particular, of a hybrid control system, is "summarized" by the "initialization map" associating with each initial condition the set of new initialized conditions and more generally, by its "substratum".
Abstract: The behavior of the run of an impulse differential inclusion, and, in particular, of a hybrid control system, is "summarized" by the "initialization map", associating with each initial condition the set of new initialized conditions, and more generally, by its "substratum", that is, a set-valued map associating with a cadence and a state the next reinitialized state. These maps are characterized in several ways, and in particular, as "set-valued" solutions of a system of Hamilton-Jacobi partial differential inclusions, which play the same role as the usual Hamilton-Jacobi-Bellman equations in optimal control.

Journal ArticleDOI
TL;DR: In this paper, a reading of the privatization process through corporate governance theory resulted in the development of a model that takes into account, on the one hand, the temporal dimension of the privatization process and, on the other hand, the contextual, organizational, governance and strategic variables that influence this process.
Abstract: The French privatizations program is one of the principal programs worldwide regarding the volume of equity issues. A reading of the privatization process through corporate governance theory resulted in the development of a model that takes into account, on the one hand, the temporal dimension of the privatization process and, on the other hand, the contextual, organizational, governance and strategic variables that influence this process. After having replicated a certain number of traditional tests, we tested this model on a sample of 19 French privatized firms over a seven-year horizon, which resulted in the following conclusions. The favorable impact traditionally attributed to privatizations was not truly confirmed for French privatizations, at least over the considered horizon. Privatization induces a significant positive effect on performance for only a small number of firms. The magnitude of the effect, moreover, depends on some of the suggested variables.

Journal ArticleDOI
TL;DR: In this article, the authors present work aiming to establish the link, for crystalline materials, between microscopic theories modelling atomic interactions and a macroscopic description of their mechanical properties.
Abstract: In this Note we present work aiming to establish the link, for crystalline materials, between microscopic theories modelling atomic interactions and a macroscopic description of their mechanical properties. This link appears as an asymptotic limit in the interatomic distance of the crystal under consideration. Existing macroscopic models are thereby justified, and (apparently) new models are introduced and studied.