Institution
Paris Dauphine University
Education • Paris, France
About: Paris Dauphine University is an education organization based in Paris, France. It is known for its research contributions in the topics: Population & Approximation algorithm. The organization has 1766 authors who have published 6909 publications receiving 162747 citations. The organization is also known as: Paris Dauphine & Dauphine.
Topics: Population, Approximation algorithm, Bounded function, Parameterized complexity, Time complexity
Papers published on a yearly basis
Papers
TL;DR: In this article, the authors present several techniques for accelerating the convergence of Markov Chain Monte Carlo (MCMC) algorithms, either at the exploration level (tempering, Hamiltonian Monte Carlo, partly deterministic methods) or at the exploitation level (Rao-Blackwellization and scalable methods).
Abstract: Markov chain Monte Carlo algorithms are used to simulate from complex statistical distributions by way of a local exploration of these distributions. This local feature avoids heavy requests on understanding the nature of the target, but it also potentially induces a lengthy exploration of this target, with a requirement on the number of simulations that grows with the dimension of the problem and with the complexity of the data behind it. Several techniques are available toward accelerating the convergence of these Monte Carlo algorithms, either at the exploration level (as in tempering, Hamiltonian Monte Carlo and partly deterministic methods) or at the exploitation level (with Rao-Blackwellization and scalable methods). This article is categorized under: Statistical and Graphical Methods of Data Analysis > Markov Chain Monte Carlo (MCMC); Algorithms and Computational Methods > Algorithms; Statistical and Graphical Methods of Data Analysis > Monte Carlo Methods.
85 citations
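The "local exploration" idea the abstract describes can be sketched with a minimal random-walk Metropolis sampler; the standard normal target, the step size and the `metropolis` helper below are illustrative choices, not code from the article.

```python
import math
import random

def metropolis(target_logpdf, n_iter=50_000, step=1.0, seed=0):
    """Random-walk Metropolis: explore the target locally, one step at a time.

    Each move depends only on the current state and a density ratio,
    which is the "local exploration" the abstract refers to.
    """
    rng = random.Random(seed)
    x = 0.0
    chain = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)                     # local proposal
        log_alpha = target_logpdf(y) - target_logpdf(x)  # acceptance log-ratio
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = y                                        # accept the move
        chain.append(x)                                  # otherwise keep x
    return chain

# Standard normal target: log pi(x) = -x^2 / 2 up to an additive constant.
chain = metropolis(lambda x: -0.5 * x * x)
est_mean = sum(chain) / len(chain)  # naive ergodic average of the chain
```

The requirement on the number of simulations mentioned in the abstract shows up here directly: the precision of `est_mean` improves only slowly with `n_iter` because successive states of the chain are correlated; exploitation-level devices such as Rao-Blackwellization reduce the variance of such averages.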
TL;DR: In this paper, the authors develop and evaluate a reliable and valid scale for measuring online retail service quality in the French context: "E-tail SQ", a 15-item scale measuring five key user values (labelled ease of use, information content, fulfilment reliability, security/privacy and post-purchase customer service).
Abstract: The purpose of this paper is to design, develop and evaluate a reliable and valid scale for the measurement of online retail service quality, specifically in the French context. Design/methodology/approach: Study 1 derived scale items from the literature by content analysis. Study 2 extracted items from two quantitative data sets, gathered by questionnaire from 172 and 125 online shoppers, by exploratory factor and reliability analyses. Study 3 applied psychometric testing and confirmatory factor analysis to data from a survey of 178 e-shoppers. Findings: The outcome is "E-tail SQ", a 15-item scale to measure five key user values (labelled ease of use, information content, fulfilment reliability, security/privacy and post-purchase customer service). These scale items derived from French data are found to be similar to those identified in previous international studies, except that French e-shoppers place more emphasis than their English-speaking counterparts on internet security and privacy of personal information. Research limitations/implications: The sample profiles place limits on the applicability of the scale across markets and service categories. Further research must be conducted to improve its external validity. Practical implications: "E-tail SQ" can help online retailers in the French marketplace to measure service quality delivered, and thereby to improve it, and may be transferable to other national markets. Originality/value: This new scale for the measurement of service quality in a specific cultural environment offers online retailers a framework within which to manage their web-based relationships with a growing number of online shoppers.
84 citations
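Reliability analyses of the kind used in Study 2 typically report Cronbach's alpha per subscale; a minimal sketch of that computation, assuming invented toy Likert-scale responses (the scores below are not the study's data):

```python
def cronbach_alpha(responses):
    """Cronbach's alpha for a set of scale items.

    responses: one row per respondent, each row a list of item scores
    (e.g. Likert 1-5). Alpha = k/(k-1) * (1 - sum of item variances
    / variance of the total score), with sample (n-1) variances.
    """
    k = len(responses[0])  # number of items in the (sub)scale

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    total_var = variance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: 5 respondents rating 3 items of a hypothetical subscale.
scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
alpha = cronbach_alpha(scores)  # high alpha: items vary together
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency, which is the kind of threshold a 15-item, five-factor scale such as E-tail SQ would be checked against subscale by subscale.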
30 Apr 2013
TL;DR: In this paper, the authors propose a framework for the use of analytics in supporting the policy cycle of design, testing, implementation, evaluation and review of public policies, and conceptualise it as "Policy Analytics".
Abstract: The growing impact of the “analytics” perspective in recent years, which integrates advanced data-mining and learning methods, is often associated with increasing access to large databases and with decision support systems. Since its origin, the field of analytics has been strongly business-oriented, with a typical focus on data-driven decision processes. In public decisions, however, issues such as individual and social values, culture and public engagement are more important and, to a large extent, characterise the policy cycle of design, testing, implementation, evaluation and review of public policies. Therefore public policy making seems to be a much more socially complex process than has hitherto been considered by most analytics methods and applications. In this paper, we thus suggest a framework for the use of analytics in supporting the policy cycle—and conceptualise it as “Policy Analytics”.
84 citations
TL;DR: An extensive experimental study on several well-known data sets compared two different approaches: the popular rough-set-based rule induction algorithm LEM2, which generates classification rules, and the authors' own algorithm Explore, designed specifically for the discovery perspective.
Abstract: This paper discusses induction of decision rules from data tables representing information about a set of objects described by a set of attributes. If the input data contains inconsistencies, rough sets theory can be used to handle them. The most popular perspectives of rule induction are classification and knowledge discovery. The evaluation of decision rules is quite different depending on the perspective. Criteria for evaluating the quality of a set of rules are presented and discussed. The degree of conflict and the possibility of achieving a satisfying compromise between criteria relevant to classification and criteria relevant to discovery are then analyzed. For this purpose, we performed an extensive experimental study on several well-known data sets where we compared two different approaches: (1) the popular rough set based rule induction algorithm LEM2, generating classification rules, and (2) our own algorithm Explore, designed specifically for the discovery perspective.
84 citations
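The tension between classification-oriented and discovery-oriented criteria can be made concrete with two standard rule-quality measures, confidence and coverage; the toy decision table and the `rule_quality` helper below are illustrative assumptions, not the paper's experimental setup.

```python
def rule_quality(rows, condition, decision):
    """Coverage and confidence of a decision rule over a data table.

    rows: list of dicts mapping attribute -> value.
    condition: dict of attribute -> value (the rule's "if" part).
    decision: (attribute, value) pair (the rule's "then" part).
    """
    matched = [r for r in rows if all(r[a] == v for a, v in condition.items())]
    dec_attr, dec_val = decision
    correct = [r for r in matched if r[dec_attr] == dec_val]
    coverage = len(matched) / len(rows)      # discovery-oriented: how general?
    confidence = (len(correct) / len(matched)) if matched else 0.0
    return coverage, confidence              # classification wants confidence high

# Toy table; the last two rows are inconsistent (same condition,
# different decision), the situation rough sets theory handles.
table = [
    {"outlook": "sunny", "windy": "no",  "play": "yes"},
    {"outlook": "sunny", "windy": "yes", "play": "no"},
    {"outlook": "rain",  "windy": "no",  "play": "yes"},
    {"outlook": "rain",  "windy": "yes", "play": "no"},
    {"outlook": "rain",  "windy": "yes", "play": "yes"},
]
cov, conf = rule_quality(table, {"outlook": "rain"}, ("play", "yes"))
```

A classification algorithm like LEM2 would discard or refine this rule because its confidence is below 1, while a discovery-oriented algorithm like Explore may keep it for its high coverage; that trade-off is the "degree of conflict" the abstract analyzes.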
Authors
Showing all 1819 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Pierre-Louis Lions | 98 | 283 | 57043 |
| Laurent D. Cohen | 94 | 417 | 42709 |
| Chris Bowler | 87 | 288 | 35399 |
| Christian P. Robert | 75 | 535 | 36864 |
| Albert Cohen | 71 | 368 | 19874 |
| Gabriel Peyré | 65 | 303 | 16403 |
| Kerrie Mengersen | 65 | 737 | 20058 |
| Nader Masmoudi | 62 | 245 | 10507 |
| Roland Glowinski | 61 | 393 | 20599 |
| Jean-Michel Morel | 59 | 302 | 29134 |
| Nizar Touzi | 57 | 224 | 11018 |
| Jérôme Lang | 57 | 277 | 11332 |
| William L. Megginson | 55 | 169 | 18087 |
| Alain Bensoussan | 55 | 417 | 22704 |
| Yves Meyer | 53 | 128 | 14604 |