
Showing papers by "Paris Dauphine University" published in 2017


Journal ArticleDOI
TL;DR: In this article, the stability of a many-body localized material in contact with an ergodic grain was investigated. Since in realistic materials such ergodic grains are always present as Griffiths regions where the disorder is anomalously small, the authors conclude that the localized phase in such materials is, strictly speaking, unstable.
Abstract: Many-body localization plays an increasing role in condensed matter theory, both because it challenges the fundaments of statistical physics, and because it allows us to engineer several new, exotic, stable phases of matter. In this paper, the authors address the issue of the stability of a many-body localized material in contact with an ergodic grain, i.e., an imperfect bath made of a few interacting degrees of freedom. Thanks to detailed microscopic analysis and numerics, they conclude that such an ergodic grain eventually destabilizes the localized phase in the following cases: if the spatial dimension is higher than one, or if the spatial dimension is one but the localization length of the localized material is larger than a fixed threshold value. In realistic materials, these ergodic grains are always present as Griffiths regions where the disorder is anomalously small, and hence, the authors conclude that the localized phase in such materials is unstable, strictly speaking. Transport and thermalization are however exponentially suppressed in the distance between ergodic grains.

331 citations


Journal ArticleDOI
TL;DR: This article used data from a French university to analyze gender biases in student evaluations of teaching (SETs) and found that male students express a bias in favor of male professors, despite the fact that students appear to learn as much from women as from men.

296 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that a quantum many-body system with a high-frequency periodic driving has a quasiconserved extensive quantity, which plays the role of an effective static Hamiltonian, and that the energy absorption rate is exponentially small in the driving frequency.
Abstract: We establish some general dynamical properties of quantum many-body systems that are subject to a high-frequency periodic driving. We prove that such systems have a quasiconserved extensive quantity $H_*$, which plays the role of an effective static Hamiltonian. The dynamics of the system (e.g., evolution of any local observable) is well approximated by the evolution with the Hamiltonian $H_*$ up to time $\tau_*$, which is exponentially large in the driving frequency. We further show that the energy absorption rate is exponentially small in the driving frequency. In cases where $H_*$ is ergodic, the driven system prethermalizes to a thermal state described by $H_*$ at intermediate times $t \lesssim \tau_*$, eventually heating up to an infinite-temperature state after times $t \sim \tau_*$. Our results indicate that rapidly driven many-body systems generically exhibit prethermalization and very slow heating. We briefly discuss implications for experiments which realize topological states by periodic driving.

289 citations


Journal ArticleDOI
01 Oct 2017
TL;DR: This article reviews how Riemannian approaches have been used for EEG-based BCI, in particular for feature representation and learning, classifier design, and calibration time reduction.
Abstract: Although promising for numerous applications, current brain–computer interfaces (BCIs) still suffer from a number of limitations. In particular, they are sensitive to noise, outliers and the non-stationarity of electroencephalographic (EEG) signals, they require long calibration times and are not reliable. Thus, new approaches and tools, notably at the EEG signal processing and classification level, are necessary to address these limitations. Riemannian approaches, spearheaded by the use of covariance matrices, are one such promising tool, slowly being adopted by a growing number of researchers. This article, after a quick introduction to Riemannian geometry and a presentation of the BCI-relevant manifolds, reviews how these approaches have been used for EEG-based BCI, in particular for feature representation and learning, classifier design and calibration time reduction. Finally, relevant challenges and promising research directions for EEG signal classification in BCIs are identified, such as feature tracking on manifolds or multi-task learning.
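
To make the covariance-based Riemannian machinery concrete, here is a minimal sketch, assuming numpy and scipy, of the affine-invariant Riemannian distance between covariance matrices and a minimum-distance-to-mean classifier, one of the basic Riemannian classifiers this family of approaches builds on. The function names and the toy trial are ours, not the article's pipeline.

```python
# Minimal sketch of covariance-based Riemannian tools for EEG classification.
import numpy as np
from scipy.linalg import sqrtm, logm, expm

def riemannian_distance(A, B):
    """Affine-invariant distance: ||log(A^{-1/2} B A^{-1/2})||_F."""
    A_inv_sqrt = np.linalg.inv(sqrtm(A)).real
    return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt).real, "fro")

def riemannian_mean(covs, n_iter=20):
    """Karcher (geometric) mean of SPD matrices by tangent-space averaging."""
    G = np.mean(covs, axis=0)                     # start from the arithmetic mean
    for _ in range(n_iter):
        G_sqrt = sqrtm(G).real
        G_inv_sqrt = np.linalg.inv(G_sqrt)
        T = np.mean([logm(G_inv_sqrt @ C @ G_inv_sqrt).real for C in covs], axis=0)
        G = G_sqrt @ expm(T) @ G_sqrt             # map the averaged tangent vector back
    return G

def mdm_predict(trial_cov, class_means):
    """Minimum Distance to Mean: pick the class whose mean covariance is closest."""
    return int(np.argmin([riemannian_distance(trial_cov, M) for M in class_means]))

# Toy usage: spatial covariance of one band-pass filtered EEG trial (channels x samples)
rng = np.random.default_rng(0)
trial = rng.standard_normal((8, 250))
cov = trial @ trial.T / trial.shape[1]
```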

280 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the extensive literature on systemic risk and connect it to the current regulatory debate, and identify a gap between two main approaches: the first one studies different sources of systemic risk in isolation, uses confidential data, and inspires targeted but complex regulatory tools; the second approach uses market data to produce global measures which are not directly connected to any particular theory, but could support a more efficient regulation.
Abstract: We review the extensive literature on systemic risk and connect it to the current regulatory debate. While we take stock of the achievements of this rapidly growing field, we identify a gap between two main approaches. The first one studies different sources of systemic risk in isolation, uses confidential data, and inspires targeted but complex regulatory tools. The second approach uses market data to produce global measures which are not directly connected to any particular theory, but could support a more efficient regulation. Bridging this gap will require encompassing theoretical models and improved data disclosure.

269 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider quantum spin systems under periodic driving at high frequency, as well as undriven systems with a separation of energy scales such as the Fermi-Hubbard model with large interaction, and show that up to a quasi-exponential time the system barely absorbs energy.
Abstract: Prethermalization refers to the transient phenomenon where a system thermalizes according to a Hamiltonian that is not the generator of its evolution. We provide here a rigorous framework for quantum spin systems where prethermalization is exhibited for very long times. First, we consider quantum spin systems under periodic driving at high frequency $\nu$. We prove that up to a quasi-exponential time $\tau_* \sim e^{c \frac{\nu}{\log^3 \nu}}$, the system barely absorbs energy. Instead, there is an effective local Hamiltonian $\widehat{D}$ that governs the time evolution up to $\tau_*$, and hence this effective Hamiltonian is a conserved quantity up to $\tau_*$. Next, we consider systems without driving, but with a separation of energy scales in the Hamiltonian. A prime example is the Fermi–Hubbard model where the interaction $U$ is much larger than the hopping $J$. Also here we prove the emergence of an effective conserved quantity, different from the Hamiltonian, up to a time $\tau_*$ that is (almost) exponential in $U/J$.

259 citations



Journal ArticleDOI
TL;DR: The article discusses the problems the null hypothesis significance testing (NHST) paradigm poses for replication and, more broadly, for the biomedical and social sciences, as well as how these problems remain unresolved by proposals involving modified p-value thresholds, confidence intervals, and Bayes factors.
Abstract: We discuss problems the null hypothesis significance testing (NHST) paradigm poses for replication and more broadly in the biomedical and social sciences as well as how these problems remain unresolved by proposals involving modified p-value thresholds, confidence intervals, and Bayes factors. We then discuss our own proposal, which is to abandon statistical significance. We recommend dropping the NHST paradigm--and the p-value thresholds intrinsic to it--as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with currently subordinate factors (e.g., related prior evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain) as just one among many pieces of evidence. We have no desire to "ban" p-values or other purely statistical measures. Rather, we believe that such measures should not be thresholded and that, thresholded or not, they should not take priority over the currently subordinate factors. We also argue that it seldom makes sense to calibrate evidence as a function of p-values or other purely statistical measures. We offer recommendations for how our proposal can be implemented in the scientific publication process as well as in statistical decision making more broadly.

232 citations


Journal ArticleDOI
TL;DR: The findings show that perceived uselessness, perceived price, intrusiveness, perceived novelty and self-efficacy have an impact on consumer resistance to smart products.
Abstract: The Internet of Things (IoT) market is set to grow rapidly. Although IoT offers new opportunities, it nevertheless raises challenges. The objective of this research is to develop a better understanding of the reasons underlying consumer resistance to smart and connected products. To this end, a quantitative survey was carried out to understand resistance to smartwatches. Structural equation modelling was used to test the conceptual model. The findings show that perceived uselessness, perceived price, intrusiveness, perceived novelty and self-efficacy have an impact on consumer resistance to smart products. Moreover, privacy concerns have an effect on intrusiveness, and dependence impacts privacy concerns. To the best of our knowledge, this is the first research studying smart products through a resistance approach.

204 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the propensity of consumers to give in to temptation on second-hand peer-to-peer (P2P) platforms, which provide a favorable context for self-licensing behaviors.

149 citations


Journal ArticleDOI
TL;DR: A simple theory is presented that assumes a system to be locally ergodic unless the local relaxation time, determined by Fermi's golden rule, is larger than the inverse level spacing, and that predicts a critical value for the localization length which is perfectly consistent with the numerical calculations.
Abstract: Numerical simulations show that an imperfection as small as three units can destabilize the many-body localized phase of a one-dimensional chain.
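
As a hedged restatement of the criterion in the TL;DR above, the comparison can be written as follows; the symbols $\Gamma_{\mathrm{FGR}}$ (golden-rule relaxation rate) and $\delta$ (relevant many-body level spacing) are ours, not necessarily the paper's notation.

```latex
% Hedged restatement of the local ergodicity criterion; notation ours.
\Gamma_{\mathrm{FGR}} = \tau_{\mathrm{rel}}^{-1} \gtrsim \delta
  \;\Longrightarrow\; \text{locally ergodic},
\qquad
\Gamma_{\mathrm{FGR}} \lesssim \delta
  \;\Longrightarrow\; \text{localization survives locally}.
```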

Journal ArticleDOI
TL;DR: The authors report a comprehensive review of the published reading studies on retrieval interference in reflexive/reciprocal-antecedent and subject-verb dependencies and provide a quantitative random-effects meta-analysis of eyetracking and self-paced reading studies.

Posted Content
TL;DR: This article introduces a new class of fast algorithms to approximate variational problems involving unbalanced optimal transport, and shows how these methods can be used to solve unbalanced transport, unbalanced gradient flows, and to compute unbalanced barycenters.
Abstract: This article introduces a new class of fast algorithms to approximate variational problems involving unbalanced optimal transport. While classical optimal transport considers only normalized probability distributions, it is important for many applications to be able to compute some sort of relaxed transportation between arbitrary positive measures. A generic class of such “unbalanced” optimal transport problems has been recently proposed by several authors. In this paper, we show how to extend the, now classical, entropic regularization scheme to these unbalanced problems. This gives rise to fast, highly parallelizable algorithms that operate by performing only diagonal scaling (i.e. pointwise multiplications) of the transportation couplings. They are generalizations of the celebrated Sinkhorn algorithm. We show how these methods can be used to solve unbalanced transport, unbalanced gradient flows, and to compute unbalanced barycenters. We showcase applications to 2-D shape modification, color transfer, and growth models.
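
To illustrate the "diagonal scaling" structure described above, here is a minimal numpy sketch of generalized Sinkhorn iterations for entropy-regularized unbalanced transport, in which the two marginal constraints are relaxed by KL penalties of strength rho. The parameter names, the specific KL relaxation, and the toy example are our assumptions for illustration, not the article's code.

```python
# Sketch of diagonal-scaling iterations for entropy-regularized unbalanced OT,
# with KL-relaxed marginals (assumed setting; illustrative parameters).
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.05, rho=1.0, n_iter=500):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    lam = rho / (rho + eps)              # exponent induced by the KL marginal relaxation
    for _ in range(n_iter):
        u = (a / (K @ v)) ** lam         # pointwise (diagonal) scaling only
        v = (b / (K.T @ u)) ** lam
    return u[:, None] * K * v[None, :]   # transportation coupling pi = diag(u) K diag(v)

# Toy usage: two positive measures with different total masses on a 1-D grid
x = np.linspace(0, 1, 100)
a = np.exp(-((x - 0.3) ** 2) / 0.01); a *= 1.0 / a.sum()
b = np.exp(-((x - 0.7) ** 2) / 0.01); b *= 1.5 / b.sum()   # total mass 1.5, not normalized
C = (x[:, None] - x[None, :]) ** 2
pi = unbalanced_sinkhorn(a, b, C)
```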

Journal ArticleDOI
TL;DR: In this paper, the authors consider the Boltzmann equation in a strictly convex domain with the specular, bounce-back and diffuse boundary conditions and construct weighted classical solutions away from the grazing set for all boundary conditions.
Abstract: A basic question about regularity of Boltzmann solutions in the presence of physical boundary conditions has been open due to characteristic nature of the boundary as well as the non-local mixing of the collision operator. Consider the Boltzmann equation in a strictly convex domain with the specular, bounce-back and diffuse boundary condition. With the aid of a distance function toward the grazing set, we construct weighted classical $$C^{1}$$ solutions away from the grazing set for all boundary conditions. For the diffuse boundary condition, we construct $$W^{1,p}$$ solutions for $$1< p<2$$ and weighted $$ W^{1,p}$$ solutions for $$2\le p\le \infty $$ as well.

Journal ArticleDOI
TL;DR: In this paper, a learning procedure similar to Fictitious Play is proposed for mean field games; its convergence is shown only when the mean field game is potential.
Abstract: Mean Field Game systems describe equilibrium configurations in differential games with infinitely many infinitesimal interacting agents. We introduce a learning procedure (similar to the Fictitious Play) for these games and show its convergence when the Mean Field Game is potential.
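
Schematically, the fictitious-play iteration for a mean field game can be written as below (notation ours): at each stage a representative agent best-responds to the running average of the previously generated population distributions, and the average is then updated.

```latex
% Schematic fictitious-play iteration for a mean field game (notation ours).
\bar m^{\,n} = \frac{1}{n}\sum_{k=1}^{n} m^{k},
\qquad
m^{\,n+1} = \operatorname{BestResponse}\bigl(\bar m^{\,n}\bigr),
\qquad
\bar m^{\,n+1} = \bar m^{\,n} + \frac{1}{n+1}\bigl(m^{\,n+1} - \bar m^{\,n}\bigr).
```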

Posted Content
TL;DR: In this article, affine Volterra processes are defined as solutions of certain stochastic convolution equations with affine coefficients, which are neither semimartingales nor Markov processes in general.
Abstract: We introduce affine Volterra processes, defined as solutions of certain stochastic convolution equations with affine coefficients. Classical affine diffusions constitute a special case, but affine Volterra processes are neither semimartingales, nor Markov processes in general. We provide explicit exponential-affine representations of the Fourier-Laplace functional in terms of the solution of an associated system of deterministic integral equations of convolution type, extending well-known formulas for classical affine diffusions. For specific state spaces, we prove existence, uniqueness, and invariance properties of solutions of the corresponding stochastic convolution equations. Our arguments avoid infinite-dimensional stochastic analysis as well as stochastic integration with respect to non-semimartingales, relying instead on tools from the theory of finite-dimensional deterministic convolution equations. Our findings generalize and clarify recent results in the literature on rough volatility models in finance.
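
For intuition about what a stochastic convolution equation with affine coefficients looks like, here is a naive Euler-type discretization of a one-dimensional Volterra square-root process with a fractional kernel, the kind of example arising in rough volatility. The kernel, parameters, and discretization scheme are illustrative choices of ours, not the article's method.

```python
# Naive Euler discretization of X_t = X_0 + int_0^t K(t-s) b(X_s) ds
#                                        + int_0^t K(t-s) sigma(X_s) dW_s
# with affine drift and squared diffusion (Volterra square-root process).
import numpy as np
from math import gamma

def simulate_volterra_sqrt(X0=0.04, kappa=1.0, theta=0.04, sigma=0.3,
                           H=0.1, T=1.0, n=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.arange(1, n + 1) * dt
    K = t ** (H - 0.5) / gamma(H + 0.5)          # fractional kernel K(t)
    dW = rng.standard_normal(n) * np.sqrt(dt)
    X = np.empty(n + 1); X[0] = X0
    for i in range(1, n + 1):
        # stochastic convolution: past increments weighted by K(t_i - t_j)
        past = np.arange(i)
        drift = kappa * (theta - X[past])
        diff = sigma * np.sqrt(np.maximum(X[past], 0.0))
        X[i] = X0 + np.sum(K[i - 1 - past] * (drift * dt + diff * dW[past]))
    return X
```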

Journal ArticleDOI
TL;DR: In this paper, the authors studied the recovery properties of the support and amplitudes of the initial Radon measure in the presence of noise as a function of the minimum separation t of the input measure (the minimum distance between two spikes).
Abstract: We study sparse spikes super-resolution over the space of Radon measures on \(\mathbb{R}\) or \(\mathbb{T}\) when the input measure is a finite sum of positive Dirac masses using the BLASSO convex program. We focus on the recovery properties of the support and the amplitudes of the initial measure in the presence of noise as a function of the minimum separation t of the input measure (the minimum distance between two spikes). We show that when \(w/\lambda\), \(w/t^{2N-1}\) and \(\lambda/t^{2N-1}\) are small enough (where \(\lambda\) is the regularization parameter, w the noise and N the number of spikes), which corresponds roughly to a sufficient signal-to-noise ratio and a noise level small enough with respect to the minimum separation, there exists a unique solution to the BLASSO program with exactly the same number of spikes as the original measure. We show that the amplitudes and positions of the spikes of the solution both converge toward those of the input measure when the noise and the regularization parameter drop to zero faster than \(t^{2N-1}\).
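
For reference, the BLASSO program referred to above can be written in its standard generic form; here \(\Phi\) denotes the measurement operator and \(|m|(X)\) the total-variation norm of the measure (details as in the paper).

```latex
% Standard form of the BLASSO over Radon measures (generic formulation).
\min_{m \in \mathcal{M}(X)} \; \frac{1}{2}\bigl\|\Phi m - y\bigr\|^{2} + \lambda\,|m|(X),
\qquad
y = \Phi m_{0} + w,
\qquad
m_{0} = \sum_{i=1}^{N} a_{i}\,\delta_{x_{i}},\;\; a_{i} > 0.
```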

Journal ArticleDOI
TL;DR: In this paper, a recursive dynamic model reveals that there is an additional spread-widening effect as market makers earn higher rents due to economies of scope from quote monitoring, and the bid-ask spread is raised in response.
Abstract: Speeding up the exchange does not necessarily improve liquidity. The price quotes of high-frequency market makers are more likely to meet speculative high-frequency “bandits,” thus less likely to meet liquidity traders. The bid-ask spread is raised in response. The recursive dynamic model reveals that there is an additional spread-widening effect as market makers earn higher rents due to economies of scope from quote monitoring. Analysis of a NASDAQ-OMX speed upgrade provides supportive evidence.

Proceedings Article
11 Nov 2017
TL;DR: The use of deep residual neural networks to detect the artistic style of a painting is investigated, outperforming existing approaches and reaching an accuracy of 62% on the Wikipaintings dataset (for 25 different styles).
Abstract: The artistic style (or artistic movement) of a painting is a rich descriptor that captures both visual and historical information about the painting. Correctly identifying the artistic style of a painting is crucial for indexing large artistic databases. In this paper, we investigate the use of deep residual neural networks to solve the problem of detecting the artistic style of a painting and outperform existing approaches to reach an accuracy of 62% on the Wikipaintings dataset (for 25 different styles). To achieve this result, the network is first pre-trained on ImageNet, and then deeply retrained for artistic style. We empirically find that to achieve the best performance, one needs to retrain about 20 layers. This suggests that the two tasks are as similar as expected, and explains the previous success of hand-crafted features. We also demonstrate that the styles detected on the Wikipaintings dataset are consistent with styles detected on an independent dataset, and describe a number of experiments we conducted to validate this approach both qualitatively and quantitatively.
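
A minimal PyTorch sketch of the transfer-learning setup described above: a residual network pre-trained on ImageNet, its classifier head replaced with 25 style outputs, and only the deepest part of the network unfrozen for retraining. The choice of resnet50 and of unfreezing the last two residual stages (roughly the "about 20 layers" mentioned) are our illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: fine-tune an ImageNet-pretrained ResNet for 25 artistic styles.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 25)        # 25 artistic style classes

for p in model.parameters():                          # freeze everything...
    p.requires_grad = False
for stage in (model.layer3, model.layer4, model.fc):  # ...then unfreeze the deepest layers
    for p in stage.parameters():
        p.requires_grad = True

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3, momentum=0.9,
)
criterion = nn.CrossEntropyLoss()
# A standard training loop over a labeled painting dataset would follow here.
```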

Journal ArticleDOI
TL;DR: A new method for multiple criteria ordinal classification (sorting) problems that fulfills a set of structural requirements: uniqueness of the assignments, independence, monotonicity, homogeneity, conformity, and stability with respect to merging and splitting operations.

Journal ArticleDOI
TL;DR: An in vitro human model is presented that mimics an effective endothelial sprouting angiogenesis event triggered from an initial microvessel using a single angiogenic factor, VEGF-A, and that demonstrates that sorafenib impairs the endothelial barrier function, while sunitinib does not.

Journal ArticleDOI
TL;DR: In this article, the authors study a continuous-time financial market with continuous price processes under model uncertainty, modeled via a family of possible physical measures, and show that a nonnegative, nonvanishing claim cannot be superhedged for free by using simple trading strategies.
Abstract: We study a continuous-time financial market with continuous price processes under model uncertainty, modeled via a family of possible physical measures. A robust notion of no-arbitrage of the first kind is introduced; it postulates that a nonnegative, nonvanishing claim cannot be superhedged for free by using simple trading strategies. Our first main result is a version of the fundamental theorem of asset pricing: the robust no-arbitrage condition holds if and only if every measure in the family admits a martingale measure that is equivalent up to a certain lifetime. The second main result provides the existence of optimal superhedging strategies for general contingent claims and a representation of the superhedging price in terms of martingale measures.
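
Schematically, the superhedging price and its dual representation mentioned in the second main result take the following form; the notation is ours, and the precise class of martingale measures and the admissibility conditions are specified in the paper.

```latex
% Schematic superhedging duality (notation ours; \mathcal{P} is the family of
% physical measures, \mathcal{Q} the relevant class of martingale measures).
\pi(\xi) := \inf\Bigl\{\, x \in \mathbb{R} : \exists\, H \text{ admissible},\;
  x + (H \cdot S)_{T} \ge \xi \;\; P\text{-a.s. for all } P \in \mathcal{P} \,\Bigr\}
  \;=\; \sup_{Q \in \mathcal{Q}} \mathbb{E}_{Q}[\xi].
```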

Posted Content
TL;DR: The article investigates why modular approaches might be preferable to the full model in misspecified settings and proposes principled criteria for choosing between modular and full-model approaches.
Abstract: In modern applications, statisticians are faced with integrating heterogeneous data modalities relevant for an inference, prediction, or decision problem. In such circumstances, it is convenient to use a graphical model to represent the statistical dependencies, via a set of connected "modules", each relating to a specific data modality, and drawing on specific domain expertise in their development. In principle, given data, the conventional statistical update then allows for coherent uncertainty quantification and information propagation through and across the modules. However, misspecification of any module can contaminate the estimate and update of others, often in unpredictable ways. In various settings, particularly when certain modules are trusted more than others, practitioners have preferred to avoid learning with the full model in favor of approaches that restrict the information propagation between modules, for example by restricting propagation to only particular directions along the edges of the graph. In this article, we investigate why these modular approaches might be preferable to the full model in misspecified settings. We propose principled criteria to choose between modular and full-model approaches. The question arises in many applied settings, including large stochastic dynamical systems, meta-analysis, epidemiological models, air pollution models, pharmacokinetics-pharmacodynamics, and causal inference with propensity scores.
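
A toy numerical illustration of the contrast between full and modular ("cut") updating described above, for two simple Gaussian modules; the model, priors, and numbers are entirely ours, chosen only to show how cutting the feedback prevents a suspect module from contaminating a trusted one.

```python
# Toy sketch of modular ("cut") two-stage inference for two Gaussian modules.
# Module 1: y1 ~ N(theta1, 1), prior theta1 ~ N(0, 1)   (trusted module)
# Module 2: y2 ~ N(theta1 + theta2, 1), prior theta2 ~ N(0, 1)  (possibly misspecified)
import numpy as np

rng = np.random.default_rng(1)
y1 = np.array([0.9, 1.1, 1.0])          # data informing theta1
y2 = np.array([5.0, 5.2])               # data from the suspect module

n = 10_000
# Stage 1: theta1 is updated with module 1 only (no feedback from module 2).
post1_var = 1.0 / (1.0 + len(y1))
post1_mean = post1_var * y1.sum()
theta1_cut = rng.normal(post1_mean, np.sqrt(post1_var), n)

# Stage 2: theta2 is drawn conditionally on theta1 and module 2.
post2_var = 1.0 / (1.0 + len(y2))
theta2_cut = rng.normal(post2_var * (y2.sum() - len(y2) * theta1_cut),
                        np.sqrt(post2_var), n)

print("cut posterior means:", theta1_cut.mean(), theta2_cut.mean())
# In the full model, y2 would also pull theta1 upward; the cut keeps theta1
# anchored to the trusted module, at the cost of ignoring that feedback.
```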

Journal ArticleDOI
TL;DR: A two-phase iterative heuristic is developed to solve an integrated production scheduling and vehicle routing problem with job splitting and delivery time windows, arising in a company in the metal packaging industry.
Abstract: In this paper, we study a production scheduling and vehicle routing problem with job splitting and delivery time windows in a company working in the metal packaging industry. In this problem, a set of jobs has to be processed on unrelated parallel machines with job splitting and sequence-dependent setup time (cost). Then the finished products are delivered in batches to several customers with heterogeneous vehicles, subject to delivery time windows. The objective of production is to minimize the total setup cost and the objective of distribution is to minimize the transportation cost. We propose mathematical models for decentralized scheduling problems, where a production schedule and a distribution plan are built consecutively. We develop a two-phase iterative heuristic to solve the integrated scheduling problem. We evaluate the benefits of coordination through numerical experiments.

Posted Content
TL;DR: In this paper, an inverted U-shaped relationship between the amount of public sponsorship received and the market performance of sponsored organizations is identified, moderated by the breadth, depth and focus of the focal organization's resource accumulation and allocation patterns.
Abstract: Existing research provides contradictory insights on the effect of public sponsorship on the market performance of organizations. We develop the nascent theory on sponsorship by highlighting the dual and contingent nature of the relationship between public sponsorship and market performance. By arguing that sponsorship differentially affects resource accumulation and allocation mechanisms, we suggest two opposing firm-level effects, leading to an inverted U-shaped relationship between the amount of public sponsorship received and the market performance of sponsored organizations. This non-linear relationship, we argue, is moderated by the breadth, depth and focus of the focal organization's resource accumulation and allocation patterns. While horizontal scope (i.e. increased breadth) and externally oriented resource profile (i.e. reduced depth) strengthen the relationship, market orientation (i.e. increased focus) attenuates it. We test and find strong support for our hypotheses using population data on French film production firms from 1998 to 2008. Our work highlights the performance trade-offs associated with public sponsorship, and carries important managerial and policy implications.

Journal ArticleDOI
TL;DR: In this article, the authors study the geometrical properties of the solution of the so-called Rudin-Osher-Fatemi total variation denoising problem.
Abstract: This article studies the denoising performance of total variation (TV) image regularization. More precisely, we study geometrical properties of the solution to the so-called Rudin-Osher-Fatemi total variation denoising method. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. It is intuitively the region which is unstable and will suffer from the staircasing effect. We highlight in several practical cases, such as the indicator of convex sets, that this region can be determined explicitly. Our second and main contribution is a proof that the TV denoising method indeed restores an image which is exactly constant outside a small tube surrounding the extended support. The radius of this tube shrinks toward zero as the noise level vanishes, and we are able to determine, in some cases, an upper bound on the convergence rate. For indicators of so-called “calibrable” sets (such as disks or properly eroded squares), this extended support matches the edges, so that discontinuities produced by TV denoising cluster tightly around the edges. In contrast, for indicators of more general shapes or for complicated images, this extended support can be larger. Besides these main results, our paper also proves several intermediate results about fine properties of TV regularization, in particular for indicators of calibrable and convex sets, which are of independent interest.
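
For reference, the Rudin-Osher-Fatemi denoising problem whose solutions are analyzed above reads, in its standard form, as follows (f is the noisy image and λ the regularization parameter).

```latex
% Standard Rudin-Osher-Fatemi total variation denoising problem.
u_{\lambda} \in \operatorname*{arg\,min}_{u} \; \frac{1}{2}\,\|u - f\|_{L^{2}}^{2}
  + \lambda\,\mathrm{TV}(u),
\qquad
\mathrm{TV}(u) = \int |Du|.
```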

Journal ArticleDOI
TL;DR: In this paper, the authors develop nascent theory on the effect of public sponsorship on the market performance of organizations, highlighting the contradictory insights of existing research and the dual, contingent nature of this relationship.
Abstract: Existing research provides contradictory insights regarding the effect of public sponsorship on the market performance of organizations. We develop the nascent theory on sponsorship by highlighting...

Journal ArticleDOI
TL;DR: In this paper, the authors consider a contracting problem in which a principal hires an agent to manage a risky project, and they develop a novel approach to solving principal-agent problems: first, they identify a family of admissible contracts for which the optimal agent's action is explicitly characterized; then, they show that no generality is lost by searching for the optimal contract inside this family, up to integrability conditions.
Abstract: We consider a contracting problem in which a principal hires an agent to manage a risky project. When the agent chooses volatility components of the output process and the principal observes the output continuously, the principal can compute the quadratic variation of the output, but not the individual components. This leads to moral hazard with respect to the risk choices of the agent. To find the optimal contract, we develop a novel approach to solving principal–agent problems: first, we identify a family of admissible contracts for which the optimal agent’s action is explicitly characterized; then, we show that we do not lose on generality when finding the optimal contract inside this family, up to integrability conditions. To do this, we use the recent theory of singular changes of measures for Ito processes. We solve the problem in the case of CARA preferences and show that the optimal contract is linear in these factors: the contractible sources of risk, including the output, the quadratic variation of the output and the cross-variations between the output and the contractible risk sources. Thus, like sample Sharpe ratios used in practice, path-dependent contracts naturally arise when there is moral hazard with respect to risk management. In a numerical example, we show that the loss of efficiency can be significant if the principal does not use the quadratic variation component of the optimal contract.
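
Schematically, a contract that is linear in the contractible factors listed above can be written as follows; the notation and the reduction to constant coefficients are our simplifications, with X the output and B a contractible risk source.

```latex
% Schematic linear contract in the contractible factors (notation ours).
\xi = c + \alpha\, X_{T} + \beta\, \langle X \rangle_{T}
      + \gamma\, \langle X, B \rangle_{T} + \delta\, B_{T}.
```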

Journal ArticleDOI
TL;DR: This paper elaborates a specific neighborhood structure among local upper bounds that helps to determine and efficiently update the search region, which corresponds to the part of the objective space where new nondominated points could lie.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a model in which investors can acquire either raw or processed information about the payoff of a risky asset and show that a decline in the cost of raw information can reduce the demand for processed information and, for this reason, the informativeness of the asset price in the long run.
Abstract: We consider a model in which investors can acquire either raw or processed information about the payoff of a risky asset. Information processing filters out the noise in raw information but it takes time. Hence, investors buying processed information trade with a lag relative to investors buying raw information. As the cost of raw information declines, more investors trade on it, which reduces the value of processed information, unless raw information is very unreliable. Thus, a decline in the cost of raw information can reduce the demand for processed information and, for this reason, the informativeness of the asset price in the long run.