
Showing papers by "Laurent Viennot published in 2022"


TL;DR: This work shows that, with high probability, it is possible to approximate any CNN by pruning a random CNN whose size is larger by a logarithmic factor.
Abstract: The lottery ticket hypothesis states that a randomly-initialized neural network contains a small subnetwork which, when trained in isolation, can compete with the performance of the original network. Recent theoretical works proved an even stronger version: every sufficiently overparameterized (dense) neural network contains a subnetwork that, even without training, achieves accuracy comparable to that of the trained large network. These works left open the problem of extending the result to convolutional neural networks (CNNs). In this work we provide such a generalization by showing that, with high probability, it is possible to approximate any CNN by pruning a random CNN whose size is larger by a logarithmic factor.
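To make the pruning statement concrete, here is a minimal toy sketch (not the paper's construction): each weight of a small target filter is approximated by keeping a subset of a modest bank of random weights, the subset-sum style argument that underlies this line of results. The bank size m, the tolerance eps, and the function prune_to_approximate are illustrative choices, and the subset search is plain brute force.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(42)

def prune_to_approximate(target, random_weights, eps):
    """Brute-force search for a 0/1 mask over `random_weights` whose surviving
    entries sum to within `eps` of `target` (the mask plays the role of pruning)."""
    n = len(random_weights)
    best_mask, best_err = np.zeros(n, dtype=int), abs(target)
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            err = abs(target - random_weights[list(subset)].sum())
            if err < best_err:
                best_mask = np.zeros(n, dtype=int)
                best_mask[list(subset)] = 1
                best_err = err
            if best_err <= eps:
                return best_mask, best_err
    return best_mask, best_err

# Toy "CNN": a single 3x3 target filter. Each of its 9 weights is matched by
# pruning a small bank of random weights (the bank size only needs to grow
# logarithmically in 1/eps for this to succeed with high probability).
eps, m = 0.05, 12
target_filter = rng.uniform(-0.5, 0.5, size=(3, 3))
random_bank = rng.uniform(-1.0, 1.0, size=(3, 3, m))

approx_filter = np.zeros_like(target_filter)
for i in range(3):
    for j in range(3):
        mask, _ = prune_to_approximate(target_filter[i, j], random_bank[i, j], eps)
        approx_filter[i, j] = (mask * random_bank[i, j]).sum()

print("max weight error:", np.abs(target_filter - approx_filter).max())
```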

8 citations



29 Apr 2022
TL;DR: This work presents an alternative proof of a key theorem on the randomised Subset Sum Problem, with a more direct approach resorting to more elementary tools, in the hope of disseminating it even further.
Abstract: The average properties of the well-known Subset Sum Problem can be studied by means of its randomised version, where we are given a target value $z$, random variables $X_1, \ldots, X_n$, and an error parameter $\varepsilon>0$, and we seek a subset of the $X_i$s whose sum approximates $z$ up to error $\varepsilon$. In this setup, it has been shown that, under mild assumptions on the distribution of the random variables, a sample of size $\mathcal{O}(\log(1/\varepsilon))$ suffices to obtain, with high probability, approximations for all values in $[-1/2, 1/2]$. Recently, this result has been rediscovered outside the algorithms community, enabling meaningful progress in other fields. In this work we present an alternative proof of this theorem, with a more direct approach resorting to more elementary tools.
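As a quick sanity check of the statement, the following sketch simulates the randomised Subset Sum setting: it draws a sample whose size grows only logarithmically in $1/\varepsilon$ and verifies, on a fine grid, that every target in $[-1/2, 1/2]$ admits a subset sum within $\varepsilon$. This is an illustration only; the constant factor 3 and the grid resolution are ad hoc choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

eps = 1e-2
n = 3 * int(np.ceil(np.log2(1 / eps)))   # O(log(1/eps)) samples; the constant 3 is an ad hoc choice
samples = rng.uniform(-1.0, 1.0, size=n)

# Enumerate all 2^n subset sums by doubling.
sums = np.zeros(1)
for x in samples:
    sums = np.concatenate([sums, sums + x])
sums.sort()

# Check eps-approximation for every target on a fine grid of [-1/2, 1/2]
# (a grid much finer than eps is a practical proxy for "all values").
targets = np.arange(-0.5, 0.5 + 1e-12, eps / 4)
idx = np.clip(np.searchsorted(sums, targets), 1, len(sums) - 1)
err = np.minimum(np.abs(sums[idx] - targets), np.abs(sums[idx - 1] - targets))
print(f"n = {n}, worst error over the grid = {err.max():.4f} (eps = {eps})")
```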

3 citations


Journal ArticleDOI
30 Aug 2022 - Networks
TL;DR: In this paper, the authors consider the problem of turning a collection of walks (called trips) in a directed graph into a temporal graph by assigning a starting time to each trip in order to maximize the reachability among pairs of nodes.
Abstract: In a temporal graph, each edge appears and can be traversed at specific points in time. In such a graph, temporal reachability of one node from another is naturally captured by the existence of a temporal path where edges appear in chronological order. Inspired by the optimization of bus/metro/tramway schedules in a public transport network, we consider the problem of turning a collection of walks (called trips) in a directed graph into a temporal graph by assigning a starting time to each trip in order to maximize the reachability among pairs of nodes. Each trip represents the trajectory of a vehicle and its edges must be scheduled one right after another. Setting a starting time for a trip thus fixes the appearance times of all its edges. We call such a starting time assignment a trip temporalization. We obtain several results about the complexity of maximizing reachability via trip temporalization. Among them, we show that maximizing reachability via trip temporalization is hard to approximate within a factor $\sqrt{n}/12$ in an $n$-vertex digraph, even if we assume that for each pair of nodes, there exists a trip temporalization connecting them. On the positive side, we show that there must exist a trip temporalization connecting a constant fraction of all pairs if we additionally assume symmetry, that is, when the collection of trips to be scheduled is such that, for each trip, there is a symmetric trip visiting the same nodes in reverse order.
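To make the objective concrete, here is a small sketch of the quantity being maximized: given trips and candidate start times, it builds the resulting temporal graph (each trip's edges scheduled consecutively, with a traversal time of one unit per edge as a simplifying assumption) and counts the ordered pairs of nodes that become temporally connected. The helper names (`temporal_edges`, `reachable_pairs`) and the toy instance are illustrative, not from the paper.

```python
import math
from collections import defaultdict

def temporal_edges(trips, start_times):
    """Schedule each trip's edges consecutively from its start time:
    the i-th edge of a trip starting at time s appears at time s + i."""
    edges = []
    for trip, s in zip(trips, start_times):
        for i, (u, v) in enumerate(zip(trip, trip[1:])):
            edges.append((s + i, u, v))
    return sorted(edges)

def reachable_pairs(trips, start_times, nodes):
    """Count ordered pairs (src, v), src != v, such that v is temporally
    reachable from src (edges used in increasing appearance time)."""
    edges = temporal_edges(trips, start_times)
    count = 0
    for src in nodes:
        earliest = defaultdict(lambda: math.inf)
        earliest[src] = 0
        for t, u, v in edges:            # single pass in chronological order
            if earliest[u] <= t and t + 1 < earliest[v]:
                earliest[v] = t + 1
        count += sum(1 for v in nodes if v != src and earliest[v] < math.inf)
    return count

# Toy instance: two one-edge trips that only chain in one order.
trips = [["a", "b"], ["b", "c"]]
nodes = {"a", "b", "c"}
print(reachable_pairs(trips, [0, 1], nodes))   # 3: a->b, b->c, and a->c via a transfer
print(reachable_pairs(trips, [1, 0], nodes))   # 2: the b->c trip departs before the a->b trip arrives
```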

2 citations



Journal ArticleDOI
TL;DR: This paper proposes a truly subquadratic-time parameterized algorithm for computing the diameter on unweighted graphs of constant distance Vapnik-Chervonenkis (VC) dimension.
Abstract: Under the strong exponential-time hypothesis, the diameter of general unweighted graphs cannot be computed in truly subquadratic time (in the size of the input), as shown by Roditty and Williams. Nevertheless, there are several graph classes for which this can be done, such as bounded-treewidth graphs, interval graphs, and planar graphs, to name a few. We propose to study unweighted graphs of constant distance Vapnik–Chervonenkis (VC) dimension as a broad generalization of many such classes, where the distance VC-dimension of a graph $G$ is defined as the VC-dimension of its ball hypergraph, whose hyperedges are the balls of all possible radii and centers in $G$. In particular, for any fixed $H$, the class of $H$-minor free graphs has distance VC-dimension at most $|V(H)|-1$. Our first main result is a Monte Carlo algorithm that, on graphs of distance VC-dimension at most $d$, for any fixed $k$, either computes the diameter or concludes that it is larger than $k$ in time $\tilde{\mathcal{O}}(k \cdot mn^{1-\varepsilon_d})$, where $\varepsilon_d \in (0,1)$ only depends on $d$ and the $\tilde{\mathcal{O}}$ notation suppresses polylogarithmic factors. We thus obtain a truly subquadratic-time parameterized algorithm for computing the diameter on such graphs. Then, as a byproduct of our approach, we get a truly subquadratic-time randomized algorithm for constant diameter computation on all the nowhere dense graph classes. The latter classes include all proper minor-closed graph classes, bounded-degree graphs, and graphs of bounded expansion. Before our work, the only known such algorithm resulted from an application of Courcelle's theorem; see Grohe, Kreutzer, and Siebertz [J. ACM, 64 (2017), pp. 1–32]. For any graph of constant distance VC-dimension, we further prove the existence of an exact distance oracle in truly subquadratic space that answers distance queries in truly sublinear time (in the number of vertices). The latter generalizes prior results on proper minor-closed graph classes to a much larger class of graphs. Finally, we show how to remove the dependency on $k$ for any graph class that excludes a fixed graph as a minor. More generally, our techniques apply to any graph with constant distance VC-dimension and polynomial expansion (or, equivalently, having strongly sublinear balanced separators). As a result, for all such graphs one obtains a truly subquadratic-time deterministic algorithm for computing all the eccentricities, and thus both the diameter and the radius. Our approach can be generalized to the $H$-minor free graphs with bounded positive integer weights. We note that all our algorithms for the diameter problem can be adapted for computing the radius, and more generally all the eccentricities. Our approach is based on the work of Chazelle and Welzl, who proved the existence of spanning paths with strongly sublinear stabbing number for every hypergraph of constant VC-dimension. We show how to compute such paths efficiently by combining known algorithms for the stabbing number problem with a clever use of $\varepsilon$-nets, region decomposition, and other partition techniques.
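The central definition here can be made concrete with a small brute-force sketch: build the ball hypergraph of a graph (all balls over all centers and radii) and search for the largest vertex set shattered by it. This only illustrates what "distance VC-dimension" means; it is exponential-time and in no way reflects the paper's subquadratic algorithms. It assumes the networkx library, and the function names are illustrative.

```python
import networkx as nx
from itertools import combinations

def ball_hypergraph(G):
    """All balls B(v, r) = {u : dist(v, u) <= r}, over every center v and radius r."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    diam = nx.diameter(G)
    balls = set()
    for v in G:
        for r in range(diam + 1):
            balls.add(frozenset(u for u, d in dist[v].items() if d <= r))
    return balls

def is_shattered(S, balls):
    """S is shattered if every subset of S equals the trace of some ball on S."""
    traces = {frozenset(S) & b for b in balls}
    return len(traces) == 2 ** len(S)

def distance_vc_dimension(G):
    """Brute-force distance VC-dimension (only sensible for tiny graphs)."""
    balls = ball_hypergraph(G)
    d = 0
    for k in range(1, len(G) + 1):
        if any(is_shattered(S, balls) for S in combinations(G.nodes, k)):
            d = k
        else:
            break
    return d

print(distance_vc_dimension(nx.cycle_graph(6)))       # a 6-cycle
print(distance_vc_dimension(nx.grid_2d_graph(3, 3)))  # a 3x3 grid
```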

Book ChapterDOI
01 Jan 2022
TL;DR: This paper proposes and evaluates an approach that uses a hierarchy of distance-k dominating sets to reduce the search space for computing graph hyperbolicity, a graph parameter related to how much a graph resembles a tree with respect to distances.
Abstract: Hyperbolicity is a graph parameter related to how much a graph resembles a tree with respect to distances. Its computation is challenging, as the main approaches consist in scanning all quadruples of the graph or in using fast matrix multiplication as a building block, neither of which is practical for large graphs. In this paper, we propose and evaluate an approach that uses a hierarchy of distance-k dominating sets to reduce the search space. Compared to the previous best practical algorithms, this technique enables us to compute the hyperbolicity of graphs of unprecedented size (up to a million nodes).
Computing Graph Hyperbolicity Using Dominating Sets. David Coudert, André Nusser, and Laurent Viennot. Proceedings of the Symposium on Algorithm Engineering and Experiments (ALENEX 2022), pp. 78-90. DOI: https://doi.org/10.1137/1.9781611977042.7. Keywords: Gromov hyperbolicity, graph algorithms, algorithm engineering.
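For context, the quadruple-scanning baseline mentioned above is easy to state in code: Gromov hyperbolicity is the maximum, over all quadruples of vertices, of half the difference between the two largest of the three pairwise distance sums. The sketch below (assuming networkx; names are illustrative) implements that naive O(n^4) definition, which is exactly the computation the paper's dominating-set hierarchy is designed to avoid on large graphs.

```python
import networkx as nx
from itertools import combinations

def hyperbolicity(G):
    """Naive Gromov hyperbolicity via the four-point condition: for every
    quadruple, take half the difference between the two largest of the three
    pairwise distance sums; the graph's hyperbolicity is the maximum."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    delta = 0
    for a, b, c, d in combinations(G.nodes, 4):
        s = sorted([dist[a][b] + dist[c][d],
                    dist[a][c] + dist[b][d],
                    dist[a][d] + dist[b][c]])
        delta = max(delta, (s[2] - s[1]) / 2)
    return delta

print(hyperbolicity(nx.cycle_graph(8)))        # cycles are far from tree-like
print(hyperbolicity(nx.balanced_tree(2, 3)))   # trees are 0-hyperbolic
```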