scispace - formally typeset
Author

Przemyslaw Uznanski

Bio: Przemyslaw Uznanski is an academic researcher from the University of Wrocław. The author has contributed to research in topics: Population & Vertex (geometry). The author has an h-index of 13, has co-authored 94 publications receiving 631 citations. Previous affiliations of Przemyslaw Uznanski include the University of Bordeaux and the French Institute for Research in Computer Science and Automation.


Papers
Journal ArticleDOI
TL;DR: This work provides the first strategy which performs exploration of a graph with n vertices at a distance of at most D from r in time O(D), using a team of agents of polynomial size k = Dn^(1+ε).
Abstract: We study the following scenario of online graph exploration. A team of k agents is initially located at a distinguished vertex r of an undirected graph. We ask how many time steps are required to complete exploration, i.e., to make sure that every vertex has been visited by some agent. As our main result, we provide the first strategy which performs exploration of a graph with n vertices at a distance of at most D from r in time O(D), using a team of agents of polynomial size k = Dn^(1+ε) < n^(2+ε), for any ε > 0. Our strategy works in the local communication model, in which agents can only exchange information when located at the same vertex, without knowledge of global parameters such as n or D. We also obtain almost-tight bounds on the asymptotic relation between exploration time and team size, for large k, in both the local and the global communication model.
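To make the O(D) benchmark concrete, here is a minimal sketch (not the paper's strategy, which uses only local communication and a polynomial team): with one agent per vertex and global knowledge, each agent can walk a shortest path to "its" vertex, so a vertex at distance d is visited at step d and exploration finishes in exactly D steps, where D is the BFS eccentricity of r. The function names are illustrative only.

```python
from collections import deque

def bfs_depths(adj, root):
    """BFS distances from root in an undirected graph given as an adjacency dict."""
    dist = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def exploration_time_with_full_team(adj, root):
    """Idealized baseline: with k = n agents and full global knowledge,
    dispatch one agent along a shortest path to each vertex; the vertex at
    distance d is reached at step d, so the whole graph is explored in
    D = max distance steps. The paper matches this O(D) bound with only
    k = Dn^(1+eps) agents and purely local communication."""
    return max(bfs_depths(adj, root).values())
```

For a path graph r–a–b–c, the returned exploration time is D = 3, the distance to the farthest vertex.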

68 citations

Proceedings ArticleDOI
17 Jun 2019
TL;DR: This is the fastest currently known leader election algorithm in which each agent utilises an asymptotically optimal number of O(log log n) states; it successfully incorporates and amalgamates the power of assorted synthetic coins with variable-rate phase clocks.
Abstract: The model of population protocols refers to a large collection of simple indistinguishable entities, frequently called agents. The agents communicate and perform computation through pairwise interactions. We study fast and space-efficient leader election in a population of cardinality n governed by a random scheduler, where during each time step the scheduler uniformly at random selects for interaction exactly one pair of agents. We present the first o(log² n)-time leader election protocol. It operates in expected parallel time O(log n log log n), which is equivalent to O(n log n log log n) pairwise interactions. This is the fastest currently known leader election algorithm in which each agent utilises an asymptotically optimal number of O(log log n) states. The new protocol successfully incorporates and amalgamates the power of assorted synthetic coins with variable-rate phase clocks.
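For contrast with the paper's O(log n log log n)-time protocol, the classic two-state baseline is easy to simulate: every agent starts as a leader, and when two leaders interact one demotes itself. It uses only 2 states but needs Θ(n) expected parallel time. This sketch is the textbook protocol, not the paper's algorithm; the function name is illustrative.

```python
import random

def leader_election(n, seed=0):
    """Classic 2-state leader election under a random scheduler: each step,
    one uniformly random ordered pair interacts; if both are leaders, the
    responder becomes a follower. Returns (leaders remaining, parallel time),
    where parallel time = interactions / n."""
    rng = random.Random(seed)
    leader = [True] * n
    interactions = 0
    while sum(leader) > 1:
        i, j = rng.sample(range(n), 2)  # initiator i, responder j
        if leader[i] and leader[j]:
            leader[j] = False           # leaders eliminate each other pairwise
        interactions += 1
    return sum(leader), interactions / n
```

Running it on a population of a few hundred agents shows the Θ(n) parallel-time behaviour: the last two leaders take a long time to meet, which is exactly the bottleneck the paper's phase-clock machinery avoids.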

38 citations

Proceedings ArticleDOI
01 Jan 2018
TL;DR: A smooth time trade-off is provided by exhibiting an Õ((m + k√m)·n/m)-time algorithm, and a matching conditional lower bound is added, showing that a significantly faster combinatorial algorithm is not possible unless the combinatorial matrix multiplication conjecture fails.
Abstract: Computing the distance between a given pattern of length m and a text of length n is defined as calculating, for every m-substring of the text, the distance between the pattern and that substring. This naturally generalizes the standard notion of exact pattern matching to incorporate a dissimilarity score. For both Hamming and L_1 distance, only relatively slow Õ(n√m) solutions are known for this generalization. This can be overcome by relaxing the question. For Hamming distance, the usual relaxation is to consider the k-bounded variant, where distances exceeding k are reported as ∞, while for L_1 distance asking for a (1 ± ε)-approximation seems more natural. For k-bounded Hamming distance, Amir et al. [J. Algorithms 2004] showed an Õ(n√k)-time algorithm, and Clifford et al. [SODA 2016] designed an Õ((m + k²)·n/m)-time solution. We provide a smooth time trade-off between these bounds by exhibiting an Õ((m + k√m)·n/m)-time algorithm. We complement the trade-off with a matching conditional lower bound, showing that a significantly faster combinatorial algorithm is not possible unless the combinatorial matrix multiplication conjecture fails. We also exhibit a series of reductions that together allow us to achieve essentially the same complexity for k-bounded L_1 distance. Finally, for (1 ± ε)-approximate L_1 distance, the running time of the best previously known algorithm of Lipsky and Porat [Algorithmica 2011] was O(ε⁻²n). We improve this to Õ(ε⁻¹n), thus essentially matching the complexity of the best known algorithm for (1 ± ε)-approximate Hamming distance.
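The k-bounded semantics can be pinned down with a brute-force O(nm) sliding-window computation; this only fixes the problem definition and is nowhere near the paper's Õ((m + k√m)·n/m) bound. The function name is illustrative.

```python
def k_bounded_hamming(text, pattern, k):
    """For each m-length window of the text, report its Hamming distance to
    the pattern, or infinity once the mismatch count exceeds k (the k-bounded
    relaxation from the abstract). Definitional O(nm) version."""
    m = len(pattern)
    out = []
    for i in range(len(text) - m + 1):
        d = 0
        for a, b in zip(text[i:i + m], pattern):
            if a != b:
                d += 1
                if d > k:          # distance exceeds k: report infinity
                    d = float("inf")
                    break
        out.append(d)
    return out
```

For example, `k_bounded_hamming("abcabd", "abd", 1)` yields `[1, inf, inf, 0]`: the first window differs in one position, the middle windows exceed the bound k = 1, and the last window matches exactly.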

33 citations

Proceedings ArticleDOI
23 Jul 2018
TL;DR: For the first time, it is shown that solutions to a number of fundamental tasks in distributed computing can be obtained quickly using finite-state protocols, including leader election, aggregate and threshold functions on the population, such as majority computation, and plurality consensus.
Abstract: A population protocol describes a set of state change rules for a population of n indistinguishable finite-state agents (automata), undergoing random pairwise interactions. Within this very basic framework, it is possible to resolve a number of fundamental tasks in distributed computing, including: leader election, aggregate and threshold functions on the population, such as majority computation, and plurality consensus. For the first time, we show that solutions to all of these problems can be obtained quickly using finite-state protocols. For any input, the designed finite-state protocols converge under a fair random scheduler to an output which is correct with high probability in expected O(polylog n) parallel time. We also show protocols which always reach a valid solution, in expected parallel time O(n^ε), where the number of states depends only on the choice of ε > 0. The stated time bounds hold for any semi-linear predicate computable in the population protocol framework. The key ingredient of our result is the decentralized design of a hierarchy of phase-clocks, which tick at different rates, with the rates of adjacent clocks separated by a factor of Θ(log n). The construction of this clock hierarchy relies on a new protocol composition technique, combined with an adapted analysis of a self-organizing process of oscillatory dynamics. This clock hierarchy is used to provide nested synchronization primitives, which allow us to view the population in a global manner and design protocols using a high-level imperative programming language with a (limited) capacity for loops and branching instructions.
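A flavour of the constant-state building blocks in this line of work is the classic 3-state approximate-majority protocol (in the style of Angluin, Aspnes, and Eisenstat): when opposite opinions meet, the responder goes blank, and a blank agent copies any opinion it meets. It converges to the initial majority with high probability in O(log n) parallel time. This is a standard illustrative protocol, not the paper's construction; the function name is an assumption.

```python
import random

def approximate_majority(a_count, b_count, seed=0):
    """Simulate the 3-state approximate-majority protocol under a random
    scheduler. States: 'A', 'B', or blank '_'. Rules (responder updates):
      A meets B  -> responder becomes '_'
      opinion meets '_' -> the blank agent adopts the opinion
    Returns (winning opinion, parallel time = interactions / n)."""
    rng = random.Random(seed)
    pop = ["A"] * a_count + ["B"] * b_count
    n = len(pop)
    interactions = 0
    while set(pop) not in ({"A"}, {"B"}):   # run until everyone agrees
        i, j = rng.sample(range(n), 2)      # initiator i, responder j
        if pop[i] != "_" and pop[j] != "_" and pop[i] != pop[j]:
            pop[j] = "_"                    # clashing opinions: responder blanks
        elif pop[i] != "_" and pop[j] == "_":
            pop[j] = pop[i]                 # blank responder adopts opinion
        elif pop[i] == "_" and pop[j] != "_":
            pop[i] = pop[j]                 # blank initiator adopts opinion
        interactions += 1
    return pop[0], interactions / n
```

With a clear initial gap (say 90 vs 10), the simulation almost always settles on the majority opinion within a few dozen parallel-time units; the paper's phase-clock hierarchy composes primitives of this kind into protocols for general semi-linear predicates.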

32 citations

Proceedings ArticleDOI
25 Jul 2017
TL;DR: In this paper, the complexity of locally checkable labelling (LCL) problems in 2-dimensional grids was studied, where the complexity classes are the same as in the 1-dimensional case: O(1), Θ(log* n), and Θ(n).
Abstract: LCLs or locally checkable labelling problems (e.g. maximal independent set, maximal matching, and vertex colouring) in the LOCAL model of computation are very well understood in cycles (toroidal 1-dimensional grids): every problem has a complexity of O(1), Θ(log* n), or Θ(n), and the design of optimal algorithms can be fully automated. This work develops the complexity theory of LCL problems for toroidal 2-dimensional grids. The complexity classes are the same as in the 1-dimensional case: O(1), Θ(log* n), and Θ(n). However, given an LCL problem it is undecidable whether its complexity is Θ(log* n) or Θ(n) in 2-dimensional grids. Nevertheless, if we correctly guess that the complexity of a problem is Θ(log* n), we can completely automate the design of optimal algorithms. For any such problem we can find an algorithm in the normal form A' ∘ S_k, where A' is a finite function, S_k is an algorithm for finding a maximal independent set in the k-th power of the grid, and k is a constant. Finally, partially with the help of automated design tools, we classify the complexity of several concrete LCL problems related to colourings and orientations.

31 citations


Cited by
Book ChapterDOI
Eric V. Denardo1
01 Jan 2011
TL;DR: This chapter sees how the simplex method simplifies when it is applied to a class of optimization problems that are known as “network flow models” and finds an optimal solution that is integer-valued.
Abstract: In this chapter, you will see how the simplex method simplifies when it is applied to a class of optimization problems that are known as “network flow models.” You will also see that if a network flow model has “integer-valued data,” the simplex method finds an optimal solution that is integer-valued.

828 citations

Journal ArticleDOI
TL;DR: Stephen J. Hartley first provides a complete explanation of the features of Java necessary to write concurrent programs, including topics such as exception handling, interfaces, and packages, and takes a different approach than most Java references.
Abstract: Stephen J. Hartley Oxford University Press, New York, 1998, 260 pp. ISBN 0-19-511315-2, $45.00 Concurrent Programming is a thorough treatment of Java multi-threaded programming for both a stand-alone and distributed environment. Designed mostly for students in concurrent or parallel programming classes, the text is also an excellent reference for the practicing professional developing multi-threaded programs or applets. Hartley first provides a complete explanation of the features of Java necessary to write concurrent programs, including topics such as exception handling, interfaces, and packages. He then gives the reader a solid background to write multi-threaded programs and also presents the problems introduced when writing concurrent programs—namely race conditions, mutual exclusion, and deadlock. Hartley also provides several software solutions that do not require the use of common process and thread mechanisms. Once the groundwork is laid for writing concurrent programs, Hartley then takes a different approach than most Java references. Rather than presenting how Java handles mutual exclusion with the synchronized keyword (although it is covered later), he first looks at semaphore-based solutions to classic concurrent problems such as bounded-buffer, readers-writers, and the dining philosophers. Hartley also uses the same approach to develop Java classes for monitors and message passing. This unique approach to introducing concurrency allows the readers to both understand how Java threads are synchronized and how the basic synchronization mechanism can be used to construct more abstract tools such as semaphores. If there is a shortcoming with the text it is with the lack of sufficient coverage of remote method invocation (RMI), although there is a section covering RMI. This is quite understandable as RMI is a fairly recent phenomenon with the Java community. 
Also, the classes that Hartley provides could easily implement RMI rather than sockets to handle communication. The strengths of the book include its ease in reading, several examples at the end of chapters, a package similar to Xtango that provides algorithm animation, and a supportive web site by the author (see www.mcs.drexel.edu/~shartley/ConcProgJava/index.html ) including compressed source code. As Java becomes more dominant on the server side of multi-tier applications, writing thread-safe concurrent applications becomes even more important. Concurrent Programming is a strong step towards teaching students and professionals such skills. Greg Gagne, Westminster College of Salt Lake City Salt Lake City, Utah
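The semaphore-based solution to the bounded-buffer problem that the review highlights can be sketched compactly (in Python rather than the book's Java; the class and method names are illustrative): one semaphore counts free slots, one counts occupied slots, and a lock protects the underlying list.

```python
import threading

class BoundedBuffer:
    """Semaphore-based bounded buffer, the classic exercise the review refers
    to. `empty` counts free slots, `full` counts occupied ones, and a lock
    guards the shared list to avoid race conditions."""
    def __init__(self, capacity):
        self.items = []
        self.lock = threading.Lock()
        self.empty = threading.Semaphore(capacity)  # free slots
        self.full = threading.Semaphore(0)          # occupied slots

    def put(self, item):
        self.empty.acquire()        # block while the buffer is full
        with self.lock:
            self.items.append(item)
        self.full.release()         # signal one more occupied slot

    def get(self):
        self.full.acquire()         # block while the buffer is empty
        with self.lock:
            item = self.items.pop(0)
        self.empty.release()        # signal one more free slot
        return item
```

Acquiring the counting semaphore before touching the list is what rules out both overfilling and underflow, mirroring the order-of-operations discipline the book teaches before introducing Java's `synchronized` keyword.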

587 citations

Book
01 Jan 1984
TL;DR: It is argued that the time has come for PIMS even though the approach requires a sharp turn from previous models based on the monetisation of personal data.
Abstract: Population protocols (Angluin et al., PODC 2004) are a formal model of sensor networks consisting of identical mobile devices. When two devices come into the range of each other, they interact and change their states. Computations are infinite sequences of pairwise interactions where the interacting processes are picked by a fair scheduler. A population protocol is well specified if for every initial configuration C of devices and for every fair computation starting at C, all devices eventually agree on a consensus value that only depends on C. If a protocol is well specified, then it is said to compute the predicate that assigns to each initial configuration its consensus value. The main two verification problems for population protocols are: Is a given protocol well specified? Does a given protocol compute a given predicate? While the class of predicates computable by population protocols was already established in 2007 (Angluin et al., Distributed Computing), the decidability of the verification problems remained open until 2015, when my colleagues and I finally managed to prove it (Esparza et al., CONCUR 2015, improved version to appear in Acta Informatica). In the talk I report on our results and discuss some new developments.
Personal Information Management Systems and Knowledge Integration. David Montoya (Engie Ineo & ENS Cachan & Inria), Thomas Pellissier Tanon (ENS Lyon), and Serge Abiteboul (Inria & ENS Cachan). Abstract: Personal data is constantly collected, either voluntarily by users in emails, social media interactions, multimedia objects, calendar items, contacts, etc., or passively by various applications such as the GPS of mobile devices, transactions, quantified-self sensors, etc. The processing of personal data is complicated by the fact that such data is typically stored in silos with different terminologies/ontologies, formats, and access protocols. Users are more and more losing control over their data; they are sometimes not even aware of the data collected about them and how it is used. We discuss the new concept of Personal Information Management Systems (PIMS for short), which allows each user to be in a position to manage his/her personal information. Some applications are run directly by the PIMS, and so are under the direct control of the user. Others are in separate systems that are willing to share with the PIMS the data they collect about that particular user. In that latter case, the PIMS is a system for distributed data management. We argue that the time has come for PIMS, even though the approach requires a sharp turn from previous models based on the monetisation of personal data. We consider research issues raised by PIMS, either new ones or ones that acquire a new flavor in a PIMS context. We also present work on the integration of users' data from different sources (such as email messages, calendar, contacts, and location history) into a PIMS. The PIMS we consider is a Knowledge Base System based on Semantic Web standards, notably RDF and schema.org. Some of the knowledge is episodic (typically related to spatio-temporal events) and some is semantic (knowledge that holds irrespective of any such event). Of particular interest is the cross-enrichment of these two kinds of knowledge based on the alignment of concepts, e.g., enrichment between a calendar and a geographical map using the location history. The goal is to enable users, via the PIMS, to query and perform analytics over their personal information within and across different dimensions.

121 citations

Journal ArticleDOI
TL;DR: The challenge of computing in a highly dynamic environment is discussed.
Abstract: The challenge of computing in a highly dynamic environment.

90 citations

01 Jan 2016

80 citations