
Showing papers in "Lecture Notes in Computer Science in 2015"


Book ChapterDOI
TL;DR: A survey of the most important Mizar features that distinguish it from other popular proof checkers is given, and the most important current trends and lines of development that go beyond the state-of-the-art system are described.
Abstract: Mizar is one of the pioneering systems for mathematics formalization, which still has an active user community. The project has been in constant development since 1973, when Andrzej Trybulec designed the fundamentals of a language capable of rigorously encoding mathematical knowledge in a computerized environment which guarantees its full logical correctness. Since then, the system with its feature-rich language devised to approximate mathematics writing has influenced other formalization projects and has given rise to a number of Mizar modes implemented on top of other systems. However, the information about the system as a whole is not readily available to developers of other systems. Various papers describing Mizar features have been rather incremental and focused only on particular newly implemented Mizar aspects. The objective of the current paper is to give a survey of the most important Mizar features that distinguish it from other popular proof checkers. We also go a step further and describe the most important current trends and lines of development that go beyond the state-of-the-art system.

171 citations


Journal Article
TL;DR: In this paper, a carefully optimized implementation of a ring-LWE encryption scheme for 8-bit AVR processors like the ATxmega128 was presented, requiring 590 k, 672 k, and 276 k clock cycles for key generation, encryption, and decryption, respectively.
Abstract: Public-key cryptography based on the “ring-variant” of the Learning with Errors (ring-LWE) problem is both efficient and believed to remain secure in a post-quantum world. In this paper, we introduce a carefully-optimized implementation of a ring-LWE encryption scheme for 8-bit AVR processors like the ATxmega128. Our research contributions include several optimizations for the Number Theoretic Transform (NTT) used for polynomial multiplication. More concretely, we describe the Move-and-Add (MA) and the Shift-Add-Multiply-Subtract-Subtract (SAMS2) technique to speed up the performance-critical multiplication and modular reduction of coefficients, respectively. We take advantage of incompletely-reduced intermediate results to minimize the total number of reduction operations and use a special coefficient-storage method to decrease the RAM footprint of NTT multiplications. In addition, we propose a byte-wise scanning strategy to improve the performance of a discrete Gaussian sampler based on the Knuth-Yao random walk algorithm. For medium-term security, our ring-LWE implementation needs 590 k, 672 k, and 276 k clock cycles for key-generation, encryption, and decryption, respectively. On the other hand, for long-term security, the execution time of key-generation, encryption, and decryption amount to 2.2 M, 2.6 M, and 686 k cycles, respectively. These results set new speed records for ring-LWE encryption on an 8-bit processor and outperform related RSA and ECC implementations by an order of magnitude.
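
An NTT of this kind is, at bottom, a DFT over a prime field. The following Python sketch shows the transform-multiply-invert pattern that the optimizations above accelerate; the toy parameters (n = 8, q = 257, omega = 64) are our own illustrative choices, the transform is a plain O(n^2) loop rather than a fast radix-2 NTT, and we use cyclic convolution instead of the negacyclic one ring-LWE actually needs:

    # Toy NTT-based polynomial multiplication mod (x^n - 1, q); illustrative
    # only, not the paper's AVR-optimized routines.
    n, q, omega = 8, 257, 64   # omega = 3^((q-1)/n) mod q has order n

    def ntt(a, w):
        # Plain O(n^2) evaluation at powers of w; fast versions use radix-2.
        return [sum(a[j] * pow(w, i * j, q) for j in range(n)) % q
                for i in range(n)]

    def poly_mul(a, b):
        A, B = ntt(a, omega), ntt(b, omega)
        C = [(x * y) % q for x, y in zip(A, B)]          # pointwise product
        n_inv, w_inv = pow(n, -1, q), pow(omega, -1, q)
        return [(c * n_inv) % q for c in ntt(C, w_inv)]  # inverse transform

    print(poly_mul([1, 2, 3, 4, 0, 0, 0, 0], [5, 6, 7, 0, 0, 0, 0, 0]))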

56 citations


Book ChapterDOI
TL;DR: This study shows that websites can discover the capacity of users’ batteries by exploiting the high precision readouts provided by Firefox on Linux, and highlights privacy risks associated with the HTML5 Battery Status API.
Abstract: We highlight privacy risks associated with the HTML5 Battery Status API. We put special focus on its implementation in the Firefox browser. Our study shows that websites can discover the capacity of users' batteries by exploiting the high precision readouts provided by Firefox on Linux. The capacity of the battery, as well as its level, expose a fingerprintable surface that can be used to track web users in short time intervals. Our analysis shows that the risk is much higher for old or used batteries with reduced capacities, as the battery capacity may potentially serve as a tracking identifier. The fingerprintable surface of the API could be drastically reduced without any loss in the API's functionality by reducing the precision of the readings. We propose minor modifications to Battery Status API and its implementation in the Firefox browser to address the privacy issues presented in the study. Our bug report for Firefox was accepted and a fix is deployed.
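
The proposed fix is essentially a precision cut. A back-of-the-envelope Python illustration (the granularities are our own assumptions, not Firefox's actual internals) of how rounding shrinks the fingerprintable surface:

    # Distinguishable battery-level readouts before/after rounding (toy numbers).
    fine = len({round(i / 1e6, 6) for i in range(10**6 + 1)})    # high-precision readout
    coarse = len({round(i / 1e6, 2) for i in range(10**6 + 1)})  # two-decimal readout
    print(fine, coarse)   # ~1,000,001 values vs. 101 -- far less identifying power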

54 citations


Journal Article
TL;DR: In this article, the authors revisited the stochastic shortest path problem and showed how recent results allow one to improve over the classical solutions, and presented algorithms to synthesize strategies with multiple guarantees on the distribution of the length of paths reaching a given target, rather than simply minimizing its expected value.
Abstract: In this invited contribution, we revisit the stochastic shortest path problem, and show how recent results allow one to improve over the classical solutions: we present algorithms to synthesize strategies with multiple guarantees on the distribution of the length of paths reaching a given target, rather than simply minimizing its expected value. The concepts and algorithms that we propose here are applications of more general results that have been obtained recently for Markov decision processes and that are described in a series of recent papers.
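
For orientation, the classical baseline the authors improve upon minimizes only the expected path length. A minimal value-iteration sketch in Python (the toy MDP is our own; the paper's richer guarantees on the full length distribution are not implemented here):

    # Classical stochastic shortest path: value iteration on expected steps-to-target.
    # trans[s][a] = list of (probability, successor); 'goal' is absorbing.
    trans = {
        's0': {'safe':  [(1.0, 's1')],
               'risky': [(0.5, 'goal'), (0.5, 's0')]},
        's1': {'go':    [(1.0, 'goal')]},
    }
    V = {'s0': 0.0, 's1': 0.0, 'goal': 0.0}
    for _ in range(100):   # V(s) = min_a sum_{s'} p(s'|s,a) * (1 + V(s'))
        for s, acts in trans.items():
            V[s] = min(sum(p * (1 + V[t]) for p, t in succ)
                       for succ in acts.values())
    print(V)   # expected path lengths: s1 -> 1.0, s0 -> 2.0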

47 citations


Journal Article
TL;DR: In this paper, the Even-Mansour construction is shown to offer security similar to that of an ideal block cipher with the same block and key size when a small number of plaintexts are encrypted under multiple independent keys.
Abstract: At ASIACRYPT 1991, Even and Mansour introduced a block cipher construction based on a single permutation. Their construction has since been lauded for its simplicity, yet also criticized for not providing the same security as other block ciphers against generic attacks. In this paper, we prove that if a small number of plaintexts are encrypted under multiple independent keys, the Even-Mansour construction surprisingly offers similar security as an ideal block cipher with the same block and key size. Note that this multi-key setting is of high practical relevance, as real-world implementations often allow frequent rekeying. We hope that the results in this paper will further encourage the use of the Even-Mansour construction, especially when a secure and efficient implementation of a key schedule would result in significant overhead.
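
The construction itself fits in a few lines: a fixed public permutation wrapped in two key XORs. A toy Python sketch (the 8-bit block and seeded random permutation are our own stand-ins):

    import random

    # Even-Mansour: E(x) = k2 XOR P(x XOR k1) with P a fixed *public* permutation.
    N = 256                                    # toy 8-bit block
    P = random.Random(42).sample(range(N), N)  # public permutation
    P_inv = [0] * N
    for i, y in enumerate(P):
        P_inv[y] = i

    def encrypt(k1, k2, x):
        return k2 ^ P[x ^ k1]

    def decrypt(k1, k2, y):
        return P_inv[y ^ k2] ^ k1

    k1, k2 = 0x3A, 0xC5
    assert all(decrypt(k1, k2, encrypt(k1, k2, x)) == x for x in range(N))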

43 citations


Journal Article
TL;DR: In this article, the authors consider indistinguishability bounds for two types of keyed sponges, inner- and outer-keyed, and derive improved bounds in the classical indistinguishability setting and in an extended setting where the adversary targets multiple instances at the same time.
Abstract: Sponge functions were originally proposed for hashing, but find increasingly more applications in keyed constructions, such as encryption and authentication. Depending on how the key is used we see two main types of keyed sponges in practice: inner- and outer-keyed. Earlier security bounds, mostly due to the well-known sponge indifferentiability result, guarantee a security level of \(c/2\) bits with \(c\) the capacity. We reconsider these two keyed sponge versions and derive improved bounds in the classical indistinguishability setting as well as in an extended setting where the adversary targets multiple instances at the same time. For cryptographically significant parameter values, the expected workload for an attacker to be successful in an n-target attack against the outer-keyed sponge is the minimum over \(2^k/n\) and \(2^c/\mu\) with \(k\) the key length and \(\mu\) the total maximum multiplicity. For the inner-keyed sponge this simplifies to \(2^k/\mu\) with maximum security if \(k=c\). The multiplicity is a characteristic of the data available to the attacker. It is at most twice the data complexity, but will be much smaller in practically relevant attack scenarios. We take a modular proof approach, and our indistinguishability bounds are the sum of a bound in the PRP model and a bound on the PRP-security of Even-Mansour type block ciphers in the ideal permutation model, where we obtain the latter result by using Patarin's H-coefficient technique.
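
The quoted bounds translate directly into an attacker-workload estimate. A small Python sketch of the stated formulas (parameter names follow the abstract; the concrete numbers are our own examples):

    # Expected attack workload per the bounds above: k key bits, c capacity,
    # n targets, mu the total maximum multiplicity (illustrative numbers).
    def outer_keyed(k, c, n, mu):
        return min(2**k // n, 2**c // mu)

    def inner_keyed(k, mu):
        return 2**k // mu      # maximum security when k = c

    k, c, n, mu = 128, 256, 2**10, 2**20
    assert outer_keyed(k, c, n, mu) == 2**118   # key-guessing term dominates here
    assert inner_keyed(k, mu) == 2**108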

41 citations


Journal Article
TL;DR: In this paper, a cooperation model based on joint intentions introduced by Jennings is formalised within the modelling framework DESIRE for compositional multi-agent systems; operationalisation and reusability of the model are thereby obtained, as DESIRE specifications are executable and easily reusable.
Abstract: A cooperation model based on joint intentions introduced by Jennings is formalised within the modelling framework DESIRE for compositional multi-agent systems. By formalising the model in the DESIRE framework, operationalisation and reusability of the model are obtained, as DESIRE specifications are executable and easily reusable.

41 citations


Book ChapterDOI
TL;DR: A genetic algorithm (GA) to search for plateaued Boolean functions, which represent suitable candidates for the design of stream ciphers due to their good cryptographic properties, is proposed; it outperforms Clark et al.'s simulated annealing algorithm with respect to the ratio of generated plateaued Boolean functions per number of optimization runs.
Abstract: We propose a genetic algorithm (GA) to search for plateaued Boolean functions, which represent suitable candidates for the design of stream ciphers due to their good cryptographic properties. Using the spectral inversion technique introduced by Clark, Jacob, Maitra and Stanica, our GA encodes the chromosome of a candidate solution as a permutation of a three-valued Walsh spectrum. Additionally, we design specialized crossover and mutation operators so that the swapped positions in the offspring chromosomes correspond to different values in the resulting Walsh spectra. Some tests performed on the set of pseudo-Boolean functions of \(n=6\) and \(n=7\) variables show that in the former case our GA outperforms Clark et al.'s simulated annealing algorithm with respect to the ratio of generated plateaued Boolean functions per number of optimization runs.
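
The spectral-inversion idea is easy to sketch: a chromosome is a permutation of a fixed three-valued spectrum, and fitness measures how far its inverse Walsh-Hadamard transform is from a genuine +/-1 (i.e., Boolean) function. A Python illustration (random search stands in for the GA's crossover and mutation operators; the spectrum layout is our own assumption):

    import random

    n = 6
    N = 2 ** n

    def fwht(a):
        # Fast Walsh-Hadamard transform (self-inverse up to a factor N).
        a = list(a)
        h = 1
        while h < N:
            for i in range(0, N, 2 * h):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2
        return a

    # Three-valued plateaued spectrum {0, +16, -16}: Parseval requires the
    # squares to sum to N^2, hence 16 nonzero entries of magnitude 16.
    spectrum = [16] * 8 + [-16] * 8 + [0] * (N - 16)

    def fitness(chrom):
        f = [x / N for x in fwht(chrom)]          # inverse transform
        return sum(abs(abs(v) - 1) for v in f)    # 0.0 <=> genuine Boolean function

    best = min((random.sample(spectrum, N) for _ in range(500)), key=fitness)
    print(fitness(best))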

33 citations


Journal Article
TL;DR: In this paper, the authors explore a security versus efficiency tradeoff and provide an improved and tweaked inner product masking, which is shown to be less secure than the original inner product masking but more secure than Boolean masking.
Abstract: Masking is a popular countermeasure against side channel attacks. Many practical works use Boolean masking because of its simplicity, ease of implementation and comparably low performance overhead. Some recent works have explored masking schemes with higher algebraic complexity and have shown that they provide more security than Boolean masking at the cost of higher overheads. In particular, masking based on the inner product was shown to be practical, albeit not efficient, for a small security parameter, and at the same time provably secure in the domain of leakage resilient cryptography for a large security parameter. In this work we explore a security versus efficiency tradeoff and provide an improved and tweaked inner product masking. Our practical security evaluation shows that it is less secure than the original inner product masking but more secure than Boolean masking. Our performance evaluation shows that our scheme is only four times slower than Boolean masking and more than two times faster than the original inner product masking. Besides the practical security analysis we prove the security of our scheme and its masked operations in the threshold probing model.
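
The core of inner-product masking is that the secret equals the inner product <L, R> of a public vector L and a share vector R. A minimal Python sketch over GF(2^8) (the vector L is our own illustrative choice; this is not the authors' tweaked scheme or its side-channel-hardened masked operations):

    import os

    def gf_mul(a, b):
        # Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1.
        r = 0
        for _ in range(8):
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
        return r

    L = [1, 0x37, 0xA9]   # public vector; L[0] = 1 keeps masking simple

    def mask(secret):
        R = [0] + [os.urandom(1)[0] for _ in L[1:]]   # random shares R[1:]
        R[0] = secret ^ gf_mul(L[1], R[1]) ^ gf_mul(L[2], R[2])
        return R                                      # now <L, R> = secret

    def unmask(R):
        acc = 0
        for l, r in zip(L, R):
            acc ^= gf_mul(l, r)
        return acc

    assert unmask(mask(0x5A)) == 0x5A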

31 citations


Book ChapterDOI
TL;DR: A software development life cycle that helps developers to engineer adaptive behavior and to address the issues posed by the diversity of self-* properties is proposed, and a pattern catalog for the development of collective autonomic systems is presented to ease the engineering process.
Abstract: Collective autonomic systems are adaptive, open-ended, highly parallel, interactive and distributed software systems. Their key features are so-called self-* properties, such as self-awareness, self-adaptation, self-expression, self-healing and self-management. We propose a software development life cycle that helps developers to engineer adaptive behavior and to address the issues posed by the diversity of self-* properties. The life cycle is characterized by three feedback loops: verification at design time, monitoring and awareness at runtime, and feedback from runtime data back to the design phases. We illustrate how the life cycle can be instantiated using specific languages, methods and tools developed within the ASCENS project. In addition, a pattern catalog for the development of collective autonomic systems is presented to ease the engineering process.

31 citations


Journal Article
TL;DR: In this paper, interpolation attacks on LowMC are mounted, showing that a practically significant fraction of \(2^{-38}\) of its 80-bit key instances can be broken 2 times faster than exhaustive search.
Abstract: LowMC is a collection of block cipher families introduced at Eurocrypt 2015 by Albrecht et al. Its design is optimized for instantiations of multi-party computation, fully homomorphic encryption, and zero-knowledge proofs. A unique feature of LowMC is that its internal affine layers are chosen at random, and thus each block cipher family contains a huge number of instances. The Eurocrypt paper proposed two specific block cipher families of LowMC, having 80-bit and 128-bit keys. In this paper, we mount interpolation attacks (algebraic attacks introduced by Jakobsen and Knudsen) on LowMC, and show that a practically significant fraction of \(2^{-38}\) of its 80-bit key instances could be broken 2 times faster than exhaustive search. Moreover, essentially all instances that are claimed to provide 128-bit security could be broken about 1000 times faster. In order to obtain these results, we had to develop novel techniques and optimize the original interpolation attack in new ways. While some of our new techniques exploit specific internal properties of LowMC, others are more generic and could be applied, in principle, to any block cipher.
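
The principle behind an interpolation attack is compact enough to demonstrate on a made-up toy cipher (not LowMC): if the ciphertext is a low-degree polynomial in the plaintext, deg + 1 known pairs determine it completely, after which the attacker encrypts without the key. A Python sketch under those assumptions:

    # Toy interpolation attack: E_k(x) = (x + k)^3 mod p is degree 3 in x,
    # so four known plaintext/ciphertext pairs reveal the whole polynomial.
    p, key = 10007, 1234
    E = lambda x: pow(x + key, 3, p)

    xs = [1, 2, 3, 4]
    ys = [E(x) for x in xs]

    def lagrange(xs, ys, x):
        # Evaluate the unique interpolating polynomial at x, mod p.
        total = 0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            num = den = 1
            for j, xj in enumerate(xs):
                if i != j:
                    num = num * (x - xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, -1, p)) % p
        return total

    assert lagrange(xs, ys, 999) == E(999)   # "encryption" without the key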

Journal Article
TL;DR: In this article, it was shown that the Ishai-Sahai-Wagner private circuit construction is closely related to threshold implementations and the Trichina gate, and the implications of this observation are manifold.
Abstract: In this paper we investigate relations between several masking schemes. We show that the Ishai–Sahai–Wagner private circuits construction is closely related to Threshold Implementations and the Trichina gate. The implications of this observation are manifold. We point out a higher-order weakness in higher-order Threshold Implementations, suggest a mitigation and provide new sharings that use a lower number of input shares.
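
For concreteness, here is the textbook ISW AND gadget on XOR-shared bits, the construction the comparison above starts from (Python sketch; this is the generic scheme, not the paper's new sharings):

    import random

    def xor(bits):
        acc = 0
        for b in bits:
            acc ^= b
        return acc

    def share(bit, n):
        s = [random.getrandbits(1) for _ in range(n - 1)]
        return s + [bit ^ xor(s)]

    def isw_and(a, b):
        # ISW multiplication: for i<j draw r_ij, set r_ji = (r_ij ^ a_i b_j) ^ a_j b_i.
        n = len(a)
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = random.getrandbits(1)
                r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
        return [(a[i] & b[i]) ^ xor(r[i][j] for j in range(n) if j != i)
                for i in range(n)]

    for x in (0, 1):
        for y in (0, 1):
            assert xor(isw_and(share(x, 3), share(y, 3))) == x & y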

Journal Article
TL;DR: The proposed delineation algorithm is suited for any delineation problem and employs a set of B-COSFIRE filters selective for lines and line-endings of different thickness, and is demonstrated to be highly effective and efficient.

BookDOI
TL;DR: The 36th International Conference on Application and Theory of Petri Nets and Concurrency (Petri Nets 2015) as mentioned in this paper was organized by the Departement d'Informatique (Science Faculty) of the Universite Libre de Bruxelles (ULB) and took place in Brussels, Belgium, June 21 -- 26, 2015.
Abstract: This volume constitutes the proceedings of the 36th International Conference on Application and Theory of Petri Nets and Concurrency (Petri Nets 2015). This series of conferences serves as an annual meeting place to discuss progress in the field of Petri nets and related models of concurrency. These conferences provide a forum for researchers to present and discuss both applications and theoretical developments in this area. Novel tools and substantial enhancements to existing tools can also be presented. This year, the satellite program of the conference comprised four workshops, two Petri net courses, two advanced tutorials, and a model checking contest. Petri Nets 2015 was co-located with the Application of Concurrency to System Design Conference (ACSD 2015). Both were organized by the Departement d'Informatique (Science Faculty) of the Universite Libre de Bruxelles (ULB) and took place in Brussels, Belgium, June 21 -- 26, 2015. We would like to express our deepest thanks to the Organizing Committee chaired by Gilles Geeraerts for the time and effort invested in the local organization of the conference. This year, 30 regular papers and 4 tool papers were submitted to Petri Nets 2015. The authors of the submitted papers represented 21 different countries. We thank all the authors. Each paper was reviewed by at least three referees. The Program Committee (PC) meeting took place electronically, using the EasyChair conference system for the paper selection process. The PC selected 12 regular papers and 2 tool papers for presentation. After the conference, some authors were invited to submit an extended version of their contribution for consideration in a special issue of the Fundamenta Informaticae journal. We thank the PC members and other reviewers for their careful and timely evaluation of the submissions before the meeting, and the fruitful discussions during the electronic meeting. The Springer LNCS team and the EasyChair system provided excellent support in the preparation of this volume. We are also grateful to the invited speakers for their contribution: Marta Kwiatkowska (On quantitative modelling and verification of DNA walker circuits using stochastic Petri nets), Marlon Dumas (Process Mining Reloaded: Event Structures as a Unified Representation of Process Models and Event Logs), Robert Lorenz (Modeling Quantitative Aspects of Concurrent Systems using Weighted Petri Net Transducers).

Book ChapterDOI
TL;DR: By using Augmented Reality technology, a mobile application and high-resolution visualization, the authors provide users with a visual augmentation of their surroundings and a touch interaction technique to display digital contents for cultural heritage promotion, allowing museum visitors to interact with digital contents in an intuitive and exciting manner.
Abstract: In this paper, an interactive installation system for the enjoyment of cultural heritage in a real museum environment is presented. By using Augmented Reality technology, a mobile application and high-resolution visualization, we provide the users with a visual augmentation of their surroundings and a touch interaction technique to display digital contents for cultural heritage promotion, allowing museum visitors to interact with digital contents in an intuitive and exciting manner. The exhibition presented here is the result of previous research on the use of new technologies, e.g. Augmented Reality, for cultural heritage promotion. Descriptions of the hardware system components and software development details are presented, with particular focus on the application implementation. Furthermore, we outline a possible Multimedia AR Installation connected with a semantic network.

BookDOI
TL;DR: This work aims at overcoming the inefficiency of SPARQL endpoints by designing a distributed parallel system architecture that improves their performance through two functionalities: a queuing system to avoid bottlenecks during the execution of SPARQL queries, and an intelligent relaxation of the queries submitted to the endpoint whenever the relaxation itself and the consequently lowered complexity of the query are beneficial for the overall performance of the system.
Abstract: The Web of Data is widely considered as one of the major global repositories populated with countless interconnected and structured data, with these linked datasets continuously and sharply increasing. In this context the so-called SPARQL Protocol and RDF Query Language is commonly used to retrieve and manage stored data by means of SPARQL endpoints, a query processing service especially designed to get access to these databases. Nevertheless, due to the large amount of data tackled by such endpoints and their structural complexity, these services usually suffer from severe performance issues, including inadmissible processing times. This work aims at overcoming this noted inefficiency by designing a distributed parallel system architecture that improves the performance of SPARQL endpoints by incorporating two functionalities: (1) a queuing system to avoid bottlenecks during the execution of SPARQL queries; and (2) an intelligent relaxation of the queries submitted to the endpoint at hand whenever the relaxation itself and the consequently lowered complexity of the query are beneficial for the overall performance of the system. To this end the system relies on a two-fold optimization criterion: the minimization of the query running time, as predicted by a supervised learning model; and the maximization of the quality of the results of the query as quantified by a measure of similarity. These two conflicting optimization criteria are efficiently balanced by two bi-objective heuristic algorithms sequentially executed over groups of SPARQL queries. The approach is validated on a prototype through several experiments that evince the applicability of the proposed scheme.

Book ChapterDOI
TL;DR: In this paper, the authors provide a basis theorem for common cancellation meadows of characteristic zero, i.e., fields expanded with a total multiplicative inverse function that admit a certain cancellation law.
Abstract: Common meadows are fields expanded with a total multiplicative inverse function. Division by zero produces an additional value denoted \(\mathbf{a}\) that propagates through all operations of the meadow signature (this additional value can be interpreted as an error element). We provide a basis theorem for so-called common cancellation meadows of characteristic zero, that is, common meadows of characteristic zero that admit a certain cancellation law.
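
The error-element semantics is small enough to execute. A toy Python model of the signature over the rationals (our own illustration; 'a' plays the role of the propagating error value):

    from fractions import Fraction

    A = 'a'   # the additional error element

    def add(x, y):
        return A if A in (x, y) else x + y

    def mul(x, y):
        return A if A in (x, y) else x * y

    def inv(x):
        return A if x == A or x == 0 else 1 / Fraction(x)

    print(inv(0))                    # a   (division by zero is total)
    print(add(Fraction(3), inv(0)))  # a   (the error propagates)
    print(mul(2, inv(4)))            # 1/2 (ordinary field behaviour otherwise)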

Journal Article
TL;DR: Projected model counting, introduced in this paper, counts the assignments to a subset of priority variables that can be extended to a full satisfying assignment; it arises when some parts of the model are irrelevant to the counts, in particular when additional variables are required to encode the problem in SAT.
Abstract: Model counting is the task of computing the number of assignments to variables V that satisfy a given propositional theory F. The model counting problem is denoted as #SAT. Model counting is an essential tool in probabilistic reasoning. In this paper, we introduce the problem of model counting projected on a subset of original variables that we call priority variables P ⊆ V. The task is to compute the number of assignments to P such that there exists an extension to non-priority variables V\P that satisfies F. We denote this as #∃SAT. Projected model counting arises when some parts of the model are irrelevant to the counts, in particular when we require additional variables to model the problem we are counting in SAT. We discuss three different approaches to #∃SAT (two of which are novel), and compare their performance on different benchmark problems.
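
The definition can be pinned down with a brute-force reference implementation (Python; exponential, purely to make the semantics concrete, whereas the paper's three approaches are far more sophisticated):

    from itertools import product

    def count_projected(F, variables, priority):
        # #∃SAT: count assignments to the priority variables P that have SOME
        # extension to the remaining variables satisfying F.
        rest = [v for v in variables if v not in priority]
        count = 0
        for pv in product([False, True], repeat=len(priority)):
            base = dict(zip(priority, pv))
            if any(F({**base, **dict(zip(rest, rv))})
                   for rv in product([False, True], repeat=len(rest))):
                count += 1
        return count

    # Example: F = (x or y) and (y == z), with priority variables P = {x, y}.
    F = lambda m: (m['x'] or m['y']) and (m['y'] == m['z'])
    print(count_projected(F, ['x', 'y', 'z'], ['x', 'y']))   # 3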

Book ChapterDOI
TL;DR: After defining the syntax and (possible worlds) semantics of some higher-order modal logics, it is shown that they can be embedded into classical higher-order logic by systematically lifting the types of propositions, making them depend on a new atomic type for possible worlds.
Abstract: These are the lecture notes of a tutorial on higher-order modal logics held at the 11th Reasoning Web Summer School. After defining the syntax and (possible worlds) semantics of some higher-order modal logics, we show that they can be embedded into classical higher-order logic by systematically lifting the types of propositions, making them depend on a new atomic type for possible worlds. This approach allows several well-established automated and interactive reasoning tools for classical higher-order logic to be applied also to modal higher-order logic problems. Moreover, meta-reasoning about the embedded modal logics also becomes possible. Finally, we illustrate how our approach can be useful for reasoning with web logics and expressive ontologies, and we also sketch a possible solution for handling inconsistent data.
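
The lifting trick is easy to see in miniature: a lifted proposition is a function from worlds to booleans, and the box/diamond operators quantify over an accessibility relation. A Python sketch (the worlds and relation form our own toy Kripke model, standing in for the HOL types of the embedding):

    # Lifted propositions: World -> bool; modal operators quantify over R.
    worlds = {'w1', 'w2', 'w3'}
    R = {('w1', 'w2'), ('w1', 'w3'), ('w2', 'w2')}   # accessibility relation

    def box(p):    # "necessarily p": p holds in every accessible world
        return lambda w: all(p(v) for (u, v) in R if u == w)

    def dia(p):    # "possibly p": p holds in some accessible world
        return lambda w: any(p(v) for (u, v) in R if u == w)

    rainy = lambda w: w == 'w2'     # an atomic proposition, lifted to worlds
    print(box(rainy)('w1'))   # False: w3 is accessible from w1 but not rainy
    print(dia(rainy)('w1'))   # True:  w2 is accessible from w1 and rainy
    print(box(rainy)('w2'))   # True:  the only successor of w2 is w2 itself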

Book ChapterDOI
TL;DR: A model is proposed, formalized in the Maude rewriting logic system, that allows experimenting with and reasoning about designs of open distributed systems consisting of multiple cyber-physical agents, specifically, where a coherent global view is unattainable and timely consensus is impossible.
Abstract: We are interested in principles for designing and building open distributed systems consisting of multiple cyber-physical agents, specifically, where a coherent global view is unattainable and timely consensus is impossible. Such agents attempt to contribute to a system goal by making local decisions to sense and effect their environment based on local information. In this paper we propose a model, formalized in the Maude rewriting logic system, that allows experimenting with and reasoning about designs of such systems. Features of the model include communication via sharing of partially ordered knowledge, making explicit the physical state as well as the cyber perception of this state, and the use of a notion of soft constraints developed by Martin Wirsing and his team to specify agent behavior. The paper begins with a discussion of desiderata for such models and concludes with a small case study to illustrate the use of the modeling framework.

Book ChapterDOI
TL;DR: The arity hierarchy is shown to be strict by relating the question to the study of arity hierarchies in fixed-point logics.
Abstract: We study the expressive power of fragments of inclusion logic under the so-called lax team semantics. The fragments are defined either by restricting the number of universal quantifiers or the arity of inclusion atoms in formulae. In the case of universal quantifiers, the corresponding hierarchy collapses at the first level. The arity hierarchy is shown to be strict by relating the question to the study of arity hierarchies in fixed-point logics.

Journal Article
TL;DR: The best known algorithms for the Fréchet distance have quadratic time complexity, which has been shown to be optimal assuming the Strong Exponential Time Hypothesis (SETH) [Bringmann, FOCS'14], as discussed by the authors.
Abstract: The Fréchet distance is a well studied and very popular measure of similarity of two curves. The best known algorithms have quadratic time complexity, which has recently been shown to be optimal assuming the Strong Exponential Time Hypothesis (SETH) [Bringmann, FOCS'14]. To overcome the worst-case quadratic time barrier, restricted classes of curves have been studied that attempt to capture realistic input curves. The most popular such class are c-packed curves, for which the Fréchet distance has a \((1+\varepsilon)\)-approximation in time \(\mathcal{O}(cn/\varepsilon + cn\log n)\) [Driemel et al., DCG'12]. In dimension \(d \ge 5\) this cannot be improved to \(\mathcal{O}((cn/\varepsilon)^{1-\delta})\) for any \(\delta > 0\) unless SETH fails [Bringmann, FOCS'14]. In this paper, exploiting properties that prevent stronger lower bounds, we present an improved algorithm with time complexity \(\mathcal{O}(cn\log^2(1/\varepsilon)/\varepsilon + cn\log n)\). This improves upon the algorithm by Driemel et al. for any \(\varepsilon \ll 1/\log n\). Moreover, our algorithm's dependence on c, n and \(\varepsilon\) is optimal in high dimensions apart from lower order...
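
For orientation, the standard quadratic-time baseline is short; the sketch below computes the *discrete* Fréchet distance by memoized dynamic programming (Python; the paper's c-packed \((1+\varepsilon)\)-approximation for the continuous distance is considerably more involved):

    from functools import lru_cache
    from math import dist

    def discrete_frechet(P, Q):
        # Classic O(|P||Q|) coupling recursion (memoized).
        @lru_cache(maxsize=None)
        def c(i, j):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                return d
            if i == 0:
                return max(c(0, j - 1), d)
            if j == 0:
                return max(c(i - 1, 0), d)
            return max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)), d)
        return c(len(P) - 1, len(Q) - 1)

    P = [(0, 0), (1, 1), (2, 0)]
    Q = [(0, 1), (1, 2), (2, 1)]
    print(discrete_frechet(P, Q))   # 1.0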

Book ChapterDOI
TL;DR: This chapter presents a disaster recovery scenario that has been used throughout the ASCENS project as a reference to coordinate the study of distributed algorithms for robot ensembles, highlighting its generality as a framework to compare algorithms and methodologies for distributed robotics.
Abstract: This chapter presents a disaster recovery scenario that has been used throughout the ASCENS project as a reference to coordinate the study of distributed algorithms for robot ensembles. We first introduce the main traits and open problems in the design of behaviors for robot ensembles. We then present the scenario, highlighting its generality as a framework to compare algorithms and methodologies for distributed robotics. Subsequently, we summarize the main results of the research conducted in ASCENS that used the scenario. Finally, we describe an example algorithm that solves a selected problem in the scenario. The algorithm demonstrates how awareness at the ensemble level can be obtained without requiring awareness at the individual level.

Journal Article
TL;DR: In this paper, the authors show that the XOR construction achieves optimal \(2^n\) security for all \(k \ge 2\) when the underlying permutations are secret, and, after pointing out a flaw in the proof of Mandal et al., re-establish and generalize the \(2^{2n/3}\) bound for the public permutation setting.
Abstract: A straightforward way of constructing an n-bit pseudorandom function is to XOR two or more pseudorandom permutations: \(p_1\oplus \ldots \oplus p_k\). This XOR construction has gained broad attention over the last two decades. In this work, we revisit the security of this well-established construction. We consider the case where the underlying permutations are considered secret, as well as the case where these permutations are publicly available to the adversary. In the secret permutation setting, we present a simple reduction showing that the XOR construction achieves optimal \(2^n\) security for all \(k\ge 2\), therewith improving a recent result of Cogliati et al. (FSE 2014). Regarding the public permutation setting, Mandal et al. (INDOCRYPT 2010) proved \(2^{2n/3}\) security for the case \(k=2\), but we point out the existence of a non-trivial flaw in the proof. We re-establish and generalize the claimed security bound for general \(k\ge 2\) using a different proof approach.
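
The construction is a one-liner per query. A toy Python sketch on an 8-bit domain (seeded random permutations stand in for secret PRPs):

    import random

    # XOR construction: f(x) = p1(x) XOR p2(x) turns two permutations into a PRF.
    N = 256
    rng = random.Random(1)
    p1 = rng.sample(range(N), N)
    p2 = rng.sample(range(N), N)
    f = [p1[x] ^ p2[x] for x in range(N)]

    # A permutation never repeats outputs; f may -- behaving like a random
    # *function*, which is what the security bounds above capture.
    print(len(set(p1)), len(set(f)))   # 256 vs. typically fewer distinct values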

Book ChapterDOI
TL;DR: The study demonstrates that applying the thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
Abstract: In this paper, we present a novel and effective method for better understanding the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure. We commence from the spectrum of the normalized Laplacian of a network. We show that by defining the normalized Laplacian eigenvalues as the microstate occupation probabilities of a complex system, the recently developed von Neumann entropy can be interpreted as the thermodynamic entropy of the network. Then, we give an expression for the internal energy of a network and derive a formula for the network temperature as the ratio of change of entropy and change in energy. We show how these thermodynamic variables can be computed in terms of node degree statistics for nodes connected by edges. We apply the thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
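
The entropy step is mechanical: treat the normalized Laplacian spectrum, scaled to sum to one, as occupation probabilities and take the entropy. A numpy sketch (the toy graphs are our own; the paper's internal-energy and temperature formulas are not reproduced here):

    import numpy as np

    def von_neumann_entropy(adj):
        deg = adj.sum(axis=1)
        d = np.diag(1.0 / np.sqrt(deg))
        L = np.eye(len(adj)) - d @ adj @ d        # normalized Laplacian
        p = np.linalg.eigvalsh(L / np.trace(L))   # microstate probabilities
        p = p[p > 1e-12]                          # treat 0 * log 0 as 0
        return float(-(p * np.log(p)).sum())

    # Toy "time-varying" network: entropy before/after an abrupt change.
    path = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
    clique = np.ones((4, 4)) - np.eye(4)
    print(von_neumann_entropy(path), von_neumann_entropy(clique))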

Journal Article
TL;DR: The results showed that for most participants, overall heart rate variability was improved after breathing training, and "Breathe with Touch" brought users better satisfaction during the exercise.
Abstract: Breathing techniques have been widely used as an aid in stress-reduction and relaxation exercises. Most breathing assistance systems present breathing guidance in visual or auditory forms. In this study, we explored a tactile interface of a breathing assistance system by using a shape-changing airbag. We hypothesized that it would help users perform the breathing exercise more effectively and enhance their relaxing experience. The feasibility of the tactile interface was evaluated from three aspects: stress reduction, breathing training and interface usability. The results showed that for most participants, overall heart rate variability was improved after breathing training. Moreover, "Breathe with Touch" brought users better satisfaction during the exercise. We discuss these results and future design implications for designing tactile interfaces for breathing guidance.

Book ChapterDOI
TL;DR: It is shown that the axioms proposed by Gabbay and Ciancia are not complete over the semantic interpretation they propose, and a slightly wider class of language models are identified over which they are sound and complete.
Abstract: Gabbay and Ciancia (2011) presented a nominal extension of Kleene algebra as a framework for trace semantics with statically scoped allocation of resources, along with a semantics consisting of nominal languages. They also provided an axiomatization that captures the behavior of the scoping operator and its interaction with the Kleene algebra operators and proved soundness over nominal languages. In this paper, we show that the axioms proposed by Gabbay and Ciancia are not complete over the semantic interpretation they propose. We then identify a slightly wider class of language models over which they are sound and complete.

Journal Article
TL;DR: Topics covered include discrete and continuous optimization, image restoration and inpainting, segmentation, PDE and variational methods, motion, tracking and multiview reconstruction, statistical methods and learning, and medical image analysis.
Abstract: Discrete and continuous optimization. Image restoration and inpainting. Segmentation. PDE and variational methods. Motion, tracking and multiview reconstruction. Statistical methods and learning. Medical image analysis.

Book ChapterDOI
TL;DR: This paper provides the first comparative study of the performance of various BDD/MTBDD packages for symbolic probabilistic model checking, providing experimental results for several well-known probabilistic benchmarks and studying the effect of several optimisations.
Abstract: Symbolic data structures using Binary Decision Diagrams (BDDs) have been successfully used in the last decades to analyse large systems. While various BDD and MTBDD packages have been developed in the community, the CUDD package remains the default choice of most of the symbolic probabilistic model checkers. In this paper, we provide the first comparative study of the performance of various BDD/MTBDD packages for this purpose. We provide experimental results for several well-known probabilistic benchmarks and study the effect of several optimisations. Our experiments show that no BDD package dominates on a single core, but that parallelisation yields significant speedups.

Journal Article
TL;DR: A simple entropy regularization for topic selection is proposed in terms of Additive Regularization of Topic Models (ARTM), a multicriteria approach for combining regularizers.
Abstract: Probabilistic topic modeling of text collections is a powerful tool for statistical text analysis. Determining the optimal number of topics remains a challenging problem in topic modeling. We propose a simple entropy regularization for topic selection in terms of Additive Regularization of Topic Models (ARTM), a multicriteria approach for combining regularizers. The entropy regularization gradually eliminates insignificant and linearly dependent topics. This process converges to the correct number of topics on semi-real data. On real text collections it can be combined with sparsing, smoothing and decorrelation regularizers to produce a sequence of models with different numbers of well-interpretable topics.