
Showing papers presented at "Dagstuhl Seminar Proceedings in 2010"


BookDOI
01 Jan 2010
TL;DR: In this article, the critical cycle theorem of Mader was used to obtain a 1+(k-1)/n approximation for the min cost vertex k-connected subgraph problem in the metric case.
Abstract: We survey approximation algorithms for connectivity problems, describing various techniques. In the talk the following techniques and results are presented. 1) Outconnectivity: It is well known that there are polynomial time algorithms for finding a min cost subgraph that is edge k-outconnected from r [Edmonds] or vertex k-outconnected from r [Frank-Tardos]. We show how to use these to obtain a ratio 2 approximation for the min cost edge k-connectivity problem. 2) The critical cycle theorem of Mader: We state a fundamental theorem of Mader and use it to provide a 1+(k-1)/n ratio approximation for the min cost vertex k-connected subgraph problem in the metric case. We also show results for the min power vertex k-connected problem using this lemma; in particular, the min power case is equivalent to the min cost case with respect to approximation. 3) Laminarity and uncrossing: We use the well known laminarity of a basic feasible solution and present a simple new proof, due to Ravi et al., of Jain's 2 approximation for Steiner network.
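
To make the outconnectivity technique concrete, here is a minimal sketch of the k = 1 case, where both subproblems are min cost arborescences: the union of a cheapest out-arborescence from a root r and a cheapest in-arborescence into r is strongly connected and costs at most twice the optimum. The sketch assumes networkx; forcing the root by deleting r's incoming edges is an implementation shortcut, not part of the survey.

```python
import networkx as nx

def rooted_min_arborescence(G, r):
    # With r's incoming edges removed, any spanning arborescence
    # of H is necessarily rooted at r.
    H = G.copy()
    H.remove_edges_from(list(H.in_edges(r)))
    return nx.minimum_spanning_arborescence(H, attr="weight")

def scss_2_approx(G, r):
    # Each arborescence costs at most OPT (an optimal strongly
    # connected subgraph contains one of each), so the union is
    # a ratio 2 approximation.
    out_arb = rooted_min_arborescence(G, r)
    in_arb = rooted_min_arborescence(G.reverse(copy=True), r)
    H = nx.DiGraph()
    for u, v in out_arb.edges():
        H.add_edge(u, v, weight=G[u][v]["weight"])
    for u, v in in_arb.edges():  # these edges live in the reversed graph
        H.add_edge(v, u, weight=G[v][u]["weight"])
    return H

G = nx.DiGraph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("c", "a", 1),
                           ("a", "c", 5), ("c", "b", 2), ("b", "a", 2)])
assert nx.is_strongly_connected(scss_2_approx(G, "a"))
```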

121 citations


Proceedings Article
01 Jun 2010
TL;DR: In this article, the first strongly polynomial time algorithm for computing an equilibrium for the linear utilities case of Fisher's market model was given; it runs in $O(n^4 \log n)$ time, improving on the previous best bound of $O(n^8 \log U_{\max} + n^7 \log e_{\max})$ due to Devanur et al.
Abstract: We give the first strongly polynomial time algorithm for computing an equilibrium for the linear utilities case of Fisher's market model. We consider a problem with a set B of buyers and a set G of divisible goods. Each buyer i starts with an initial integral allocation e_i of money. The integral utility for buyer i of good j is U_ij. We first develop a weakly polynomial time algorithm that runs in $O(n^4 \log U_{\max} + n^3 e_{\max})$ time, where $n = |B| + |G|$. We further modify the algorithm so that it runs in $O(n^4 \log n)$ time. These algorithms improve upon the previous best running time of $O(n^8 \log U_{\max} + n^7 \log e_{\max})$, due to Devanur et al. [5].
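
The equilibrium being computed is the optimum of the classical Eisenberg-Gale convex program, so a tiny instance can be checked with a generic solver. This is only a hedged numerical illustration with made-up data, not the paper's combinatorial algorithm:

```python
import numpy as np
from scipy.optimize import minimize

U = np.array([[2.0, 1.0],        # U[i, j]: utility of buyer i for good j
              [1.0, 3.0]])
e = np.array([1.0, 1.0])         # money endowments e_i
nb, ng = U.shape

def neg_eg(x):                   # Eisenberg-Gale: max sum_i e_i log u_i
    u = (U * x.reshape(nb, ng)).sum(axis=1)
    return -(e * np.log(np.maximum(u, 1e-12))).sum()

cons = [{"type": "eq",           # each good is fully allocated
         "fun": lambda x, j=j: x.reshape(nb, ng)[:, j].sum() - 1.0}
        for j in range(ng)]
res = minimize(neg_eg, np.full(nb * ng, 1.0 / nb),
               bounds=[(0.0, 1.0)] * (nb * ng),
               constraints=cons, method="SLSQP")
X = res.x.reshape(nb, ng)
u = (U * X).sum(axis=1)
p = (e[:, None] * U / u[:, None]).max(axis=0)  # equilibrium prices
print(X.round(3), p.round(3))    # buyer 1 gets good 1, buyer 2 gets good 2
```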

87 citations


Proceedings Article
01 Jan 2010
TL;DR: The stabilizing consensus problem as mentioned in this paper is a variant of the standard consensus problem that does not require each process to commit to a final value at some point, but only that all processes eventually arrive at a common value without necessarily being aware of that.
Abstract: Consensus problems occur in many contexts and have therefore been intensively studied in the past. In the standard consensus problem there are n processes with possibly different input values and the goal is to eventually reach a point at which all processes commit to exactly one of these values. We are studying a slight variant of the consensus problem called the stabilizing consensus problem. In this problem, we do not require that each process commits to a final value at some point, but that eventually they arrive at a common value without necessarily being aware of that. This should work irrespective of the states in which the processes are starting. Coming up with a self-stabilizing rule is easy without adversarial involvement, but we allow some T-bounded adversary to manipulate any T processes at any time. In this situation, a perfect consensus is impossible to reach, so we only require that there is a time point t and value v so that at any point after t, all but up to $O(T)$ processes agree on v, which we call an almost stable consensus. As we will demonstrate, there is a surprisingly simple rule for the standard message passing model that just needs $O(\log n \log\log n)$ time for any $\sqrt{n}$-bounded adversary and just $O(\log n)$ time without adversarial involvement, with high probability, to reach an (almost) stable consensus from any initial state. A stable consensus is reached, with high probability, in the absence of adversarial involvement.
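
The "surprisingly simple rule" can be illustrated by a median-style dynamics. The sketch below assumes each process samples two random peers per round and moves to the median of the three observed values, and it omits the adversary entirely, so it is an illustration rather than the paper's exact protocol:

```python
import random

def simulate(n=1000, rounds=100, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 9) for _ in range(n)]  # arbitrary initial states
    for t in range(rounds):
        new = []
        for i in range(n):
            a, b = rng.randrange(n), rng.randrange(n)
            new.append(sorted((x[i], x[a], x[b]))[1])  # median of three
        x = new
        if len(set(x)) == 1:                   # stable consensus reached
            return t + 1, x[0]
    return rounds, None

print(simulate())  # converges in a logarithmic number of rounds
```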

80 citations


Proceedings Article
01 Jan 2010
TL;DR: This work presents an algorithm that is able to find excuses and demonstrates that such excuses can be found in practical settings in reasonable time.
Abstract: When using a planner-based agent architecture, many things can go wrong. First and foremost, an agent might fail to execute one of the planned actions for some reason. Even more annoying, however, is a situation where the agent is incompetent, i.e., unable to come up with a plan. This might be due to the fact that there are principal reasons that prohibit a successful plan or simply because the task's description is incomplete or incorrect. In either case, an explanation for such a failure would be very helpful. We will address this problem and provide a formalization of coming up with excuses for not being able to find a plan. Based on that, we will present an algorithm that is able to find excuses and demonstrate that such excuses can be found in practical settings in reasonable time.

78 citations


Proceedings Article
01 Jan 2010
TL;DR: In this paper, the authors show that planar graph isomorphism and canonization are in log-space, matching the known log-space hardness of the problem.
Abstract: Graph Isomorphism is the prime example of a computational problem with a wide difference between the best known lower and upper bounds on its complexity. There is a significant gap between extant lower and upper bounds for planar graphs as well. We bridge the gap for this natural and important special case by presenting an upper bound that matches the known log-space hardness [JKMT03]. In fact, we show the formally stronger result that planar graph canonization is in log-space. This improves the previously known upper bound of AC1 [MR91]. Our algorithm first constructs the biconnected component tree of a connected planar graph and then refines each biconnected component into a triconnected component tree. The next step is to log-space reduce the biconnected planar graph isomorphism and canonization problems to those for 3-connected planar graphs, which are known to be in log-space by [DLN08]. This is achieved by using the above decomposition, and by making significant modifications to Lindell’s algorithm for tree canonization, along with changes in the space complexity analysis. The reduction from the connected case to the biconnected case requires further new ideas including a non-trivial case analysis and a group theoretic lemma to bound the number of automorphisms of a colored 3-connected planar graph.
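
The first decomposition step can be reproduced with standard routines; in this minimal sketch, networkx's DFS-based biconnectivity code stands in for the paper's log-space machinery, so it only illustrates the structure being computed:

```python
import networkx as nx

# Two triangles joined by an edge: 3 and 4 are cut vertices, and the
# biconnected components hang off them in a tree.
G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 6), (6, 4)])
assert nx.check_planarity(G)[0]

print(list(nx.biconnected_components(G)))  # e.g. [{1, 2, 3}, {3, 4}, {4, 5, 6}]
print(set(nx.articulation_points(G)))      # {3, 4}
```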

74 citations


Proceedings Article
01 Jan 2010
TL;DR: This paper discusses how existing technologies for wrapping and querying streams in the RDF data format should be extended toward richer forms of reasoning using Sensor Networks as a motivating example.
Abstract: Stream Data processing has become a popular topic in database research addressing the challenge of efficiently answering queries over continuous data streams. Meanwhile data streams have become more and more important as a basis for higher level decision processes that require complex reasoning over data streams and rich background knowledge. In previous work the foundation for complex reasoning over streams and background knowledge was laid by introducing technologies for wrapping and querying streams in the RDF data format and by supporting simple forms of reasoning in terms of incremental view maintenance. In this paper, we discuss how these existing technologies should be extended toward richer forms of reasoning, using Sensor Networks as a motivating example.
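
The wrapping step the abstract refers to amounts to publishing each stream element as RDF triples that continuous queries can then consume. A minimal rdflib sketch (the sensor vocabulary below is made up for illustration; the actual systems define their own):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SSN = Namespace("http://example.org/sensor#")  # hypothetical vocabulary
g = Graph()
obs = SSN["obs-42"]
g.add((obs, RDF.type, SSN.Observation))
g.add((obs, SSN.sensor, SSN["thermo-1"]))
g.add((obs, SSN.value, Literal(21.5, datatype=XSD.double)))
g.add((obs, SSN.timestamp,
       Literal("2010-01-01T12:00:00", datatype=XSD.dateTime)))

# A continuous query would be re-evaluated as such triples arrive;
# a one-off SPARQL query over the current window:
q = """SELECT ?v WHERE {
         ?o a <http://example.org/sensor#Observation> ;
            <http://example.org/sensor#value> ?v }"""
for row in g.query(q):
    print(row.v)
```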

59 citations


Journal ArticleDOI
01 Apr 2010
TL;DR: In this paper, a characterization of operators that can be realized as Gabor multipliers is given and necessary conditions for the existence of (Hilbert-Schmidt) optimal Gabor multiplier approximations are discussed.
Abstract: Starting from a general operator representation in the time-frequency domain, this paper addresses the problem of approximating linear operators by operators that are diagonal or band-diagonal with respect to Gabor frames. A characterization of operators that can be realized as Gabor multipliers is given, necessary conditions for the existence of (Hilbert-Schmidt) optimal Gabor multiplier approximations are discussed, and an efficient method for the calculation of an operator’s best approximation by a Gabor multiplier is derived. The spreading function of Gabor multipliers yields new error estimates for these approximations. Generalizations (multiple Gabor multipliers) are introduced for better approximation of overspread operators. The Riesz property of the projection operators involved in generalized Gabor multipliers is characterized, and a method for obtaining an operator’s best approximation by a multiple Gabor multiplier is suggested. Finally, it is shown that in certain situations, generalized Gabor multipliers reduce to a finite sum of regular Gabor multipliers with adapted windows.
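
Numerically, a Gabor multiplier is a pointwise mask applied to Gabor (STFT) coefficients between analysis and synthesis. A minimal sketch (scipy's STFT stands in for a general Gabor frame, and the window and hop sizes are arbitrary choices):

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 2000 * t)

f, _, X = stft(x, fs=fs, nperseg=256)       # analysis coefficients
mask = (f < 1000)[:, None].astype(float)    # the multiplier's symbol
_, y = istft(X * mask, fs=fs, nperseg=256)  # synthesis

Y = np.abs(np.fft.rfft(y[:fs]))             # 1 Hz per bin
print(Y[440] / Y.max(), Y[2000] / Y.max())  # ~1.0 kept, ~0.0 suppressed
```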

51 citations


Proceedings Article
01 Jan 2010
TL;DR: In this article, the authors proposed a method for general purpose robotic manipulation using a combination of planning and motion planning, which considers kinematic constraints in configuration space (C-space) together with constraints over object manipulations.
Abstract: Robotic manipulation is important for real, physical world applications. General purpose manipulation with a robot (e.g., delivering dishes, opening doors with a key, etc.) is demanding. It is hard because (1) objects are constrained in position and orientation, (2) many non-spatial constraints interact (or interfere) with each other, and (3) robots may have many degrees of freedom (DOF). In this paper we solve the problem of general purpose robotic manipulation using a novel combination of planning and motion planning. Our approach integrates motions of a robot with other (non-physical or external-to-robot) actions to achieve a goal while manipulating objects. It differs from previous, hierarchical approaches in that (a) it considers kinematic constraints in configuration space (C-space) together with constraints over object manipulations; (b) it automatically generates high-level (logical) actions from a C-space based motion planning algorithm; and (c) it decomposes a planning problem into small segments, thus reducing the complexity of planning.

44 citations


Proceedings Article
01 Jan 2010
TL;DR: This article shows that for local-effect actions, progression is always first-order definable and computable, and gives a simple proof of this via the concept of forgetting.
Abstract: In a seminal paper, Lin and Reiter introduced the notion of progression for basic action theories in the situation calculus. Unfortunately, progression is not first-order definable in general. Recently, Vassos, Lakemeyer, and Levesque showed that in case actions have only local effects, progression is first-order representable. However, they could show computability of the first-order representation only for a restricted class. Also, their proofs were quite involved. In this paper, we present a result stronger than theirs that for local-effect actions, progression is always first-order definable and computable. We give a very simple proof for this via the concept of forgetting. We also show first-order definability and computability results for a class of knowledge bases and actions with non-local effects. Moreover, for a certain class of local-effect actions and knowledge bases for representing disjunctive information, we show that progression is not only first-order definable but also efficiently computable.
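
The notion of forgetting is easiest to see in the propositional case that the situation-calculus results generalize: forgetting an atom p in phi yields phi[p/true] OR phi[p/false], the strongest consequence of phi that does not mention p. A small sympy sketch, purely for illustration:

```python
from sympy import symbols, Or, And, Not
from sympy.logic.boolalg import simplify_logic

p, q, r = symbols("p q r")
phi = And(Or(p, q), Or(Not(p), r))           # (p | q) & (~p | r)

# forget(phi, p) = phi[p -> True] | phi[p -> False]
forget_p = simplify_logic(Or(phi.subs(p, True), phi.subs(p, False)))
print(forget_p)                              # q | r
```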

43 citations


Journal ArticleDOI
01 Nov 2010
TL;DR: In this article, the diameter of a natural abstraction of the 1-skeleton of polyhedra was investigated, and it was shown that this abstraction has its limits by providing an almost quadratic lower bound.
Abstract: We investigate the diameter of a natural abstraction of the 1-skeleton of polyhedra. Even if this abstraction is more general than other abstractions previously studied in the literature, known upper bounds on the diameter of polyhedra continue to hold here. On the other hand, we show that this abstraction has its limits by providing an almost quadratic lower bound.

43 citations


Proceedings Article
01 Jan 2010
TL;DR: This work presents a new distance formula based on a simple tree structure that captures all the delicate features of this problem in a unifying way, and a linear-time algorithm for computing this distance.
Abstract: The genomic distance problem in the Hannenhalli-Pevzner (HP) theory is the following: Given two genomes whose chromosomes are linear, calculate the minimum number of translocations, fusions, fissions and inversions that transform one genome into the other. We will present a new distance formula based on a simple tree structure that captures all the delicate features of this problem in a unifying way, and a linear-time algorithm for computing this distance.

Proceedings Article
01 Jan 2010
TL;DR: It is shown that many such problems indeed have good approximation algorithms that preserve differential privacy, even in cases where it is impossible to preserve cryptographic definitions of privacy while computing any non-trivial approximation to even the value of an optimal solution, let alone the entire solution.
Abstract: Consider the following problem: given a metric space, some of whose points are "clients," select a set of at most $k$ facility locations to minimize the average distance from the clients to their nearest facility. This is just the well-studied $k$-median problem, for which many approximation algorithms and hardness results are known. Note that the objective function encourages opening facilities in areas where there are many clients, and given a solution, it is often possible to get a good idea of where the clients are located. This raises the following quandary: what if the locations of the clients are sensitive information that we would like to keep private? Is it even possible to design good algorithms for this problem that preserve the privacy of the clients? In this paper, we initiate a systematic study of algorithms for discrete optimization problems in the framework of differential privacy (which formalizes the idea of protecting the privacy of individual input elements). We show that many such problems indeed have good approximation algorithms that preserve differential privacy; this is even in cases where it is impossible to preserve cryptographic definitions of privacy while computing any non-trivial approximation to even the value of an optimal solution, let alone the entire solution. Apart from the $k$-median problem, we consider the problems of vertex and set cover, min-cut, facility location, and Steiner tree, and give approximation algorithms and lower bounds for these problems. We also consider the recently introduced submodular maximization problem, "Combinatorial Public Projects" (CPP), shown by Papadimitriou et al. [PSS08] to be inapproximable to subpolynomial multiplicative factors by any efficient and truthful algorithm. We give a differentially private (and hence approximately truthful) algorithm that achieves a logarithmic additive approximation. Joint work with Anupam Gupta, Katrina Ligett, Frank McSherry and Aaron Roth.
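
A basic primitive behind several such algorithms is the exponential mechanism: output a candidate with probability proportional to exp(eps * score / (2 * sensitivity)). A minimal sketch with made-up scores (think of each score as the negated total client distance of a candidate facility):

```python
import numpy as np

def exponential_mechanism(scores, eps, sensitivity, rng):
    scores = np.asarray(scores, dtype=float)
    # Shift by the max for numerical stability; weights stay proportional
    # to exp(eps * score / (2 * sensitivity)).
    w = np.exp(eps * (scores - scores.max()) / (2 * sensitivity))
    return rng.choice(len(scores), p=w / w.sum())

rng = np.random.default_rng(0)
scores = [-10.0, -12.5, -11.0, -30.0]   # hypothetical candidate qualities
picks = [exponential_mechanism(scores, 1.0, 1.0, rng) for _ in range(1000)]
print(np.bincount(picks, minlength=4) / 1000)  # good candidates dominate
```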

Proceedings Article
01 Jan 2010
TL;DR: This document summarizes the findings of the second Dagstuhl seminar on event processing, which worked toward a comprehensive document that explains event processing, relates it to other technologies, and suggests future work in terms of standards, challenges, and shorter-term research projects.
Abstract: The second Dagstuhl seminar on event processing took place in May 2010. This five-day meeting was oriented to work toward a comprehensive document that would explain event processing and how it relates to other technologies and suggest future work in terms of standards, challenges, and shorter-term research projects. The 45 participants came from academia and industry, some of them out of the event processing field. The teams continued the work after the conference and have summarized their findings in this document. The chapters were written by different teams and then edited for consistency.

Journal ArticleDOI
01 Jan 2010
TL;DR: In this article, the authors improve the known lower bound on the rejection probability of the BLR linearity test by an additive constant that depends only on $\epsilon$; the analysis is based on the weight distribution of a coset code of the Hadamard code.
Abstract: For Boolean functions that are $\epsilon$-far from the set of linear functions, we study the lower bound on the rejection probability (denoted by $\textsc{rej}(\epsilon)$) of the linearity test suggested by Blum, Luby, and Rubinfeld [J. Comput. System Sci., 47 (1993), pp. 549-595]. This problem is arguably the most fundamental and extensively studied problem in property testing of Boolean functions. The previously best bounds for $\textsc{rej}(\epsilon)$ were obtained by Bellare et al. [IEEE Trans. Inform. Theory, 42 (1996), pp. 1781-1795]. They used Fourier analysis to show that $\textsc{rej}(\epsilon)\geq\epsilon$ for every $0\leq\epsilon\leq1/2$. They also conjectured that this bound might not be tight for $\epsilon$'s which are close to $1/2$. In this paper we show that this indeed is the case. Specifically, we improve the lower bound of $\textsc{rej}(\epsilon)\geq\epsilon$ by an additive constant that depends only on $\epsilon$: $\textsc{rej}(\epsilon)\geq\epsilon+\min\{1376\epsilon^{3}(1-2\epsilon)^{12},\frac{1}{4}\epsilon(1-2\epsilon)^{4}\}$, for every $0\leq\epsilon\leq1/2$. Our analysis is based on a relationship between $\textsc{rej}(\epsilon)$ and the weight distribution of a coset code of the Hadamard code. We use both Fourier analysis and coding theory tools to estimate this weight distribution.
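
The test itself is simple to simulate: pick x and y uniformly from GF(2)^n and reject unless f(x) + f(y) = f(x + y). A minimal empirical sketch; the corrupted function below is an arbitrary construction at distance roughly epsilon from linear, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
a = rng.integers(0, 2, n)                 # hidden linear function <a, x>

def f(x, eps1000):
    # flip <a, x> on a fixed pseudo-random ~eps fraction of inputs
    return int((a @ x + (hash(x.tobytes()) % 1000 < eps1000)) % 2)

def rejection_rate(eps1000, trials=20000):
    bad = 0
    for _ in range(trials):
        x = rng.integers(0, 2, n)
        y = rng.integers(0, 2, n)
        bad += (f(x, eps1000) + f(y, eps1000)) % 2 != f((x + y) % 2, eps1000)
    return bad / trials

print(rejection_rate(100))  # eps ~ 0.1; the theorem guarantees >= eps
```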

Book ChapterDOI
Darko Kirovski1
01 Jan 2010
TL;DR: As discussed in this article, test cuts were likely the first counterfeit detection procedure, with the objective of testing the purity of the inner structure of a coin; the appearance of counterfeit coins with already engraved fake test cuts initiated the cat-and-mouse game between counterfeiters and original manufacturers that has lasted to this day.
Abstract: Counterfeiting is as old as the human desire to create objects of value. For example, historians have identified counterfeit coins just as old as the corresponding originals. Archeological findings have identified examples of counterfeit coins from 500 BC netting a 600+% instant profit to the counterfeiter [2]. Test cuts were likely the first counterfeit detection procedure, with the objective of testing the purity of the inner structure of the coin. The appearance of counterfeit coins with already engraved fake test cuts initiated the cat-and-mouse game between counterfeiters and original manufacturers that has lasted to this day [2].

Proceedings Article
01 Jan 2010
TL;DR: In this paper, the author reports on explorations toward a geometric language for quantum protocols and algorithms, whose abstract descriptions can also be used to explore simple nonstandard models, for instance implementing teleportation and the hidden subgroup algorithms using just abelian groups and relations.
Abstract: Modern cryptography is based on various assumptions about computational hardness and feasibility. But while computability is a very robust notion (cf. Church's Thesis), feasibility seems quite sensitive to the available computational resources. Prime examples are, of course, quantum channels, which provide feasible solutions of some otherwise hard problems; but ants' pheromones, used as a computational resource, also provide feasible solutions of other hard problems. So at least in principle, modern cryptography is concerned with the power and availability of computational resources. The standard models, used in cryptography and in quantum computation, leave a lot to be desired in this respect. They do, of course, support many interesting solutions of deep problems; but besides the fundamental computational structures, they also capture some low level features of particular implementations. In technical terms of program semantics, our standard models are not "fully abstract". (Related objections can be traced back to von Neumann's "I don't believe in Hilbert spaces" letters from 1937.) I shall report on some explorations towards extending the modeling tools of program semantics to develop a geometric language for quantum protocols and algorithms. Besides hiding the irrelevant implementation details, its abstract descriptions can also be used to explore simple nonstandard models. If time permits, I shall describe a method to implement teleportation, as well as the hidden subgroup algorithms, using just abelian groups and relations.

Proceedings Article
01 Jan 2010
TL;DR: In this paper, the authors propose SAM, an architecture which leverages temporal knowledge represented as relations in Allen's interval algebra together with constraint-based temporal planning techniques for human activity recognition and for controlling pervasive actuation devices.
Abstract: In this paper we address the problem of realizing a service-providing reasoning infrastructure for proactive human assistance in intelligent environments. We propose SAM, an architecture which leverages temporal knowledge represented as relations in Allen’s interval algebra and constraint-based temporal planning techniques. SAM seamlessly combines two key capabilities for contextualized service provision, namely human activity recognition and planning for controlling pervasive actuation devices.
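
For reference, Allen's interval algebra is small enough to sketch in full: thirteen basic relations between two intervals, the vocabulary in which SAM's temporal knowledge is expressed. The enumeration below is the standard one; the function is only an illustration:

```python
def allen(a, b):
    """Basic Allen relation between intervals a = (s1, e1), b = (s2, e2)."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2: return "before"
    if e2 < s1: return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equal"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"

print(allen((8, 9), (9, 10)))    # meets: e.g. "cooking meets eating"
print(allen((8, 10), (9, 12)))   # overlaps
```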

Proceedings Article
01 Jan 2010
TL;DR: The purpose of this note is to illustrate the use of step-indexing combined with biorthogonality to construct syntactical logical relations in the untyped call-by-value $\lambda$-calculus with recursively defined functions.
Abstract: The purpose of this note is to illustrate the use of step-indexing combined with biorthogonality to construct syntactical logical relations. It walks through the details of a syntactically simple, yet non-trivial example: a proof of the "CIU Theorem" for contextual equivalence in the untyped call-by-value $\lambda$-calculus with recursively defined functions.

Proceedings Article
01 Jan 2010
TL;DR: A method for dynamic partitioning of the state space is introduced and it is shown that it leads to improved search performance in solving STRIPS planning problems.
Abstract: State-of-the-art external-memory graph search algorithms rely on a hash function, or equivalently, a state-space projection function, that partitions the stored nodes of the state-space search graph into groups of nodes that are stored as separate files on disk. The scalability and efficiency of the search depends on properties of the partition: whether the number of unique nodes in a file always fits in RAM, the number of files into which the nodes of the state-space graph are partitioned, and how well the partitioning of the state space captures local structure in the graph. All previous work relies on a static partitioning of the state space. In this paper, we introduce a method for dynamic partitioning of the state-space search graph and show that it leads to substantial improvement of search performance.
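
The static baseline the paper improves on looks roughly as follows: a projection function hashes each state to a bucket, and each bucket is a file on disk, so only one bucket's nodes need to fit in RAM at a time. The file layout and projection below are illustrative assumptions, not the paper's code:

```python
import os
import pickle
from collections import defaultdict

def project(state, num_buckets=16):
    # e.g. for STRIPS states: hash a canonical (sorted) tuple of atoms
    return hash(tuple(sorted(state))) % num_buckets

def flush_frontier(frontier, directory="open_list"):
    os.makedirs(directory, exist_ok=True)
    buckets = defaultdict(list)
    for state in frontier:
        buckets[project(state)].append(state)
    for b, states in buckets.items():        # one append-only file per bucket
        with open(os.path.join(directory, f"bucket_{b:02d}.pkl"), "ab") as fh:
            pickle.dump(states, fh)

flush_frontier([frozenset({"at-A", "holding"}), frozenset({"at-B"})])
```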

Proceedings Article
Christian Müller1
01 Jan 2010
TL;DR: This extended abstract summarizes the common concepts in adaptive MCMC and covariance matrix adaptation schemes, presents how both types of methods can be unified within the Gaussian Adaptation framework, and proposes a unification of both fields as a “grand challenge” for future research.
Abstract: In the field of scientific modeling, one is often confronted with the task of drawing samples from a probability distribution that is only known up to a normalizing constant and for which no direct analytical method for sample generation is available. Over the past decade, adaptive Markov Chain Monte Carlo (MCMC) methods have gained considerable attention in the statistics community as a way to tackle this black-box (or indirect) sampling scenario. Common application domains are Bayesian statistics and statistical physics. Adaptive MCMC methods try to learn an optimal proposal distribution from previously accepted samples in order to efficiently explore the target distribution. Variable metric approaches in black-box optimization, such as the Evolution Strategy with covariance matrix adaptation (CMA-ES) and Gaussian Adaptation (GaA), use almost identical ideas to locate putative global optima. This extended abstract summarizes the common concepts in adaptive MCMC and covariance matrix adaptation schemes. We also show how both types of methods can be unified within the Gaussian Adaptation framework and propose a unification of both fields as a “grand challenge” for future research.
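
The shared idea fits in a few lines: a Metropolis chain whose Gaussian proposal covariance is periodically re-estimated from the samples gathered so far, in the spirit of Haario-style adaptive Metropolis. The target, schedule, and constants below are illustrative choices:

```python
import numpy as np

def log_target(x):  # an unnormalized, banana-shaped toy density
    return -0.5 * (x[0] ** 2 + 5.0 * (x[1] - x[0] ** 2) ** 2)

rng = np.random.default_rng(0)
d, n = 2, 5000
x = np.zeros(d)
samples = [x]
cov = np.eye(d)
for t in range(n):
    if t > 500 and t % 100 == 0:          # periodically adapt the metric
        cov = np.cov(np.array(samples).T) + 1e-6 * np.eye(d)
    prop = rng.multivariate_normal(x, (2.38 ** 2 / d) * cov)
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                          # Metropolis accept
    samples.append(x)

print(np.array(samples[1000:]).mean(axis=0))
```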

Proceedings Article
01 Jan 2010
TL;DR: In this paper, it is shown that if satisfiability for n-variable d-CNF formulas has a communication protocol of cost $O(n^{d-\epsilon})$, then coNP is in NP/poly and the polynomial-time hierarchy collapses to its third level.
Abstract: Consider the following two-player communication process to decide a language $L$: The first player holds the entire input $x$ but is polynomially bounded; the second player is computationally unbounded but does not know any part of $x$; their goal is to cooperatively decide whether $x$ belongs to $L$ at small cost, where the cost measure is the number of bits of communication from the first player to the second player. For any integer $d \geq 3$ and positive real $\epsilon$ we show that if satisfiability for $n$-variable $d$-CNF formulas has a protocol of cost $O(n^{d-\epsilon})$ then coNP is in NP/poly, which implies that the polynomial-time hierarchy collapses to its third level. The result even holds when the first player is conondeterministic, and is tight as there exists a trivial protocol for $\epsilon = 0$. Under the hypothesis that coNP is not in NP/poly, our result implies tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. By reduction, similar results hold for other NP-complete problems. For the vertex cover problem on $n$-vertex $d$-uniform hypergraphs, the above statement holds for any integer $d \geq 2$. The case $d=2$ implies that no NP-hard vertex deletion problem based on a graph property that is inherited by subgraphs can have kernels consisting of $O(k^{2-\epsilon})$ edges unless coNP is in NP/poly, where $k$ denotes the size of the deletion set. Kernels consisting of $O(k^2)$ edges are known for several problems in the class, including vertex cover, feedback vertex set, and bounded-degree deletion.

Proceedings Article
01 Jan 2010
TL;DR: Modifications to instantiation based SMT-solvers and to McMillan's interpolation algorithm in order to compute quantified interpolants are presented.
Abstract: Interpolation has proven highly effective in program analysis and verification, e. g., to derive invariants or new abstractions. While interpolation for quantifier free formulae is understood quite well, it turns out to be challenging in the presence of quantifiers. We present in this talk modifications to instantiation based SMT-solvers and to McMillan's interpolation algorithm in order to compute quantified interpolants.

Book ChapterDOI
01 Jan 2010
TL;DR: Some results on the number of positive solutions are proved and a careful convergence analysis of Newton’s iteration is carried out in the cases of interest where some singularity conditions are encountered.
Abstract: We survey theoretical properties and algorithms concerning the problem of solving a nonsymmetric algebraic Riccati equation, and we report on some known methods and new algorithmic advances. In particular, some results on the number of positive solutions are proved and a careful convergence analysis of Newton’s iteration is carried out in the cases of interest where some singularity conditions are encountered. From this analysis we determine initial approximations which still guarantee the quadratic convergence.
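     
Each Newton step for such an equation is itself a Sylvester equation. A hedged numerical sketch under one common sign convention, B - AX - XD + XCX = 0, with a made-up M-matrix instance rather than data from the chapter: the step solves (A - XC) X' + X' (D - CX) = B - XCX.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Newton's iteration for B - A X - X D + X C X = 0: each step solves
# the Sylvester equation (A - X C) X' + X' (D - C X) = B - X C X.
A = np.array([[3.0, -1.0], [-1.0, 3.0]])
D = np.array([[3.0, -1.0], [-1.0, 3.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
C = np.array([[0.5, 0.2], [0.2, 0.5]])

X = np.zeros((2, 2))            # starting from 0 targets the minimal solution
for _ in range(20):
    X_new = solve_sylvester(A - X @ C, D - C @ X, B - X @ C @ X)
    if np.linalg.norm(X_new - X) < 1e-12:
        break
    X = X_new

print(np.linalg.norm(B - A @ X - X @ D + X @ C @ X))  # ~1e-15: converged
```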

Proceedings Article
01 Jan 2010
TL;DR: This work disproves the common misconception that step-indexing is inapplicable to liveness problems and develops the first Hoare logic of total correctness for a language with function pointers and semantic assertions.
Abstract: Step-indexed models provide approximations to a class of domain equations and can prove type safety, partial correctness, and program equivalence; however, a common misconception is that they are inapplicable to liveness problems. We disprove this by applying step-indexing to develop the first Hoare logic of total correctness for a language with function pointers and semantic assertions. In fact, from a liveness perspective, our logic is stronger: we verify explicit time resource bounds. We apply our logic to examples containing nontrivial "higher-order" uses of function pointers and we prove soundness with respect to a standard operational semantics. Our core technique is very compact and may be applicable to other liveness problems. Our results are machine checked in Coq.

Proceedings Article
01 Jan 2010
TL;DR: The current trend of deploying MBT in the industry, particularly in the TCoE - Test Center of Excellence - managed by the big System Integrators, is addressed, as a vector for software testing "industrialization".
Abstract: The idea of model-based testing is to use an explicit abstract model of a SUT and its environment to automatically derive tests for the SUT: the behavior of the model of the SUT is interpreted as the intended behavior of the SUT. The technology of automated model-based test case generation has matured to the point where large-scale deployments of this technology are becoming commonplace. The prerequisites for success, such as qualification of the test team, integrated tool chain availability and methods, are now identified, and a wide range of commercial and open-source tools are available. Although MBT will not solve all testing problems, it is an important and useful technique, which brings significant progress over the state of the practice for functional software testing effectiveness, and can increase productivity and improve functional coverage. In this talk, we'll address the current trend of deploying MBT in the industry, particularly in the TCoE (Test Center of Excellence) managed by the big System Integrators, as a vector for software testing "industrialization".

Proceedings Article
01 Jan 2010
TL;DR: In this article, a combinatorial construction of a choice rule that is monotonic, pairwise non-manipulable, and onto the set of alternatives, for any number of alternatives besides three, is presented.
Abstract: A tournament is a binary dominance relation on a set of alternatives. Tournaments arise in many contexts that are relevant to AI, most notably in voting (as a method to aggregate the preferences of agents). There are many works that deal with choice rules that select a desirable alternative from a tournament, but very few of them deal directly with incentive issues, despite the fact that game-theoretic considerations are crucial with respect to systems populated by selfish agents. We deal with the problem of the manipulation of choice rules by considering two types of manipulation. We say that a choice rule is monotonic if an alternative cannot get itself selected by losing on purpose, and pairwise nonmanipulable if a pair of alternatives cannot make one of them the winner by reversing the outcome of the match between them. Our main result is a combinatorial construction of a choice rule that is monotonic, pairwise nonmanipulable, and onto the set of alternatives, for any number of alternatives besides three.
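
Both properties are easy to check by brute force on small tournaments, which makes the definitions concrete. The sketch below tests a simple Copeland-with-tie-breaking rule (an illustrative stand-in, not the paper's construction) for monotonicity over all tournaments on four alternatives; pairwise non-manipulability can be checked the same way:

```python
from itertools import combinations, product

m = 4
pairs = list(combinations(range(m), 2))

def beats(T, a, b):      # T[(a, b)] is True iff a beats b, for a < b
    return T[(a, b)] if a < b else not T[(b, a)]

def copeland(T):         # most match wins; lowest index breaks ties
    scores = [sum(beats(T, a, b) for b in range(m) if b != a)
              for a in range(m)]
    return scores.index(max(scores))

def is_monotonic(rule):
    for bits in product([False, True], repeat=len(pairs)):
        T = dict(zip(pairs, bits))
        for p in pairs:
            T2 = dict(T)
            T2[p] = not T[p]
            x = p[0] if T[p] else p[1]  # x won match p in T, loses it in T2
            if rule(T) != x and rule(T2) == x:
                return False            # x got selected by losing on purpose
    return True

print(is_monotonic(copeland))
```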

Proceedings Article
01 Jan 2010
TL;DR: In this paper, the authors propose several alternatives for the creation of flexible web services, which can be invoked from different types of devices, and compare the different proposed approaches. But they do not consider the need for them to be adaptable when being invoked from a mobile device.
Abstract: Mobile devices have become an essential element in our daily lives, even for connecting to the Internet. Web Services have become extremely important when offering services through the Internet. However, current Web Services are very inflexible as regards their invocation from different types of device, especially if we consider the need for them to be adaptable when being invoked from a mobile device. In this paper, we will propose several alternatives for the creation of flexible web services which can be invoked from different types of device, and compare the different proposed approaches. Aspect-Oriented Programming and Model-Driven Development have been used in all proposals to reduce the impact of service adaptation, not only for the service developer, but also to maintain the correct code structure. This work has been developed thanks to the support of MEC (contract TIN2008-02985).

Proceedings Article
01 Jan 2010
TL;DR: In this paper, the authors investigated the broadcast capacity and latency of a mobile network subject to the condition that the stationary node spatial distribution generated by the mobility model is uniform, and presented a broadcast scheme, called RippleCast, that simultaneously achieves asymptotically optimal broadcast capacity, subject to a weak upper bound on the maximum node velocity.
Abstract: In this talk, we investigate the fundamental properties of broadcasting in mobile wireless networks. In particular, we characterize broadcast capacity and latency of a mobile network, subject to the condition that the stationary node spatial distribution generated by the mobility model is uniform. We first study the intrinsic properties of broadcasting, and present a broadcasting scheme, called RippleCast, that simultaneously achieves asymptotically optimal broadcast capacity and latency, subject to a weak upper bound on the maximum node velocity. This study intentionally ignores the burden related to the selection of broadcast relay nodes within the mobile network, and shows that optimal broadcasting in mobile networks is, in principle, possible. We then investigate the broadcasting problem when the relay selection burden is taken into account, and present a combined distributed leader election and broadcasting scheme achieving a broadcast capacity and latency which is within a $\Theta((\log n)^{1+\frac{2}{\alpha}})$ factor from optimal, where $n$ is the number of mobile nodes and $\alpha>2$ is the path loss exponent. However, this result holds only under the assumption that the upper bound on node velocity converges to zero (although with a very slow, poly-logarithmic rate) as $n$ grows to infinity. To the best of our knowledge, ours is the first paper investigating the effects of node mobility on the fundamental properties of broadcasting, and showing that, while optimal broadcasting in a mobile network is in principle possible, the coordination efforts related to the selection of broadcast relay nodes lead to sub-optimal broadcasting performance.

Proceedings Article
01 Jan 2010
TL;DR: The goal of this work is to develop an SMT-solver for real algebra that is both complete and efficient.
Abstract: There are several methods for the synthesis and analysis of hybrid systems that require efficient algorithms and tools for satisfiability checking. For analysis, e.g., bounded model checking describes counterexamples of a fixed length by logical formulas, whose satisfiability corresponds to the existence of such a counterexample. As an example for parameter synthesis, we can state the correctness of a parameterized system by a logical formula; the solution set of the formula gives us possible safe instances of the parameters. For discrete systems, which can be described by propositional logic formulas, SAT-solvers can be used for the satisfiability checks. For hybrid systems, having mixed discrete-continuous behavior, SMT-solvers are needed. SMT-solving extends SAT with theories, and has its main focus on linear arithmetic, which is sufficient to handle, e.g., linear hybrid systems. However, there are only few solvers for more expressive but still decidable logics like the first-order theory of the reals with addition and multiplication -- real algebra. Since the synthesis and analysis of non-linear hybrid systems requires such a powerful logic, we need efficient SMT-solvers for real algebra. Our goal is to develop such an SMT-solver for the real algebra, which is both complete and efficient.
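
The kind of query such a solver has to decide can be sketched with Z3's nonlinear real arithmetic as a stand-in for the planned solver (the formula is a made-up example): is there a point on the unit circle whose coordinates have product greater than 0.4?

```python
from z3 import Real, Solver, sat

x, y = Real("x"), Real("y")
s = Solver()
s.add(x * x + y * y == 1,     # nonlinear (real algebra) constraints
      x * y > 0.4)
if s.check() == sat:
    print(s.model())          # e.g. x = y near sqrt(1/2)
```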

Proceedings Article
01 Jan 2010
TL;DR: JADL++ combines the ease of use of scripting-languages with a state-of-the-art service oriented approach which allows the seamless integration of web-services.
Abstract: This paper introduces a programming language for service-oriented agents. JADL++ combines the ease of use of scripting-languages with a state-of-the-art service oriented approach which allows the seamless integration of web-services. Furthermore, the language includes OWL-based ontologies for semantic descriptions of data and services, thus allowing agents to make intelligent decisions about service calls.