
Showing papers presented at "Dagstuhl Seminar Proceedings in 2006"


Proceedings Article
01 Jan 2006
TL;DR: BLOG, as discussed by the authors, is a formal language for defining probability models with unknown objects and identity uncertainty; a BLOG model describes a generative process in which some steps add objects to the world and others determine attributes and relations on these objects.
Abstract: We introduce BLOG, a formal language for defining probability models with unknown objects and identity uncertainty. A BLOG model describes a generative process in which some steps add objects to the world, and others determine attributes and relations on these objects. Subject to certain acyclicity constraints, a BLOG model specifies a unique probability distribution over first-order model structures that can contain varying and unbounded numbers of objects. Furthermore, inference algorithms exist for a large class of BLOG models.
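
To make the generative reading of the abstract concrete, here is a minimal Python sketch of such a process, using the aircraft-and-radar-blips domain often used to illustrate BLOG. The distributions and constants are illustrative choices, not the paper's model, and BLOG itself is a declarative language rather than a hand-written sampler.

```python
import random

def sample_world(rng):
    """A BLOG-style generative process: some steps add objects to the
    world, others determine their attributes and relations."""
    # Number statement: an unknown number of aircraft enters the world.
    n_aircraft = rng.randint(0, 5)
    aircraft = [{"id": a, "position": rng.gauss(0.0, 10.0)}
                for a in range(n_aircraft)]
    # Dependency statement: each aircraft may generate a radar blip;
    # which aircraft caused which blip is uncertain to the observer.
    blips = [{"reading": a["position"] + rng.gauss(0.0, 1.0)}
             for a in aircraft if rng.random() < 0.9]
    return aircraft, blips

aircraft, blips = sample_world(random.Random(0))
print(f"{len(aircraft)} aircraft produced {len(blips)} blips")
```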

427 citations


Proceedings Article
01 Jan 2006
TL;DR: DecSerFlow is proposed as a Declarative Service Flow Language that can be used to specify, enact, and monitor service flows; the language is extendible (i.e., constructs can be added without changing the engine or semantical basis) and can be used to enforce or to check the conformance of service flows.
Abstract: The need for process support in the context of web services has triggered the development of many languages, systems, and standards. Industry has been developing software solutions and proposing standards such as BPEL, while researchers have been advocating the use of formal methods such as Petri nets and pi-calculus. The languages developed for service flows, i.e., process specification languages for web services, have adopted many concepts from classical workflow management systems. As a result, these languages are rather procedural and this does not fit well with the autonomous nature of services. Therefore, we propose DecSerFlow as a Declarative Service Flow Language. DecSerFlow can be used to specify, enact, and monitor service flows. The language is extendible (i.e., constructs can be added without changing the engine or semantical basis) and can be used to enforce or to check the conformance of service flows. Although the language has an appealing graphical representation, it is grounded in temporal logic.
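
Because DecSerFlow is grounded in temporal logic rather than procedural flow, a service flow is specified as a set of declarative constraints that execution traces must satisfy. Below is a minimal sketch of checking one such constraint, a "response"-style rule (in LTL over finite traces: G(a -> F b)), under the assumption that a trace is simply a list of activity names; the constraint name comes from the DecSerFlow/Declare family, the checker itself is illustrative.

```python
def response_holds(trace, a, b):
    """Declarative 'response' constraint: every occurrence of activity
    `a` must eventually be followed by an occurrence of `b`."""
    pending = False
    for event in trace:
        if event == a:
            pending = True        # a request is now awaiting its response
        elif event == b:
            pending = False       # the latest pending request is answered
    return not pending

print(response_holds(["order", "pay", "ship"], "pay", "ship"))  # True
print(response_holds(["order", "pay"], "pay", "ship"))          # False
```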

346 citations


Proceedings Article
01 Jan 2006
TL;DR: This paper showed that the Euclidean Traveling Salesman Problem lies in the counting hierarchy; the previous best upper bound for this problem in terms of classical complexity classes was PSPACE.
Abstract: We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis. We show that both hinge on the question of understanding the complexity of the following problem, which we call PosSlp: Given a division-free straight-line program producing an integer N, decide whether N>0. We show that PosSlp lies in the counting hierarchy, and combining our results with work of Tiwari, we show that the Euclidean Traveling Salesman Problem lies in the counting hierarchy -- the previous best upper bound for this important problem (in terms of classical complexity classes) being PSPACE.
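
For orientation, a division-free straight-line program can be modelled as a list of additions, subtractions, and multiplications over earlier results; PosSlp then asks whether the final integer is positive. The sketch below (my own encoding, not the paper's notation) makes the problem statement concrete. Note that naive evaluation is no algorithmic answer: repeated squaring lets the bit-length of N grow exponentially in the program length, which is exactly what makes the problem's complexity delicate.

```python
def eval_slp(program):
    """Evaluate a division-free straight-line program.
    Each instruction (op, i, j) combines two earlier register values;
    the register file starts with the constant 1 at index 0."""
    regs = [1]
    for op, i, j in program:
        if op == "+":
            regs.append(regs[i] + regs[j])
        elif op == "-":
            regs.append(regs[i] - regs[j])
        elif op == "*":
            regs.append(regs[i] * regs[j])
    return regs[-1]

# N = (1+1)*(1+1) - 1 = 3; PosSlp asks: is N > 0?
prog = [("+", 0, 0), ("*", 1, 1), ("-", 2, 0)]
print(eval_slp(prog) > 0)   # True
```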

136 citations


Proceedings Article
01 Jan 2006
TL;DR: In this article, model transformations are specified by graph transformation: models are described by graphs, graphs can be manipulated in a rule-based manner, and a model transformation described this precisely can be analysed later on.
Abstract: Nowadays the usage of model transformations in software engineering has become widespread. Considering current trends in software development such as model driven development (MDD), there is an emerging need to develop model manipulations such as model evolution and optimisation, semantics definition, etc. If a model transformation is described in a precise way, it can be analysed later on. Models, especially visual models, can be described best by graphs, due to their multi-dimensional extension. Graphs can be manipulated by graph transformation in a rule-based manner. Thus, we specify model transformation by graph transformation. This approach offers visual and formal techniques in such a way that model transformations can be subjected to analysis. Various results on graph transformation can be used to prove important properties of model transformations such as functional behaviour, a basic property for computations. Moreover, certain kinds of syntactical and semantical consistency properties can be shown on this formal basis.
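
As a toy illustration of rule-based graph manipulation, the sketch below applies one rewrite rule to a model represented as a set of labelled edges. Real graph transformation (e.g., the double-pushout approach underlying this line of work) matches whole subgraphs and handles node creation and deletion; this fragment only relabels matching edges and is purely illustrative, including the class-model-to-relational-model example.

```python
def apply_rule(graph, lhs, rhs):
    """Apply a single rewrite rule: find one edge whose label matches
    the left-hand side, remove it, and add the right-hand-side edge
    between the same nodes. Returns True if the rule was applied."""
    for (src, label, dst) in list(graph):
        if label == lhs:
            graph.remove((src, label, dst))
            graph.add((src, rhs, dst))
            return True
    return False          # no match: rule not applicable

# A toy model transformation: turn 'association' edges of a class
# model into 'foreign_key' edges of a relational model.
model = {("Order", "association", "Customer"),
         ("Order", "attribute", "date")}
while apply_rule(model, "association", "foreign_key"):
    pass
print(model)
```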

119 citations


Proceedings Article
01 Jan 2006
TL;DR: It is shown that quantum network coding is possible if approximation is allowed, using a simple network model called Butterfly, and several impossibility results, including a general upper bound on the fidelity, are given.
Abstract: Since quantum information is continuous, its handling is sometimes surprisingly harder than the classical counterpart. A typical example is cloning; making a copy of digital information is straightforward but it is not possible exactly for quantum information. The question in this paper is whether or not quantum network coding is possible. Its classical counterpart is another good example to show that digital information flow can be done much more efficiently than conventional (say, liquid) flow. Our answer to the question is similar to the case of cloning, namely, it is shown that quantum network coding is possible if approximation is allowed, by using a simple network model called Butterfly. In this network, there are two flow paths, $s_1$ to $t_1$ and $s_2$ to $t_2$, which share a single bottleneck channel of capacity one. In the classical case, we can send two bits simultaneously, one for each path, in spite of the bottleneck. Our results for quantum network coding include: (i) We can send any quantum state $|\psi_1\rangle$ from $s_1$ to $t_1$ and $|\psi_2\rangle$ from $s_2$ to $t_2$ simultaneously with a fidelity strictly greater than $1/2$. (ii) If one of $|\psi_1\rangle$ and $|\psi_2\rangle$ is classical, then the fidelity can be improved to $2/3$. (iii) Similar improvement is also possible if $|\psi_1\rangle$ and $|\psi_2\rangle$ are restricted to only a finite number of (previously known) states. (iv) Several impossibility results including the general upper bound of the fidelity are also given.
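
The classical phenomenon the paper contrasts with is easy to state in code: on the butterfly network, XOR-coding the two bits on the shared bottleneck edge lets both sinks recover their intended bits even though the bottleneck has capacity one. A minimal sketch, with the network topology left implicit in the comments:

```python
def butterfly_classical(b1, b2):
    """Classical network coding on the butterfly: the bottleneck edge
    carries b1 XOR b2; each sink combines it with the bit arriving on
    its direct side edge from the opposite source."""
    bottleneck = b1 ^ b2      # the single coded bit crossing the bottleneck
    at_t1 = bottleneck ^ b2   # t1 also hears b2 from s2 directly -> recovers b1
    at_t2 = bottleneck ^ b1   # t2 also hears b1 from s1 directly -> recovers b2
    return at_t1, at_t2

for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly_classical(b1, b2) == (b1, b2)
print("two bits delivered across a capacity-one bottleneck")
```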

116 citations


Proceedings Article
01 Jan 2006
TL;DR: This article proposed a latent group model (LGM) for relational data, which discovers and exploits the hidden structures responsible for the observed autocorrelation among class labels, and showed that LGM outperforms models that ignore latent group structure when there is little known information with which to seed inference.
Abstract: The presence of autocorrelation provides strong motivation for using relational techniques for learning and inference. Autocorrelation is a statistical dependency between the values of the same variable on related entities and is a nearly ubiquitous characteristic of relational data sets. Recent research has explored the use of collective inference techniques to exploit this phenomenon. These techniques achieve significant performance gains by modeling observed correlations among class labels of related instances, but the models fail to capture a frequent cause of autocorrelation---the presence of underlying groups that influence the attributes on a set of entities. We propose a latent group model (LGM) for relational data, which discovers and exploits the hidden structures responsible for the observed autocorrelation among class labels. Modeling the latent group structure improves model performance, increases inference efficiency, and enhances our understanding of the datasets. We evaluate performance on three relational classification tasks and show that LGM outperforms models that ignore latent group structure when there is little known information with which to seed inference.

91 citations


Proceedings Article
01 Jan 2006
TL;DR: This paper presented a new theory of relative semantics between objects, based on information distance and Kolmogorov complexity, which is then applied to construct a method to automatically extract the meaning of words and phrases from the world-wide-web using Google page counts.
Abstract: We present a new theory of relative semantics between objects, based on information distance and Kolmogorov complexity. This theory is then applied to construct a method to automatically extract the meaning of words and phrases from the world-wide-web using Google page counts. The approach is novel in its unrestricted problem domain, simplicity of implementation, and manifestly ontological underpinnings. The world-wide-web is the largest database on earth, and the latent semantic context information entered by millions of independent users averages out to provide automatic meaning of useful quality. We give examples to distinguish between colors and numbers, cluster names of paintings by 17th century Dutch masters and names of books by English novelists, the ability to understand emergencies, and primes, and we demonstrate the ability to do a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87% with the expert crafted WordNet categories.
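
The distance underlying this method, the Normalized Google Distance, can be computed directly from page counts. A small sketch; the counts f(x), f(y), f(x,y) and the index size N below are made-up illustrative numbers, not measured data.

```python
from math import log

def ngd(fx, fy, fxy, n_pages):
    """Normalized Google Distance from page counts: fx, fy are hit
    counts for each term alone, fxy the count for both terms together,
    n_pages the (estimated) total number of indexed pages."""
    lx, ly, lxy = log(fx), log(fy), log(fxy)
    return (max(lx, ly) - lxy) / (log(n_pages) - min(lx, ly))

# Terms that co-occur often yield a small distance; rare co-occurrence
# yields a large one (all counts are illustrative).
print(ngd(fx=8.0e6, fy=6.0e6, fxy=4.0e6, n_pages=8.0e9))  # small
print(ngd(fx=8.0e6, fy=6.0e6, fxy=2.0e3, n_pages=8.0e9))  # large
```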

87 citations


Proceedings Article
01 Jan 2006
TL;DR: In this article, it was shown that flatness is a necessary and sufficient condition for termination of accelerated symbolic model checking, a generic semi-algorithmic technique implemented in successful tools like FAST, LASH or TReX.
Abstract: This paper argues that flatness appears as a central notion in the verification of counter automata. A counter automaton is called flat when its control graph can be ``replaced'', equivalently w.r.t. reachability, by another one with no nested loops. From a practical viewpoint, we show that flatness is a necessary and sufficient condition for termination of accelerated symbolic model checking, a generic semi-algorithmic technique implemented in successful tools like FAST, LASH or TReX. From a theoretical viewpoint, we prove that many known semilinear subclasses of counter automata are flat: reversal bounded counter machines, lossy vector addition systems with states, reversible Petri nets, persistent and conflict-free Petri nets, etc. Hence, for these subclasses, the semilinear reachability set can be computed using a uniform accelerated symbolic procedure (whereas previous algorithms were specifically designed for each subclass).

78 citations


Proceedings Article
01 Jan 2006
TL;DR: This paper gives an overview of shape dissimilarity measure properties, such as metric and robustness properties, and of retrieval performance measures; fifteen shape similarity measures are briefly described and compared.
Abstract: This paper gives an overview of shape dissimilarity measure properties, such as metric and robustness properties, and of retrieval performance measures. Fifteen shape similarity measures are briefly described and compared. Since an objective comparison of their qualities seems to be impossible, experimental comparison is needed. The Moving Picture Experts Group (MPEG), a working group of ISO/IEC, has defined the MPEG-7 standard for description and search of audio and visual content. A region-based and a contour-based shape similarity method are part of the standard. The data set created by the MPEG-7 committee for evaluation of shape similarity measures offers an excellent possibility for objective experimental comparison of the existing approaches, evaluated based on the retrieval rate. Their retrieval results on the MPEG-7 Core Experiment CE-Shape-1 test set, as reported in the literature and obtained by a reimplementation, are compared and discussed. To compare the performance of similarity measures, we built the framework SIDESTEP -- Shape-based Image Delivery Statistics Evaluation Project, http://give-lab.cs.uu.nl/sidestep/.

76 citations


Proceedings Article
01 Jan 2006
TL;DR: It is shown that a variant of the Descartes algorithm can cope with bit-stream coefficients, which can be approximated to any desired accuracy but are not known exactly.
Abstract: The Descartes method is an algorithm for isolating the real roots of square-free polynomials with real coefficients. We assume that coefficients are given as (potentially infinite) bit-streams. In other words, coefficients can be approximated to any desired accuracy, but are not known exactly. We show that a variant of the Descartes algorithm can cope with bit-stream coefficients. To isolate the real roots of a square-free real polynomial $q(x) = q_n x^n + \ldots + q_0$ with root separation $\rho$, coefficients $|q_n| \ge 1$ and $|q_i| \le 2^\tau$, it needs coefficient approximations to $O(n(\log(1/\rho) + \tau))$ bits after the binary point and has an expected cost of $O(n^4 (\log(1/\rho) + \tau)^2)$ bit operations.
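
The heart of the Descartes method is Descartes' rule of signs: the number of sign variations in a polynomial's coefficient sequence bounds the number of its positive real roots (and has the same parity). A minimal sketch of the variation count follows; the full algorithm repeatedly maps candidate intervals onto the positive axis, counts variations of the transformed polynomial, and bisects when the count exceeds one. That machinery, and the bit-stream approximation layer this paper adds, are omitted here.

```python
def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zeros.
    By Descartes' rule of signs, this bounds the number of positive
    real roots and matches it modulo 2."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# q(x) = x^3 - 3x + 1, coefficients from constant term upward:
print(sign_variations([1, -3, 0, 1]))   # 2 -> at most two positive roots
```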

74 citations


Proceedings Article
01 Jan 2006
TL;DR: Flyspeck, as discussed by the authors, is a long-term project to give a formal verification of the Kepler Conjecture, which asserts that the density of a packing of equal-radius balls in three dimensions cannot exceed $\pi/\sqrt{18}$.
Abstract: This article gives an introduction to a long-term project called Flyspeck, whose purpose is to give a formal verification of the Kepler Conjecture. The Kepler Conjecture asserts that the density of a packing of equal radius balls in three dimensions cannot exceed $\pi/\sqrt{18}$. The original proof of the Kepler Conjecture, from 1998, relies extensively on computer calculations. Because the proof relies on relatively few external results, it is a natural choice for a formalization effort.

Proceedings Article
01 Jan 2006
TL;DR: This talk introduces asynchronous dynamic pushdown networks (ADPN), a new model for multithreaded programs in which pushdown systems communicate via shared memory, and provides efficient algorithms for both forward and backward reachability analysis.
Abstract: We introduce asynchronous dynamic pushdown networks (ADPN), a new model for multithreaded programs in which pushdown systems communicate via shared memory. ADPN generalizes both CPS (concurrent pushdown systems) and DPN (dynamic pushdown networks). We show that ADPN exhibit several advantages as a program model. Since the reachability problem for ADPN is undecidable even in the case without dynamic creation of processes, we address the bounded reachability problem, which considers only those computation sequences where the (index of the) thread accessing the shared memory is changed at most a fixed given number of times. We provide efficient algorithms for both forward and backward reachability analysis. The algorithms are based on automata techniques for symbolic representation of sets of configurations. This talk is based on joint work with Ahmed Bouajjani, Javier Esparza, and Jan Strejcek that appeared in FSTTCS 2005.

Proceedings Article
01 Jan 2006
TL;DR: In this article, a lower bound on the redundancy-query time tradeoff of the form $(r+1)t \geq \Omega(n/\log n)$ was shown for succinct representations, which use $s = n + r$ bits for some redundancy $r$.
Abstract: In the cell probe model with word size 1 (the bit probe model), a static data structure problem is given by a map $f: \{0,1\}^n \times \{0,1\}^m \rightarrow \{0,1\}$, where $\{0,1\}^n$ is a set of possible data to be stored, $\{0,1\}^m$ is a set of possible queries (for natural problems, we have $m \ll n$) and $f(x,y)$ is the answer to question $y$ about data $x$. A solution is given by a representation $\phi: \{0,1\}^n \rightarrow \{0,1\}^s$ and a query algorithm $q$ so that $q(\phi(x), y) = f(x,y)$. The time $t$ of the query algorithm is the number of bits it reads in $\phi(x)$. In this paper, we consider the case of succinct representations where $s = n + r$ for some redundancy $r \ll n$. For a boolean version of the problem of polynomial evaluation with preprocessing of coefficients, we show a lower bound on the redundancy-query time tradeoff of the form $(r+1) t \geq \Omega(n/\log n)$. In particular, for very small redundancies $r$, we get an almost optimal lower bound stating that the query algorithm has to inspect almost the entire data structure (up to a logarithmic factor). We show similar lower bounds for problems satisfying a certain combinatorial property of a coding theoretic flavor. Previously, no $\omega(m)$ lower bounds were known on $t$ in the general model for explicit functions, even for very small redundancies. By restricting our attention to systematic or index structures $\phi$ satisfying $\phi(x) = x \cdot \phi^*(x)$ for some map $\phi^*$ (where $\cdot$ denotes concatenation) we show similar lower bounds on the redundancy-query time tradeoff for the natural data structuring problems of Prefix Sum and Substring Search.

Proceedings Article
01 Jan 2006
TL;DR: This document presents a Services Research Roadmap that launches four pivotal, inherently related research themes for Service-Oriented Computing: service foundations, service composition, service management and monitoring, and service-oriented engineering.
Abstract: This document presents a Services Research Roadmap that launches four pivotal, inherently related research themes for Service-Oriented Computing (SOC): service foundations, service composition, service management and monitoring, and service-oriented engineering. Each theme is introduced briefly from a technology, state-of-the-art, and scientific-challenges standpoint. From the technology standpoint, a comprehensive review of the state of the art, standards, and current research activities in each key area is provided. From the state of the art, the major open problems and bottlenecks to progress are identified. During the seminar, each core theme was initially introduced by a leading expert in the field, who described the state of the art and highlighted open problems and important research topics for the SOC community to work on in the future. These experts were then asked to coordinate parallel workgroups that were entrusted with an in-depth analysis of the research opportunities and needs in the respective theme. The findings presented in this summary report build on the advice of those panels of experts from industry and academia who participated in this Dagstuhl Seminar and met at other occasions during the past three years, e.g., at the International Conference on Service Oriented Computing (ICSOC, see www.icsoc.org). These experts represent many disciplines including distributed computing, database and information systems, software engineering, computer architectures and middleware, and knowledge representation.

Proceedings Article
01 Jan 2006
TL;DR: Binary representations of both lambda calculus and combinatory logic terms are introduced, and their simplicity is demonstrated by providing very compact parser-interpreters for these binary languages.
Abstract: We introduce binary representations of both lambda calculus and combinatory logic terms, and demonstrate their simplicity by providing very compact parser-interpreters for these binary languages. We demonstrate their application to Algorithmic Information Theory with several concrete upper bounds on program-size complexity, including an elegant self-delimiting code for binary strings.
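
The binary encoding of lambda terms is short enough to state in full: with de Bruijn indices, an abstraction is prefixed by 00, an application by 01, and the variable with index i becomes i ones followed by a zero. A small sketch of the encoder; the term representation is my own, and the parser-interpreter side from the paper is omitted.

```python
def encode(term):
    """Binary lambda calculus encoding over de Bruijn terms:
      variable i (1-based)   -> '1'*i + '0'
      abstraction ('lam', M) -> '00' + encode(M)
      application (M, N)     -> '01' + encode(M) + encode(N)
    """
    if isinstance(term, int):          # de Bruijn index
        return "1" * term + "0"
    if term[0] == "lam":               # abstraction
        return "00" + encode(term[1])
    m, n = term                        # application
    return "01" + encode(m) + encode(n)

# Church numeral 2 = \f.\x. f (f x), de Bruijn: \ \ 2 (2 1)
two = ("lam", ("lam", (2, (2, 1))))
print(encode(two))   # 0000011100111010
```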

Proceedings Article
14 Oct 2006
TL;DR: In this paper, the authors propose an abstract model for a general anonymity system which is consistent with the definition of anonymity on which the metrics are based, and apply them to Crowds, a practical and efficient anonymity system.
Abstract: In 2001, two information theoretic anonymity metrics were proposed: the "effective anonymity set size" and the "degree of anonymity". In this talk, we propose an abstract model for a general anonymity system which is consistent with the definition of anonymity on which the metrics are based. We revisit entropy-based anonymity metrics, and we apply them to Crowds, a practical anonymity system. We discuss the differences between the two metrics and the results obtained in the example.
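
Both metrics are computed from the attacker's probability distribution over the possible senders. A minimal sketch, assuming the "effective anonymity set size" is read as the Shannon entropy of that distribution (in bits) and the "degree of anonymity" as that entropy normalised by its maximum, the logarithm of the number of users:

```python
from math import log2

def entropy(probs):
    """Shannon entropy (bits) of the attacker's probability
    assignment over the users who might be the sender."""
    return -sum(p * log2(p) for p in probs if p > 0)

def degree_of_anonymity(probs):
    """Entropy normalised by the maximum achievable for this
    number of users (the uniform distribution)."""
    return entropy(probs) / log2(len(probs))

uniform = [0.25] * 4
skewed  = [0.70, 0.10, 0.10, 0.10]
print(entropy(uniform), degree_of_anonymity(uniform))  # 2.0 bits, 1.0
print(entropy(skewed),  degree_of_anonymity(skewed))   # < 2 bits, < 1.0
```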

Proceedings Article
01 Jan 2006
TL;DR: The Compensated Horner Scheme (CHS) improves the classic Horner Scheme (HS) for evaluating univariate polynomials in floating-point arithmetic; CHS is proven to be as accurate as HS performed with twice the working precision.
Abstract: Using error-free transformations, we improve the classic Horner Scheme (HS) to evaluate (univariate) polynomials in floating point arithmetic. We prove that this Compensated Horner Scheme (CHS) is as accurate as HS performed with twice the working precision. Theoretical analysis and experiments exhibit a reasonable running time overhead being also more interesting than double-double implementations. We introduce a dynamic and validated error bound of the CHS computed value. The talk presents these results together with a survey about error-free transformations and related hypothesis.
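
A sketch of the idea, assuming the usual error-free transformations (Knuth's TwoSum and Dekker's split-based TwoProduct for IEEE doubles): run Horner on the values, run a second plain Horner recursion on the exactly captured rounding errors, and add the correction at the end. The validated dynamic error bound mentioned in the talk is omitted.

```python
def two_sum(a, b):
    """Error-free transformation: a + b = s + e exactly (Knuth)."""
    s = a + b
    bv = s - a
    e = (a - (s - bv)) + (b - bv)
    return s, e

def two_prod(a, b):
    """Error-free transformation: a * b = p + e exactly (Dekker)."""
    def split(x):
        c = 134217729.0 * x            # 2**27 + 1 for IEEE doubles
        hi = c - (c - x)
        return hi, x - hi
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def comp_horner(coeffs, x):
    """Compensated Horner: Horner on the values plus a plain Horner
    recursion on the accumulated rounding errors."""
    s, c = coeffs[0], 0.0              # coeffs[0] = leading coefficient
    for a in coeffs[1:]:
        p, pi = two_prod(s, x)
        s, sigma = two_sum(p, a)
        c = c * x + (pi + sigma)
    return s + c

# (x - 1)^5 expanded, evaluated near its root, where plain Horner
# loses most significant digits to cancellation.
coeffs = [1.0, -5.0, 10.0, -10.0, 5.0, -1.0]
x = 1.0001
plain = 0.0
for a in coeffs:
    plain = plain * x + a
print(plain, comp_horner(coeffs, x), (x - 1) ** 5)
```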

Proceedings Article
01 Jan 2006
TL;DR: In this article, the authors initiate the systematic study of the complexity of constraint satisfaction problems with succinctly specified constraint relations; thus far, the literature has primarily considered the formulation of the CSP where constraint relations are given explicitly.
Abstract: The general intractability of the constraint satisfaction problem has motivated the study of the complexity of restricted cases of this problem. Thus far, the literature has primarily considered the formulation of the CSP where constraint relations are given explicitly. We initiate the systematic study of CSP complexity with succinctly specified constraint relations. This is joint work with Hubie Chen.

Proceedings Article
01 Jan 2006
TL;DR: This work shows how to transform the problem of computing solutions to a classical Hermite Pade approximation problem for an input vector of dimension $m \times 1$ into that of computing a minimal approximant basis for a matrix of dimension $O(m) \times O(m)$, uniform degree constraint $\Theta(N/m)$, and order $\Theta(N/m)$.
Abstract: We show how to transform the problem of computing solutions to a classical Hermite Pade approximation problem for an input vector of dimension $m \times 1$, arbitrary degree constraints $(n_1, n_2, \ldots, n_m)$, and order $N := (n_1 + 1) + \cdots + (n_m + 1) - 1$, to that of computing a minimal approximant basis for a matrix of dimension $O(m) \times O(m)$, uniform degree constraint $\Theta(N/m)$, and order $\Theta(N/m)$.
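
For orientation, the classical Hermite-Pade problem referred to here asks, given power series $f_1, \ldots, f_m$ and the degree constraints above, for polynomials $p_i$, not all zero, with $\deg p_i \le n_i$, whose combination vanishes to the stated order. A standard statement of the condition (my phrasing, not the paper's):

```latex
% Classical Hermite-Pade approximation, stated for orientation:
% find p_1,...,p_m, not all zero, with deg p_i <= n_i, such that
\[
  p_1(x)\,f_1(x) + p_2(x)\,f_2(x) + \cdots + p_m(x)\,f_m(x)
    \equiv 0 \pmod{x^{N}},
  \qquad N := (n_1 + 1) + \cdots + (n_m + 1) - 1 .
\]
```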

Proceedings Article
01 Jan 2006
TL;DR: This paper investigates the complexity concept to avoid a vague use of the term `complexity' in workflow designs, presents several complexity metrics that have been used for a number of years in adjacent fields of science, and explains how they can be adapted and used to evaluate the complexity of workflows.
Abstract: During the last 20 years, complexity has been an interesting topic that has been investigated in many fields of science, such as biology, neurology, software engineering, chemistry, psychology, and economy. A survey of the various approaches to understanding complexity has led sometimes to a measurable quantity with a rigorous but narrow definition, and at other times to merely an ad hoc label. In this paper we investigate the complexity concept to avoid a vague use of the term `complexity' in workflow designs. We present several complexity metrics that have been used for a number of years in adjacent fields of science and explain how they can be adapted and used to evaluate the complexity of workflows.
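
One of the best-known adjacent-field metrics of this kind is McCabe's cyclomatic complexity from software engineering. A minimal sketch of adapting it to a workflow graph; the toy workflow and the adaptation are illustrative, the paper surveys several such metrics.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity, M = E - N + 2P, applied to a
    workflow graph with E edges, N nodes, P connected components."""
    return len(edges) - len(nodes) + 2 * components

# Toy workflow: start -> review -> (approve | reject) -> archive -> end
nodes = {"start", "review", "approve", "reject", "archive", "end"}
edges = {("start", "review"), ("review", "approve"), ("review", "reject"),
         ("approve", "archive"), ("reject", "archive"), ("archive", "end")}
print(cyclomatic_complexity(edges, nodes))   # 6 - 6 + 2 = 2: one decision
```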

Proceedings Article
01 Dec 2006
TL;DR: In this paper, the similarity between relational instances using background knowledge expressed in first-order logic is measured using a kernel over pairs of proof trees, which can be used for classification as well as regression.
Abstract: We develop kernels for measuring the similarity between relational instances using background knowledge expressed in first-order logic. The method allows us to bridge the gap between traditional inductive logic programming (ILP) representations and statistical approaches to supervised learning. Logic programs are first used to generate proofs of given visitor programs that use predicates declared in the available background knowledge. A kernel is then defined over pairs of proof trees. The method can be used for supervised learning tasks and is suitable for classification as well as regression. We report positive empirical results on Bongard-like and M-of-N problems that are difficult or impossible to solve with traditional ILP techniques, as well as on real bioinformatics and chemoinformatics data sets.

Proceedings Article
01 Jan 2006
TL;DR: An approach for automatically generating loop invariants using quantifier elimination is proposed; under certain conditions, the strongest possible invariant of the hypothesized form can be generated from most general solutions of the parametric constraints, and the approach is illustrated using the first-order theory of polynomial equations as well as Presburger arithmetic.
Abstract: An approach for automatically generating loop invariants using quantifier-elimination is proposed. An invariant of a loop is hypothesized as a parameterized formula. Parameters in the invariant are discovered by generating constraints on the parameters by ensuring that the formula is indeed preserved by the execution path corresponding to every basic cycle of the loop. The parameterized formula can be successively refined by considering execution paths one by one; heuristics can be developed for determining the order in which the paths are considered. Initialization of program variables as well as the precondition and postcondition of the loop, if available, can also be used to further refine the hypothesized invariant. Constraints on parameters generated in this way are solved for possible values of parameters. If no solution is possible, this means that an invariant of the hypothesized form does not exist for the loop. Otherwise, if the parametric constraints are solvable, then under certain conditions on methods for generating these constraints, the strongest possible invariant of the hypothesized form can be generated from most general solutions of the parametric constraints. The approach is illustrated using the first-order theory of polynomial equations as well as Presburger arithmetic.
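
A tiny worked instance of the parametric-constraint idea, using sympy; the loop, the hypothesized linear invariant shape, and all names are illustrative, not the paper's examples. For the loop body x := x + 1; y := y + 2 with initialization x = y = 0, hypothesize the invariant a*x + b*y + c = 0, generate constraints from initialization and preservation, and solve for the parameters.

```python
import sympy as sp

x, y, a, b, c = sp.symbols("x y a b c")

# Hypothesized parameterized invariant for:
#     while ...: x := x + 1; y := y + 2      (initially x = 0, y = 0)
inv = a * x + b * y + c

# Constraint 1: the invariant holds initially.
init = inv.subs({x: 0, y: 0})                                # -> c
# Constraint 2: the invariant is preserved by the loop body, for all x, y
# (here the difference is already free of x and y).
preserved = sp.expand(inv.subs({x: x + 1, y: y + 2}) - inv)  # -> a + 2*b

solution = sp.solve([sp.Eq(init, 0), sp.Eq(preserved, 0)], [a, c], dict=True)
print(solution)               # [{a: -2*b, c: 0}]
print(inv.subs(solution[0]))  # b*y - 2*b*x, i.e. y = 2x for any b != 0
```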

Proceedings Article
01 Jan 2006
TL;DR: PRISM, a symbolic-statistical modeling language developed by the authors since 1997, recently incorporated a program transformation technique to handle failure in generative modeling, opening a way to new breeds of symbolic models, including EM learning from negative observations, constrained HMMs, and finite PCFGs.
Abstract: PRISM, a symbolic-statistical modeling language we have been developing since '97, recently incorporated a program transformation technique to handle failure in generative modeling. I'll show this feature opens a way to new breeds of symbolic models, including EM learning from negative observations, constrained HMMs and finite PCFGs.

Proceedings Article
01 Jan 2006
TL;DR: The relationship between the $r$-th order nonlinearity and a recent cryptographic criterion called the algebraic immunity strengthens the reasons why the algebraic immunity can be considered as a further cryptographic complexity criterion.
Abstract: Cryptographic Boolean functions must be complex to satisfy Shannon's principle of confusion. But the cryptographic viewpoint on complexity is not the same as in circuit complexity. The two main criteria evaluating the cryptographic complexity of Boolean functions on $F_2^n$ are the nonlinearity (and more generally the $r$-th order nonlinearity, for every positive $r< n$) and the algebraic degree. Two other criteria have also been considered: the algebraic thickness and the non-normality. After recalling the definitions of these criteria and why, asymptotically, almost all Boolean functions are deeply non-normal and have high algebraic degrees, high ($r$-th order) nonlinearities and high algebraic thicknesses, we study the relationship between the $r$-th order nonlinearity and a recent cryptographic criterion called the algebraic immunity. This relationship strengthens the reasons why the algebraic immunity can be considered as a further cryptographic complexity criterion.

Proceedings Article
01 Jan 2006
TL;DR: In this paper, a parametrised functional interpretation of the Dialectica interpretation is presented, where the choice of the counter-examples for A becomes witnesses for the negation of A, and information about the witnesses of A is interested in.
Abstract: The purpose of this article is to present a parametrised functional interpretation. Depending on the choice of the two parameters one obtains well-known functional interpretations, among others GAƒÂ¶del's Dialectica interpretation, Diller-Nahm's variant of the Dialectica interpretation, Kreisel's modified realizability, Kohlenbach's monotone interpretations and Stein's family of functional interpretations. We show that all these interpretations only differ on two basic choices, which are captured by the parameters, namely the choices of (1) "how much" of the counter-examples for A becomes witnesses for the negation of A, and of (2) "how much" information about the witnesses of A one is interested in.

Proceedings Article
01 Jan 2006
TL;DR: The point-set pattern discovery algorithms described here can be adapted for data compression, and the efficient encodings generated when this compression algorithm is run on music data seem to resemble the motivic-thematic analyses produced by human experts.
Abstract: An algorithm that discovers the themes, motives and other perceptually significant repeated patterns in a musical work can be used, for example, in a music information retrieval system for indexing a collection of music documents so that it can be searched more rapidly. It can also be used in software tools for music analysis and composition and in a music transcription system or model of music cognition for discovering grouping structure, metrical structure and voice-leading structure. In most approaches to pattern discovery in music, the data is assumed to be in the form of strings. However, string-based methods become inefficient when one is interested in finding highly embellished occurrences of a query pattern or searching for polyphonic patterns in polyphonic music. These limitations can be avoided by representing the music as a set of points in a multidimensional Euclidean space. This point-set pattern matching approach allows the maximal repeated patterns in a passage of polyphonic music to be discovered in quadratic time and all occurrences of these patterns to be found in cubic time. More recently, Clifford et al. (2006) have shown that the best match for a query point set within a text point set of size n can be found in O(n log n) time by incorporating randomised projection, uniform hashing and FFT into the point-set pattern matching approach. Also, by using appropriate heuristics for selecting compact maximal repeated patterns with many non-overlapping occurrences, the point-set pattern discovery algorithms described here can be adapted for data compression. Moreover, the efficient encodings generated when this compression algorithm is run on music data seem to resemble the motivic-thematic analyses produced by human experts.
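
The core quadratic-time step is easy to sketch: compute the translation vector between every ordered pair of points; all points sharing a given vector form a maximal pattern that repeats at that translation. A minimal sketch over (onset, pitch) pairs; the toy data and the omission of the heuristics for selecting compact patterns with many non-overlapping occurrences are mine.

```python
from collections import defaultdict

def maximal_translatable_patterns(points):
    """SIA-style discovery sketch: group the points of each ordered
    pair by their translation vector; the points sharing a vector form
    a maximal pattern repeated at that translation (O(n^2) pairs)."""
    by_vector = defaultdict(set)
    pts = sorted(points)
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            vec = (q[0] - p[0], q[1] - p[1])
            by_vector[vec].add(p)
    return {v: pat for v, pat in by_vector.items() if len(pat) > 1}

# A tiny 'score' of (onset, pitch) points; the motif {(0,60),(1,62)}
# recurs transposed up a fifth and shifted by 4 beats: vector (4, 7).
notes = [(0, 60), (1, 62), (4, 67), (5, 69)]
for vec, pattern in maximal_translatable_patterns(notes).items():
    print(vec, sorted(pattern))
```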

Proceedings Article
01 Jan 2006
TL;DR: In this article, the use of event logs of web services and behavioral service descriptions as input for process mining and conformance checking is discussed, which is the act of verifying whether or not one or more parties stick to an agreed-upon behavior, by observing their actual behavior as recorded in message logs.
Abstract: Recently, languages such as BPEL and WS-CDL have been proposed to describe interactions between services and their behavioral dependencies. The emergence of these languages heralds an era where richer service descriptions, going beyond WSDL-like interfaces, will be available. However, what can these richer service descriptions be used for? This talk discusses the use of event logs of web services and behavioral service descriptions as input for process mining and conformance checking. Conformance checking is the act of verifying whether or not one or more parties stick to an agreed-upon behavior, by observing their actual behavior as recorded in message logs. This talk shows that it is possible to translate BPEL abstract business processes to Petri nets and to relate SOAP messages to transitions in the Petri net. The approach has been implemented in the ProM framework.
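
A drastically simplified picture of conformance checking by replay: for each logged event, fire the corresponding Petri net transition, and report a deviation if a required token is missing. The net encoding and the three-step example are illustrative; ProM's actual conformance checking is far richer (fitness metrics, partial replay, and so on).

```python
def replay_fits(net, marking, trace):
    """Naive token replay: each event consumes and produces tokens
    according to its transition; a missing token means the observed
    behaviour deviates from the agreed-upon model."""
    marking = dict(marking)
    for event in trace:
        consume, produce = net[event]
        for place in consume:
            if marking.get(place, 0) < 1:
                return False                 # deviation detected
            marking[place] -= 1
        for place in produce:
            marking[place] = marking.get(place, 0) + 1
    return True

# Model: receive -> check -> ship (a three-step agreed-upon protocol).
net = {"receive": (["start"], ["p1"]),
       "check":   (["p1"],    ["p2"]),
       "ship":    (["p2"],    ["end"])}
print(replay_fits(net, {"start": 1}, ["receive", "check", "ship"]))  # True
print(replay_fits(net, {"start": 1}, ["receive", "ship"]))           # False
```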

Proceedings Article
01 Jan 2006
TL;DR: A prototype of a formalism-independent workflow engine based on AMFIBIA is implemented; the engine is open for new aspects of business process modelling, and new modelling formalisms can be added to it.
Abstract: AMFIBIA is a meta-model that formalizes the essential aspects and concepts of business process modelling. Though AMFIBIA is not the first approach to formalizing the aspects and concepts of business process modelling, it is more ambitious in the following respects: First, it is independent from particular modelling formalisms of business processes and it is designed in such a way that any formalism for modelling some aspect of a business process can be plugged into AMFIBIA. Therefore, AMFIBIA is formalism-independent. Second, it is not biased toward any aspect of business processes, and the different aspects can be, basically, considered and modelled independently of each other. Moreover, it is not restricted to a fixed set of aspects; further aspects of business processes can be easily integrated. Third, AMFIBIA does not only name and relate the concepts of business process modelling, as is typically done in ontologies or architectures for business process modelling. Rather, AMFIBIA also captures the interaction among the different aspects and concepts, and therefore fully defines the dynamic behaviour of a business process model, with its different aspects modelled in different notations. To prove this claim, we implemented a prototype of a formalism-independent workflow engine based on AMFIBIA: this workflow engine, also called AMFIBIA, is open for new aspects of business process modelling, and new modelling formalisms can be added to it. In this paper, we will present AMFIBIA and the prototype workflow engine based on this meta-model and discuss the principles and concepts of its design.

Proceedings Article
01 Jan 2006
TL;DR: In this paper, selected bio-inspired technologies and their applicability for sensor/actuator networks are discussed, including artificial immune system, swarm intelligence, and intercellular information exchange.
Abstract: The communication between networked embedded systems has become a major research domain in the communication networks area. Wireless sensor networks (WSN) and sensor/actuator networks (SANET), built of huge numbers of interacting nodes, form the basis for this research. Issues such as mobility, network size, deployment density, and energy are the key factors for the development of new communication methodologies. Self-organization mechanisms promise to solve scalability problems -- unfortunately, by decreasing the determinism and the controllability of the overall system. Self-organization was first studied in nature, and its design principles, such as feedback loops and behavior based on local information, have been adapted to technical systems. Bio-inspired networking is the keyword in the communications domain. In this paper, selected bio-inspired technologies and their applicability for sensor/actuator networks are discussed. This includes for example the artificial immune system, swarm intelligence, and the intercellular information exchange.

Proceedings Article
01 Jan 2006
TL;DR: In this paper, the authors presented secure two-party protocols for various core problems in linear algebra, including the problem of computing the rank of an encrypted matrix and solving systems of linear equations.
Abstract: In this work we present secure two-party protocols for various core problems in linear algebra. Our main building block is a protocol to obliviously decide singularity of an encrypted matrix: Bob holds an $n \times n$ matrix $M$, encrypted with Alice's secret key, and wants to learn whether the matrix is singular or not (and nothing beyond that). We give an interactive protocol between Alice and Bob that solves the above problem with optimal communication complexity while at the same time achieving low round complexity. More precisely, the number of communication rounds in our protocol is $polylog(n)$ and the overall communication is roughly $O(n^2)$ (note that the input size is $n^2$). At the core of our protocol we exploit some nice mathematical properties of linearly recurrent sequences and their relation to the characteristic polynomial of the matrix $M$, following [Wiedemann, 1986]. With our new techniques we are able to improve the round complexity of the communication efficient solution of [Nissim and Weinreb, 2006] from $n^{0.275}$ to $polylog(n)$. Based on our singularity protocol we further extend our result to the problems of securely computing the rank of an encrypted matrix and solving systems of linear equations.