
Showing papers in "Lecture Notes in Computer Science in 1996"



Journal Article
TL;DR: Uppaal as mentioned in this paper is a tool suite for automatic verification of safety and bounded liveness properties of real-time systems modeled as networks of timed automata, which includes a graphical interface that supports graphical and textual representations of networks of automata and automatic transformation from graphical representations to textual format.
Abstract: Uppaal is a tool suite for automatic verification of safety and bounded liveness properties of real-time systems modeled as networks of timed automata. It includes: a graphical interface that supports graphical and textual representations of networks of timed automata, together with automatic transformation from the graphical representation to the textual format; a compiler that transforms a certain class of linear hybrid systems into networks of timed automata; and a model-checker implemented on top of constraint-solving techniques. Uppaal also supports diagnostic model-checking, providing diagnostic information when verification of a particular real-time system fails.

810 citations


Journal Article
TL;DR: The Condensation algorithm as discussed by the authors combines factored sampling with learned dynamical models to propagate an entire probability distribution for object position and shape, over time, achieving state-of-the-art performance.
Abstract: The problem of tracking curves in dense visual clutter is a challenging one. Trackers based on Kalman filters are of limited use; because they are based on Gaussian densities which are unimodal, they cannot represent simultaneous alternative hypotheses. Extensions to the Kalman filter to handle multiple data associations work satisfactorily in the simple case of point targets, but do not extend naturally to continuous curves. A new, stochastic algorithm is proposed here, the Condensation algorithm — Conditional Density Propagation over time. It uses ‘factored sampling’, a method previously applied to interpretation of static images, in which the distribution of possible interpretations is represented by a randomly generated set of representatives. The Condensation algorithm combines factored sampling with learned dynamical models to propagate an entire probability distribution for object position and shape, over time. The result is highly robust tracking of agile motion in clutter, markedly superior to what has previously been attainable from Kalman filtering. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time.
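
As a rough illustration of the algorithm described above (a minimal sketch, not the authors' implementation): the state distribution is represented by weighted samples, and each iteration resamples them, pushes them through an assumed linear dynamical model with process noise, and reweights them by an assumed observation likelihood.

```python
import numpy as np

def condensation_step(samples, weights, A, process_noise, likelihood, rng):
    """One Condensation-style iteration over a weighted sample set.

    samples: (N, d) array of state hypotheses, weights: (N,) normalised weights,
    A: assumed linear dynamics matrix (stand-in for a learned model),
    likelihood: function mapping an (N, d) array of states to observation likelihoods.
    """
    n = len(samples)
    # 1. Select: resample state hypotheses in proportion to their weights.
    resampled = samples[rng.choice(n, size=n, p=weights)]
    # 2. Predict: push every sample through the dynamical model plus process noise.
    predicted = resampled @ A.T + rng.normal(0.0, process_noise, resampled.shape)
    # 3. Measure: reweight by the observation likelihood and renormalise.
    new_weights = likelihood(predicted)
    return predicted, new_weights / new_weights.sum()

# Toy usage: 1-D position/velocity state, Gaussian likelihood around one observation.
rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0], [0.0, 1.0]])            # constant-velocity dynamics (assumed)
samples = rng.normal(0.0, 1.0, size=(500, 2))
weights = np.full(500, 1 / 500)
likelihood = lambda s: np.exp(-0.5 * (s[:, 0] - 2.0) ** 2)   # observation at position 2.0
samples, weights = condensation_step(samples, weights, A, 0.1, likelihood, rng)
print("posterior mean position:", float((weights * samples[:, 0]).sum()))
```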

667 citations


Book ChapterDOI
TL;DR: This paper surveys the different algorithms, data-structures and tools that have been proposed in the literature to solve the so-called reachability problem.
Abstract: The theory of timed automata provides a formal framework to model and to verify the correct functioning of real-time systems. Among the different verification problems that have been investigated within this theory, the so-called reachability problem has been the most thoroughly studied. This problem is stated as follows: given two states of the system, is there an execution starting at one of them that reaches the other? The first reason for studying such a problem is that safety properties can be expressed as the non-reachability of a set of states in which the system is considered to show incorrect or unsafe functioning. Second, the algorithms developed for analyzing other classes of properties are essentially based on the algorithms developed for solving the reachability question. In this paper we survey the different algorithms, data structures and tools that have been proposed in the literature to solve this problem.
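
All of the methods surveyed ultimately answer reachability with a worklist search over (possibly symbolic) states. A minimal explicit-state sketch of that loop is given below; in a timed-automata tool the states would be (location, zone) pairs and zone inclusion would replace plain set membership.

```python
from collections import deque

def reachable(initial, target, successors):
    """Worklist reachability: can `target` be reached from `initial`?

    `successors(state)` returns the states reachable in one step. In a
    timed-automata tool the states would be symbolic (location, zone) pairs
    and zone inclusion would replace the plain set-membership test below.
    """
    visited = {initial}
    worklist = deque([initial])
    while worklist:
        state = worklist.popleft()
        if state == target:
            return True
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                worklist.append(nxt)
    return False

# Toy transition graph: 0 -> 1, 1 -> 2, 1 -> 3.
graph = {0: [1], 1: [2, 3], 2: [], 3: []}
print(reachable(0, 3, graph.__getitem__))   # True
print(reachable(2, 0, graph.__getitem__))   # False
```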

193 citations



Journal Article
TL;DR: This cipher combines highly non-linear substitution boxes with maximum distance separable error correcting codes (MDS codes) to guarantee good diffusion, and is resistant to differential and linear cryptanalysis after a small number of rounds.
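
As a hedged sketch of the design principle only (not the cipher summarized above): one substitution-permutation round that applies a non-linear byte substitution and then multiplies the state by an MDS matrix over GF(2^8). The S-box below is field inversion and the matrix is the well-known AES MixColumns circulant, both chosen purely as familiar examples.

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo the polynomial x^8 + x^4 + x^3 + x + 1."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return result

# Non-linear S-box: multiplicative inversion in GF(2^8) (the AES S-box adds an
# affine map on top of this); 0 is mapped to 0 by convention.
SBOX = [0] + [next(y for y in range(1, 256) if gf_mul(x, y) == 1) for x in range(1, 256)]

# MDS diffusion matrix: the circulant AES MixColumns matrix, used here only as a
# familiar example of a maximum distance separable code.
MDS = [[2, 3, 1, 1],
       [1, 2, 3, 1],
       [1, 1, 2, 3],
       [3, 1, 1, 2]]

def spn_round(state):
    """One round on a 4-byte column: byte-wise substitution, then MDS diffusion."""
    sub = [SBOX[b] for b in state]
    return [gf_mul(row[0], sub[0]) ^ gf_mul(row[1], sub[1]) ^
            gf_mul(row[2], sub[2]) ^ gf_mul(row[3], sub[3]) for row in MDS]

print([hex(b) for b in spn_round([0x01, 0x23, 0x45, 0x67])])
```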

184 citations


Book ChapterDOI
TL;DR: Fault handling in Multi-Agent Systems is not much addressed in current research and is often assumed to be well covered by traditional methods; this paper shows that this is not necessarily true, at least not under the assumptions on applications made by the authors.
Abstract: Fault handling in Multi-Agent Systems (MAS) is not much addressed in current research. Normally, it is considered difficult to address in detail and often well covered by traditional methods, relying on the underlying communication and operating system. In this paper it is shown that this is not necessarily true, at least not with the assumptions on applications we have made. These assumptions are a massive distribution of computing components, a heterogeneous underlying infrastructure (in terms of hardware, software and communication methods), an emerging configuration, possibly different parties in control of sub-systems, and real-time demands in parts of the system.

120 citations


Journal Article
TL;DR: This paper shows that carefully coded implementations of the MD4-family of hash functions are able to exploit the Pentium's superscalar architecture to its maximum effect: the performance with respect to execution on a non-parallel architecture increases by about 60%.
Abstract: With the advent of the Pentium processor, parallelization finally became available to Intel-based computer systems. One of the design principles of the MD4-family of hash functions (MD4, MD5, SHA-1, RIPEMD-160) is to be fast on the 32-bit Intel processors. This paper shows that carefully coded implementations of these hash functions are able to exploit the Pentium's superscalar architecture to its maximum effect: the performance with respect to execution on a non-parallel architecture increases by about 60%. This is an important result in view of the recent claims on the limited data bandwidth of these hash functions. Moreover, it is conjectured that these implementations are very close to optimal. It will also be shown that the performance penalty incurred by non-cached data and endianness conversion is limited, in the order of 10% of running time.

81 citations



Journal Article
TL;DR: In this paper, the authors consider representations of graphs as rectangle-visibility graphs and as doubly linear graphs, prove that such graphs on n vertices have at most 6n−20 (respectively, 6n−18) edges, and give examples of rectangle-visibility graphs with 6n−20 edges for each n ≥ 8.
Abstract: This paper considers representations of graphs as rectangle-visibility graphs and as doubly linear graphs. These are, respectively, graphs whose vertices are isothetic rectangles in the plane with adjacency determined by horizontal and vertical visibility, and graphs that can be drawn as the union of two straight-edged planar graphs. We prove that these graphs have, with n vertices, at most 6n−20 (resp., 6n−18) edges, and we provide examples of these graphs with 6n−20 edges for each n≥8.
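
A minimal sketch of the underlying visibility test, assuming rectangles are given as (x1, y1, x2, y2) tuples and requiring a band of horizontal sight lines of positive height; endpoint conventions differ between papers, so treat this as illustrative only.

```python
def horizontally_visible(a, b, others):
    """True if a band of horizontal sight lines of positive height joins a and b.

    Rectangles are (x1, y1, x2, y2) with x1 < x2 and y1 < y2; `others` are the
    remaining rectangles of the layout, which may block the view.
    """
    if a[0] > b[0]:
        a, b = b, a                              # make `a` the left rectangle
    if a[2] > b[0]:
        return False                             # no horizontal gap between them
    lo, hi = max(a[1], b[1]), min(a[3], b[3])    # shared y-range
    if lo >= hi:
        return False
    # y-intervals blocked by rectangles lying in the gap between a and b.
    blocked = sorted((max(c[1], lo), min(c[3], hi)) for c in others
                     if c[0] < b[0] and c[2] > a[2] and c[1] < hi and c[3] > lo)
    reach = lo                                   # sweep: look for an uncovered sub-interval
    for start, end in blocked:
        if start > reach:
            return True
        reach = max(reach, end)
    return reach < hi

left, right, wall = (0, 0, 1, 4), (3, 1, 4, 5), (2, 2, 2.5, 6)
print(horizontally_visible(left, right, [wall]))            # True: the band 1 < y < 2 is clear
print(horizontally_visible(left, right, [(2, 0, 2.5, 6)]))  # False: the gap is fully blocked
```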

51 citations


Journal Article
TL;DR: In this paper, a generalised epipolar constraint is introduced, which applies to points, curves, as well as to apparent contours of surfaces, and the theory is developed for both continuous and discrete motion, known and unknown orientation, calibrated and uncalibrated, perspective, weak perspective and orthographic cameras.
Abstract: In this paper we discuss structure and motion problems for curved surfaces. These are studied using the silhouettes or apparent contours in the images. The problem of determining camera motion from the apparent contours of curved three-dimensional surfaces is studied. It is shown how special points, called epipolar tangency points or frontier points, can be used to solve this problem. A generalised epipolar constraint is introduced, which applies to points and curves as well as to apparent contours of surfaces. The theory is developed for both continuous and discrete motion, known and unknown orientation, calibrated and uncalibrated, perspective, weak perspective and orthographic cameras. Results of an iterative scheme to recover the epipolar line structure from real image sequences, using only the outlines of curved surfaces, are presented. A statistical evaluation is performed to estimate the stability of the solution. It is also shown how the motion of the camera over a sequence of images can be obtained from the relative motion between image pairs.

Book ChapterDOI
TL;DR: How statecharts are used to model the behavior of complex reactive systems is described, according to the two main ways that have been suggested for modeling the structure of such systems: structured analysis and object-orientation.
Abstract: We describe how statecharts are used to model the behavior of complex reactive systems. The paper is divided into two parts, according to the two main ways that have been suggested for modeling the structure of such systems: structured analysis (SA) and object-orientation (OO). It uses a cardiac pacemaker example for illustration. (This paper is available in hard-copy only.)
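
As a toy illustration of the hierarchy idea behind statecharts, a transition declared on a superstate applies to all of its substates. The pacemaker states and events below are hypothetical and not taken from the paper (which is available in hard copy only).

```python
# Hypothetical pacemaker modes: transitions are looked up on the current state
# first and then on its ancestors, which is the essence of statechart hierarchy.
PARENT = {"Sensing": "On", "Pacing": "On", "On": None, "Off": None}

TRANSITIONS = {                      # (state, event) -> next state
    ("Off", "power_on"): "Sensing",
    ("Sensing", "no_beat_detected"): "Pacing",
    ("Pacing", "beat_detected"): "Sensing",
    ("On", "power_off"): "Off",      # declared once on the superstate "On"
}

def step(state, event):
    """Fire `event`: search the current state and its ancestors for a transition."""
    s = state
    while s is not None:
        nxt = TRANSITIONS.get((s, event))
        if nxt is not None:
            return nxt
        s = PARENT[s]
    return state                     # no transition enabled; stay in place

state = "Off"
for event in ["power_on", "no_beat_detected", "power_off"]:
    state = step(state, event)
    print(event, "->", state)        # power_off is handled by the superstate "On"
```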


Journal Article
TL;DR: The ElGamal signature scheme has a broadband covert channel that does not require the sender to compromise her signing key; the construction also shows that many discrete-log-based systems are insecure because they operate in more than one group at a time and key material may leak through groups in which the discrete log is easy, though the DSA is not vulnerable in this way.
Abstract: Simmons asked whether there exists a signature scheme with a broadband covert channel that does not require the sender to compromise the security of her signing key. We answer this question in the affirmative; the ElGamal signature scheme has such a channel. Thus, contrary to popular belief, the design of the DSA does not maximise the covert utility of its signatures, but minimises it. Our construction also shows that many discrete log based systems are insecure: they operate in more than one group at a time, and key material may leak through those groups in which discrete log is easy. However, the DSA is not vulnerable in this way.
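
A toy sketch in the spirit of the abstract's observation that material can leak through groups in which the discrete log is easy; the parameters and payload encoding below are hypothetical and deliberately tiny, not the paper's exact construction. Because p-1 has a small factor m, anyone can recover k mod m from the first signature component r by a discrete log in the order-m subgroup, so the nonce can carry a covert value without the signer revealing her key.

```python
import hashlib
import math
import random

# Toy parameters (hypothetical, far too small for real use).
p = 211                       # prime with a smooth group order: p - 1 = 2 * 3 * 5 * 7
m = 15                        # covert sub-channel: discrete logs are easy in the order-15 subgroup
g = next(a for a in range(2, p)
         if all(pow(a, (p - 1) // q, p) != 1 for q in (2, 3, 5, 7)))  # primitive root mod p

def digest(message):
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % (p - 1)

def embed(covert, rng):
    """Choose a nonce k with k = covert (mod m) that is still usable for signing."""
    while True:
        k = covert + m * rng.randrange(1, (p - 1) // m)
        if math.gcd(k, p - 1) == 1:
            return k

def sign(message, x, k):
    r = pow(g, k, p)
    s = (pow(k, -1, p - 1) * (digest(message) - x * r)) % (p - 1)
    return r, s

def verify(message, r, s, y):
    return 0 < r < p and pow(g, digest(message), p) == (pow(y, r, p) * pow(r, s, p)) % p

def extract(r):
    """Recover k mod m from r alone: a discrete log in the small order-m subgroup."""
    h = pow(g, (p - 1) // m, p)          # generator of the order-m subgroup
    target = pow(r, (p - 1) // m, p)     # equals h^(k mod m)
    return next(c for c in range(m) if pow(h, c, p) == target)

rng = random.Random(1)
x = 101                                  # signer's private key
y = pow(g, x, p)
covert = 11                              # payload; chosen coprime to m so gcd(k, p-1) = 1 is possible
r, s = sign(b"routine status report", x, embed(covert, rng))
print("signature verifies:", verify(b"routine status report", r, s, y))
print("extracted covert value:", extract(r))   # 11, recovered without knowing any key
```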

Book ChapterDOI
TL;DR: It is shown how declarative diagnosis techniques can be extended to cope with verification of operational properties and of abstract properties, such as types and groundness dependencies, by using a simple semantic framework based on abstract interpretation.
Abstract: We show how declarative diagnosis techniques can be extended to cope with verification of operational properties, such as computed answers, and of abstract properties, such as types and groundness dependencies. The extension is achieved by using a simple semantic framework based on abstract interpretation. The resulting technique (abstract diagnosis) leads to elegant bottom-up and top-down verification methods, which do not require the symptoms to be determined in advance, and which are effective in the case of abstract properties described by finite domains.

Book ChapterDOI
TL;DR: Basic principles that underlie proof-based system engineering are introduced and an analysis of the Ariane 5 Flight 501 failure serves to illustrate how proof- based system engineering also helps in diagnosing causes of failures.
Abstract: We introduce basic principles that underlie proof-based system engineering, an engineering discipline aimed at computer-based systems. This discipline serves to avoid system engineering faults. It is based upon fulfilling proof obligations, notably establishing proofs that decisions regarding system design and system dimensioning are correct, before embarking on the implementation or the fielding of a computer-based system. We also introduce a proof-based system engineering method which has been applied to diverse projects involving embedded systems. These projects are presented and lessons learned are reported. An analysis of the Ariane 5 Flight 501 failure serves to illustrate how proof-based system engineering also helps in diagnosing causes of failures.


Book ChapterDOI
TL;DR: The clocked transition system (CTS) model is a development of the previous timed transition model, where some of the changes are inspired by the model of timed automata, and leads to a simpler style of temporal specification and verification, requiring no extension of the temporal language.
Abstract: This paper presents a new computational model for real-time systems, called the clocked transition system (CTS) model. The CTS model is a development of our previous timed transition model, where some of the changes are inspired by the model of timed automata. The new model leads to a simpler style of temporal specification and verification, requiring no extension of the temporal language. We present verification rules for proving safety properties (including time-bounded response properties) of clocked transition systems, and separate rules for proving (time-unbounded) response properties. All rules are associated with verification diagrams. The verification of response properties requires adjustments of the proof rules developed for untimed systems, reflecting the fact that progress in real-time systems is ensured by the progress of time and not by fairness. The style of the verification rules is very close to the verification style of untimed systems, which allows the (re)use of verification methods and tools developed for untimed reactive systems for proving all interesting properties of real-time systems.

Book ChapterDOI
TL;DR: The main goal of the end-to-end evaluation procedure is to determine translation accuracy on a test set of previously unseen dialogues, which helps to verify the general coverage of the knowledge sources, to guide development efforts, and to track improvement over time.
Abstract: JANUS is a multi-lingual speech-to-speech translation system designed to facilitate communication between two parties engaged in a spontaneous conversation in a limited domain. In this paper we describe our methodology for evaluating translation performance. Our current focus is on end-to-end evaluations: the evaluation of the translation capabilities of the system as a whole. The main goal of our end-to-end evaluation procedure is to determine translation accuracy on a test set of previously unseen dialogues. Other goals include evaluating the effectiveness of the system in conveying domain-relevant information and in detecting and dealing appropriately with utterances (or portions of utterances) that are out-of-domain. End-to-end evaluations are performed in order to verify the general coverage of our knowledge sources, to guide our development efforts, and to track our improvement over time. We discuss our evaluation procedures, the criteria used for assigning scores to translations produced by the system, and the tools developed for performing this task. Recent Spanish-to-English performance evaluation results are presented as an example.

Book ChapterDOI
TL;DR: The motivation for the work is to embody large applications with dialog capabilities and to do so the authors have adopted an object-oriented approach to dialog.
Abstract: If computer systems are to become the information-seeking, task-executing, problem-solving agents we want them to be, then they must be able to communicate as effectively with humans as humans do with each other. It is thus desirable to develop computer systems that can also communicate with humans via spoken natural language dialog. The motivation for our work is to embody large applications with dialog capabilities. To do so we have adopted an object-oriented approach to dialog.

Journal Article
TL;DR: In this paper, a new attack on the LOKI DBH mode was presented, which breaks the last remaining subclass in a wide class of efficient hash functions which have been proposed in the literature.
Abstract: We consider constructions for cryptographic hash functions based on m-bit block ciphers. First we present a new attack on the LOKI DBH mode: the attack finds collisions in 2^(3m/4) encryptions, which should be compared to 2^m encryptions for a brute force attack. This attack breaks the last remaining subclass in a wide class of efficient hash functions which have been proposed in the literature. We then analyze hash functions based on a collision resistant compression function for which finding a collision requires at least 2^m encryptions, providing a lower bound of the complexity of collisions of the hash function. A new class of constructions is proposed, based on error correcting codes over GF(2^2), and a proof of security is given, which relates their security to that of single block hash functions. For example, a compression function is presented which requires about 4 encryptions to hash an m-bit block, and for which finding a collision requires at least 2^m encryptions. This scheme has the same hash rate as MDC-4, but better security against collision attacks. Our method can be used to construct compression functions with even higher levels of security at the cost of more internal memory.
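
For background, a minimal sketch of the classical single-block-length Davies-Meyer construction h_i = E_{m_i}(h_{i-1}) XOR h_{i-1}, the kind of building block such analyses start from; the 64-bit Feistel "cipher" is a toy stand-in, and none of this is the code-based scheme proposed in the paper.

```python
import hashlib
import struct

MASK32 = (1 << 32) - 1

def toy_block_cipher(key, block, rounds=8):
    """Toy 64-bit Feistel permutation (a stand-in; NOT a real block cipher)."""
    left, right = block >> 32, block & MASK32
    for i in range(rounds):
        f = int.from_bytes(
            hashlib.sha256(struct.pack(">QQI", key, right, i)).digest()[:4], "big")
        left, right = right, left ^ f            # standard Feistel swap
    return (left << 32) | right

def davies_meyer(message_blocks, iv=0x0123456789ABCDEF):
    """Single-block-length hash: h_i = E_{m_i}(h_{i-1}) XOR h_{i-1} (Davies-Meyer)."""
    h = iv
    for m in message_blocks:                     # blocks as 64-bit integers; real use
        h = toy_block_cipher(m, h) ^ h           # would pad and split a byte string
    return h

print(hex(davies_meyer([0x1111, 0x2222, 0x3333])))
```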

Book ChapterDOI
TL;DR: JANUS as mentioned in this paper is a multi-lingual speech-to-speech translation system designed to facilitate communication between two parties engaged in a spontaneous conversation in a limited domain, which relies on a combination of acoustic, lexical, semantic and statistical knowledge sources.
Abstract: JANUS is a multi-lingual speech-to-speech translation system designed to facilitate communication between two parties engaged in a spontaneous conversation in a limited domain. In this paper we describe how multi-level segmentation of single utterance turns improves translation quality and facilitates accurate translation in our system. We define the basic dialogue units that are handled by our system, and discuss the cues and methods employed by the system in segmenting the input utterance into such units. Utterance segmentation in our system is performed in a multi-level incremental fashion, partly prior and partly during analysis by the parser. The segmentation relies on a combination of acoustic, lexical, semantic and statistical knowledge sources, which are described in detail in the paper. We also discuss how our system is designed to disambiguate among alternative possible input segmentations.

Journal Article
TL;DR: This paper shows how FO + linear can be strictly extended within FO + poly in a safe way, and shows the need for more expressive linear query languages if one wants to sustain the desirability of the linear database model.
Abstract: It has been argued that the linear database model, in which semi-linear sets are the only geometric objects, is very suitable for most spatial database applications. For querying linear databases, the language FO + linear has been proposed. We present both negative and positive results regarding the expressiveness of FO + linear. First, we show that the dimension query is definable in FO + linear, which allows us to solve several interesting queries. Next, we show the non-definability of a whole class of queries that are related to sets not definable in FO + linear. This result both sharpens and generalizes earlier results independently found by Afrati et al. and the present authors, and demonstrates the need for more expressive linear query languages if we want to sustain the desirability of the linear database model. In this paper, we show how FO + linear can be strictly extended within FO + poly in a safe way. Whether any of the proposed extensions is complete for the linear queries definable in FO + poly remains open. We do show, however, that it is undecidable whether an expression in FO + poly induces a linear query.

Journal Article
TL;DR: This chapter surveys timed automata as a formalism for model checking real-time systems, as an extension of finite-state automata with real-valued variables for measuring time.
Abstract: Efficient automatic model-checking algorithms for real-time systems have been obtained in recent years based on the state-region graph technique of Alur, Courcoubetis and Dill. However, these algorithms are faced with two potential types of explosion arising from parallel composition: explosion in the space of control nodes, and explosion in the region space over clock-variables.

Journal Article
TL;DR: In this paper, a generic coordination architecture that supports relative and absolute, discrete time is presented for large, heterogeneous, distributed software systems and a major example of its use is a distributed auction.
Abstract: The notion of “time” plays an important role when coordinating large, heterogeneous, distributed software systems. We present a generic coordination architecture that supports relative and absolute, discrete time. First, we briefly sketch the ToolBus coordination architecture. Next, we give a major example of its use: a distributed auction. Finally, we present the theory underlying our notion of discrete time.

Journal Article
TL;DR: Field programmable gate arrays (FPGAs) are a recently developed family of programmable circuits that, like mask programmable gate arrays, implement thousands of logic gates but, like traditional programmable logic devices (PLDs), can be programmed by the user in situ and in a few seconds.
Abstract: Field programmable gate arrays (FPGA) are a recently developed family of programmable circuits. Like mask programmable gate arrays (MPGA), FPGAs implement thousands of logic gates. But, unlike MPGAs, a user can program an FPGA design as with traditional programmable logic devices (PLDs): in situ and in a few seconds. These features, added to reprogrammability, have made FPGAs the dream tool for evolvable hardware. This paper is an introduction to FPGAs, presenting differences with more traditional PLDs and giving a survey of two commercial architectures.
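
A minimal sketch of the basic programmable element found in most FPGAs, a k-input look-up table whose configuration is simply its truth table; this is illustrative only and not tied to either of the commercial architectures surveyed.

```python
class LUT:
    """A k-input look-up table: the configuration bitstream is just its truth table."""

    def __init__(self, k, truth_table):
        assert len(truth_table) == 2 ** k
        self.k = k
        self.table = list(truth_table)

    def evaluate(self, inputs):
        index = 0
        for bit in inputs:                 # the input bits index into the stored table
            index = (index << 1) | bit
        return self.table[index]

# "Program" a 2-input LUT as XOR, then "reprogram" it in place as NAND.
lut = LUT(2, [0, 1, 1, 0])
print([lut.evaluate([a, b]) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]
lut.table = [1, 1, 1, 0]                   # reprogramming = rewriting the truth table
print(lut.evaluate([1, 1]))                # now behaves as NAND -> 0
```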

Book ChapterDOI
TL;DR: The authors describe a task-based evaluation methodology appropriate for dialogue systems such as the TRAINS-95 system, where a human and a computer interact and collaborate to solve a given problem.
Abstract: This paper describes a task-based evaluation methodology appropriate for dialogue systems such as the TRAINS-95 system, where a human and a computer interact and collaborate to solve a given problem. In task-based evaluations, techniques are measured in terms of their effect on task performance measures such as how long it takes to develop a solution using the system, and the quality of the final plan produced. We report recent experiment results which explore the effect of word recognition accuracy on task performance.
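
A hedged sketch of the bookkeeping behind such a task-based evaluation, with hypothetical per-session fields (word accuracy, time to solution, plan quality) and illustrative accuracy bands; the actual measures and thresholds are those reported in the paper, not these.

```python
from statistics import mean

# Hypothetical per-session records: (word_accuracy, seconds_to_solution, plan_quality).
sessions = [
    (0.95, 180, 0.90), (0.92, 210, 0.85), (0.80, 300, 0.70),
    (0.78, 340, 0.65), (0.60, 520, 0.40),
]

def accuracy_band(word_accuracy):
    """Bucket sessions by recognition accuracy (thresholds are illustrative only)."""
    return "high" if word_accuracy >= 0.90 else "medium" if word_accuracy >= 0.75 else "low"

by_band = {}
for accuracy, seconds, quality in sessions:
    by_band.setdefault(accuracy_band(accuracy), []).append((seconds, quality))

for band, rows in sorted(by_band.items()):
    print(band,
          "mean time-to-solution:", round(mean(r[0] for r in rows), 1), "s,",
          "mean plan quality:", round(mean(r[1] for r in rows), 2))
```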

Book ChapterDOI
TL;DR: In this article, a new evaluation method that is system-to-system automatic dialogue with linguistic noise is presented, which simulates speech recognition errors in spoken dialogue systems to evaluate robustness of language understanding and dialogue management.
Abstract: The need for an evaluation method of spoken dialogue systems as a whole is more critical today than ever before. However, previous evaluation methods are no longer adequate for evaluating interactive dialogue systems. We have designed a new evaluation method that is system-to-system automatic dialogue with linguistic noise. By linguistic noise we simulate speech recognition errors in spoken dialogue systems. Therefore, robustness of language understanding and of dialogue management can be evaluated. We have implemented an evaluation environment for automatic dialogue. We examined the validity of this method for automatic dialogue under different error rates and different dialogue strategies.
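
A minimal sketch of a word-level linguistic-noise injector in the spirit described above; the error types, confusion table and rate below are hypothetical and only illustrate how recognition errors might be simulated.

```python
import random

def inject_noise(words, error_rate, confusions, rng=None):
    """Corrupt a word sequence to simulate recognition errors at `error_rate`.

    Each word is independently kept, substituted with a confusable word,
    deleted, or followed by a spurious insertion.
    """
    rng = rng or random.Random(0)
    noisy = []
    vocabulary = [w for alts in confusions.values() for w in alts]
    for word in words:
        if rng.random() >= error_rate:
            noisy.append(word)                              # recognised correctly
            continue
        kind = rng.choice(["substitute", "delete", "insert"])
        if kind == "substitute":
            noisy.append(rng.choice(confusions.get(word, ["<unk>"])))
        elif kind == "insert":
            noisy.extend([word, rng.choice(vocabulary)])
        # "delete": append nothing
    return noisy

confusions = {"two": ["to", "too"], "meet": ["meat"], "four": ["for"]}
print(inject_noise("can we meet at two or four".split(), 0.3, confusions))
```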

Book ChapterDOI
TL;DR: This article shows that some of the characteristics of the language used in spoken dialogues are also observed in typed dialogues, since they are a reflection of the fact that it is a dialogue rather than the fact that it is spoken.
Abstract: Two interrelated points are made in this paper. First, that some of the characteristics of the language used in spoken dialogues are also observed in typed dialogues, since they are a reflection of the fact that it is a dialogue, rather than the fact that it is spoken. A corollary of this is that some of the results obtained in previous work on typed dialogue, especially Wizard of Oz simulations of human-computer natural language dialogues, can be of use to workers on spoken language dialogue systems. Second, it is claimed that we need a more fine-grained taxonomy of different kinds of dialogues, both to make it possible to use results obtained by other workers in developing dialogue systems, and to develop our theoretical understanding of the influence of non-linguistic factors on the language used in human-computer dialogues. A number of such dimensions are described and their influence on the language used is illustrated by results from empirical studies of language use.