
Showing papers presented at "Formal Methods in 2003"


Book ChapterDOI
08 Sep 2003
TL;DR: In this article, an animation and model checking tool for the B method is presented, called PROB, which allows users to gain confidence in their specifications, and unlike the animator provided by the B-Toolkit, the user does not have to guess the right values for the operation arguments or choice variables.
Abstract: We present PROB, an animation and model checking tool for the B method. PROB’s animation facilities allow users to gain confidence in their specifications, and unlike the animator provided by the B-Toolkit, the user does not have to guess the right values for the operation arguments or choice variables. PROB contains a model checker and a constraint-based checker, both of which can be used to detect various errors in B specifications. We present our first experiences in using PROB on several case studies, highlighting that PROB enables users to uncover errors that are not easily discovered by existing tools.
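The core of an explicit-state model checker like the one in PROB can be sketched in a few lines: exhaustively explore reachable states breadth-first and report a counterexample trace when an invariant is violated. This is a generic illustration, not PROB itself (PROB works on B machines and is far more sophisticated); the toy counter "machine" and its invariant are invented for the example.

```python
from collections import deque

def model_check(init_states, next_states, invariant):
    """Exhaustive breadth-first exploration of the state space.

    Returns None if every reachable state satisfies the invariant,
    otherwise a counterexample trace from an initial state to a
    violating state (BFS yields a shortest trace).
    """
    frontier = deque((s, [s]) for s in init_states)
    visited = set(init_states)
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace  # counterexample found
        for succ in next_states(state):
            if succ not in visited:
                visited.add(succ)
                frontier.append((succ, trace + [succ]))
    return None  # invariant holds in all reachable states

# Toy "machine": a counter that each step may increment or reset.
# Invariant: the counter never exceeds 3 -- violated here, so the
# checker returns the shortest trace leading to the bad state.
trace = model_check(
    init_states=[0],
    next_states=lambda n: [n + 1, 0],
    invariant=lambda n: n <= 3,
)
print(trace)  # [0, 1, 2, 3, 4]
```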

496 citations


Book ChapterDOI
22 Sep 2003
TL;DR: This paper proposes a goal-oriented approach to architectural design based on the KAOS framework for modeling, specifying and analyzing requirements and discusses the architecture derivation process.
Abstract: Requirements and architecture are two essential inter-related products in the software lifecycle. Software architecture has long been recognized to have a profound impact on non-functional requirements about security, fault tolerance, performance, evolvability, and so forth. In spite of this, very few techniques are available to date for systematically building software architectures from functional and non-functional requirements so that such requirements are guaranteed by construction. The paper addresses this challenge and proposes a goal-oriented approach to architectural design based on the KAOS framework for modeling, specifying and analyzing requirements. After reviewing some global architectural decisions that are already involved in the requirements engineering process, we discuss our architecture derivation process. Software specifications are first derived from requirements. An abstract architectural draft is then derived from functional specifications. This draft is refined to meet domain-specific architectural constraints. The resulting architecture is then recursively refined to meet the various non-functional goals modelled and analyzed during the requirements engineering process.

218 citations


Book ChapterDOI
08 Sep 2003
TL;DR: In this article, the authors present a tool that allows users to formally prove Java classes annotated with JML, an annotation language for Java that provides a framework for specifying class invariants and method behaviours.
Abstract: This paper presents experiments on formal validation of Java applets. It describes a tool that has been developed at the Gemplus Research Labs. This tool allows users to formally prove Java classes annotated with JML, an annotation language for Java that provides a framework for specifying class invariants and method behaviours. The foundations and the main features of the tool are presented. The most innovative part of the tool is that it is tailored to be used by Java programmers without any particular background in formal methods. To reduce the difficulty of using formal techniques, it aims to provide a user-friendly interface which hides most of the formal features from developers and provides a "Java style view" of lemmas.

127 citations


Journal ArticleDOI
01 Jul 2003
TL;DR: This paper applies linear relation analysis to the verification of declarative synchronous programs and proposes to dynamically select a suitable partitioning according to the property to be proved; the approach is quite general and can be applied to other abstract interpretations.
Abstract: We apply linear relation analysis (P. Cousot and N. Halbwachs, in 5th ACM Symposium on Principles of Programming Languages, POPL'78, Tucson (Arizona), January 1978; N. Halbwachs, Y.E. Proy, and P. Roumanoff, Formal Methods in System Design, Vol. 11, No. 2, pp. 157–185, 1997) to the verification of declarative synchronous programs (N. Halbwachs, Science of Computer Programming, Special Issue on SAS'94, Vol. 31, No. 1, 1998). In this approach, state partitioning plays an important role: on the one hand, the precision of the results depends highly on the fineness of the partitioning; on the other hand, an overly detailed partitioning may result in an exponential explosion of the analysis. In this paper, we propose to dynamically select a suitable partitioning according to the property to be proved. The presented approach is quite general and can be applied to other abstract interpretations.
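Linear relation analysis works over convex polyhedra; the fixpoint-with-widening machinery it rests on can be illustrated in the much simpler interval domain. The sketch below is a generic abstract-interpretation loop, not the paper's algorithm; the example program, transfer function, and guard are invented. A narrowing pass (omitted here) would tighten the inferred upper bound back to 10.

```python
import math

def widen(a, b):
    """Standard interval widening: extrapolate unstable bounds to infinity."""
    lo = a[0] if a[0] <= b[0] else -math.inf
    hi = a[1] if a[1] >= b[1] else math.inf
    return (lo, hi)

def analyze_loop(init, transfer, guard_narrow, max_iters=100):
    """Abstract fixpoint of  x := init; while guard: x := transfer(x)."""
    x = init
    for _ in range(max_iters):
        nxt = widen(x, transfer(guard_narrow(x)))
        if nxt == x:
            return x  # post-fixpoint reached: a sound loop invariant
        x = nxt
    return x

# Program:  i = 0; while i < 10: i = i + 1
# The transfer function adds 1 to both bounds; the guard keeps i <= 9.
inv = analyze_loop(
    init=(0, 0),
    transfer=lambda iv: (iv[0] + 1, iv[1] + 1),
    guard_narrow=lambda iv: (iv[0], min(iv[1], 9)),
)
print(inv)  # (0, inf): sound but coarse; narrowing would recover (0, 10)
```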

113 citations


Book ChapterDOI
22 Sep 2003
TL;DR: A number of researchers have developed formal languages and associated analysis tools for software architecture and a number of the representative results are described.
Abstract: Developing a good software architecture for a complex system is a critically important step for ensuring that the system will satisfy its principal objectives. Unfortunately, today, descriptions of software architecture are largely based on informal “box-and-line” drawings that are often ambiguous, incomplete, inconsistent, and unanalyzable. This need not be the case. Over the past decade a number of researchers have developed formal languages and associated analysis tools for software architecture. In this paper I describe a number of the representative results from this body of work.

111 citations


Book ChapterDOI
04 Nov 2003
TL;DR: This work proposes a new specification language for writing complementary machine-checkable descriptions of SOAP-based security protocols and their properties called TulaFale, based on the pi calculus, plus XML syntax, logical predicates, and correspondence assertions to specify authentication goals of protocols.
Abstract: Web services security specifications are typically expressed as a mixture of XML schemas, example messages, and narrative explanations. We propose a new specification language for writing complementary machine-checkable descriptions of SOAP-based security protocols and their properties. Our TulaFale language is based on the pi calculus (for writing collections of SOAP processors running in parallel), plus XML syntax (to express SOAP messaging), logical predicates (to construct and filter SOAP messages), and correspondence assertions (to specify authentication goals of protocols). Our implementation compiles TulaFale into the applied pi calculus, and then runs Blanchet’s resolution-based protocol verifier. Hence, we can automatically verify authentication properties of SOAP protocols.

85 citations


Journal ArticleDOI
01 Nov 2003
TL;DR: With this application, it is shown that symbolic model checking tools like HyTech, originally designed for the verification of hybrid systems, can be applied successfully to new classes of infinite-state systems of practical interest.
Abstract: We propose a new method for the parameterized verification of formal specifications of cache coherence protocols. The goal of parameterized verification is to establish system properties for an arbitrary number of caches. In order to achieve this purpose we define abstractions that allow us to reduce the original parameterized verification problem to a control state reachability problem for a system with integer data variables. Specifically, the methodology we propose consists of the following steps. We first define an abstraction in which we only keep track of the number of caches in a given state during the execution of a protocol. Then, we use linear arithmetic constraints to symbolically represent infinite sets of global states of the resulting abstract protocol. For reasons of efficiency, we relax the constraint operations by interpreting constraints over real numbers. Finally, we check parameterized safety properties of abstract protocols using symbolic backward reachability, a strategy that allows us to obtain sufficient conditions for termination for an interesting class of protocols. The latter problem can be solved by using the infinite-state model checker HyTech: Henzinger, Ho, and Wong-Toi, “A model checker for hybrid systems,” Proc. of the 9th International Conference on Computer Aided Verification (CAV'97), Lecture Notes in Computer Science, Springer, Haifa, Israel, 1997, Vol. 1254, pp. 460–463. HyTech handles linear arithmetic constraints using the polyhedra library of Halbwachs and Proy, “Verification of real-time systems using linear relation analysis,” Formal Methods in System Design, Vol. 11, No. 2, pp. 157–185, 1997. By using this methodology, we have automatically validated parameterized versions of widely implemented write-invalidate and write-update cache coherence protocols like Synapse, MESI, MOESI, Berkeley, Illinois, Firefly and Dragon (Handy, The Cache Memory Book, Academic Press, 1993). 
With this application, we have shown that symbolic model checking tools like HyTech, originally designed for the verification of hybrid systems, can be applied successfully to new classes of infinite-state systems of practical interest.
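The counting abstraction can be sketched concretely: a global state is abstracted to a vector of how many caches are in each protocol state, and protocol rules become guarded updates on the counts. The toy write-invalidate rules below are invented for illustration; the paper treats real protocols and makes the counts symbolic via linear arithmetic constraints, so that the result holds for an arbitrary number of caches, whereas this sketch just enumerates small fixed sizes.

```python
# Abstract state: (i, s, m) = number of caches Invalid / Shared / Modified.
# Rules of a simplified write-invalidate protocol (a toy in the spirit of
# Synapse/MESI, not the exact rules analysed in the paper).
def successors(state):
    i, s, m = state
    succs = set()
    if i >= 1:                               # read miss: one Invalid cache loads
        succs.add((i - 1, s + m + 1, 0))     # the line; any Modified copy demoted
    if i >= 1:                               # write miss: writer goes Modified,
        succs.add((i + s + m - 1, 0, 1))     # all other copies invalidated
    if s >= 1:                               # write hit on a Shared copy
        succs.add((i + s + m - 1, 0, 1))
    return succs

def reachable(n_caches):
    """Explicit reachability over count vectors for a fixed cache count."""
    init = (n_caches, 0, 0)
    seen, frontier = {init}, [init]
    while frontier:
        st = frontier.pop()
        for nxt in successors(st):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Safety: never a Modified copy alongside any other valid copy.
def safe(state):
    _, s, m = state
    return m <= 1 and not (m == 1 and s > 0)

for n in range(1, 6):
    assert all(safe(st) for st in reachable(n))
print("safety holds for 1..5 caches")
```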

84 citations


Journal ArticleDOI
15 Feb 2003
TL;DR: In this article, geometric representations of words are realized as vectors in a high-dimensional semantic space, which is automatically constructed from a text corpus; results show that information flow contributes significantly to query model effectiveness, particularly with respect to precision.
Abstract: Humans can make hasty, but generally robust judgements about what a text fragment is, or is not, about. Such judgements are termed information inference. This article furnishes an account of information inference from a psychologistic stance. By drawing on theories from nonclassical logic and applied cognition, an information inference mechanism is proposed that makes inferences via computations of information flow through an approximation of a conceptual space. Within a conceptual space information is represented geometrically. In this article, geometric representations of words are realized as vectors in a high dimensional semantic space, which is automatically constructed from a text corpus. Two approaches are presented for priming vector representations according to context. The first approach uses a concept combination heuristic to adjust the vector representation of a concept in the light of the representation of another concept. The second approach computes a prototypical concept on the basis of exemplar trace texts and moves it in the dimensional space according to the context. Information inference is evaluated by measuring the effectiveness of query models derived by information flow computations. Results show that information flow contributes significantly to query model effectiveness, particularly with respect to precision. Moreover, retrieval effectiveness compares favorably with two probabilistic query models, and another based on semantic association. More generally, this article can be seen as a contribution towards realizing operational systems that mimic text-based human reasoning.
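The construction of a semantic space from a corpus can be sketched with a simple co-occurrence model. The windowed-count scheme and toy corpus below are invented for illustration and far cruder than the HAL-style spaces this line of work builds on, but they show the core idea: words that share contexts acquire similar vectors.

```python
import math
from collections import defaultdict

def build_space(corpus, window=2):
    """Word vectors from co-occurrence counts in a sliding window."""
    vecs = defaultdict(lambda: defaultdict(int))
    for doc in corpus:
        words = doc.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "the patient received a dose of penicillin",
    "the doctor prescribed penicillin for the infection",
    "the driver parked the car in the garage",
]
space = build_space(corpus)
# Words sharing contexts end up closer in the space.
print(cosine(space["penicillin"], space["doctor"]) >
      cosine(space["penicillin"], space["car"]))   # True
```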

83 citations


Journal ArticleDOI
01 Mar 2003
TL;DR: The experience using TLA+ and TLC to verify cache-coherence protocols and the tools and techniques developed apply equally well to software and hardware designs are described.
Abstract: We have a great deal of experience using the specification language TLA+ and its model checker TLC to analyze protocols designed at Digital and Compaq (both now part of HP). The tools and techniques we have developed apply equally well to software and hardware designs. In this paper, we describe our experience using TLA+ and TLC to verify cache-coherence protocols.

75 citations


Journal ArticleDOI
01 May 2003
TL;DR: Symbolic methods are presented for satisfiability checking over the logic of equality with uninterpreted functions, which has been proposed for verifying abstract hardware designs; the first procedure restricts analysis to finite instantiations of the variables, while the second introduces Boolean-valued indicator variables for equality.
Abstract: The logic of equality with uninterpreted functions has been proposed for verifying abstract hardware designs. The ability to perform fast satisfiability checking over this logic is imperative for such verification paradigms to be successful. We present symbolic methods for satisfiability checking for this logic. The first procedure is based on restricting analysis to finite instantiations of the variables. The second procedure directly reasons about equality by introducing Boolean-valued indicator variables for equality. Theoretical and experimental evidence shows the superiority of the second approach.
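The finite-instantiation idea behind the first procedure rests on a small-model property: a formula containing only equalities among n variables is satisfiable iff it is satisfiable over a domain of n values (uninterpreted function applications are assumed to have been eliminated first, e.g. by Ackermann's reduction). A brute-force sketch, with invented example formulas:

```python
from itertools import product

def euf_sat(n_vars, formula):
    """Satisfiability of an equality formula by finite instantiation.

    `formula` takes a tuple of variable values and returns a bool.
    By the small-model property, checking all assignments over the
    finite domain {0, ..., n_vars - 1} is sound and complete.
    """
    for vals in product(range(n_vars), repeat=n_vars):
        if formula(vals):
            return True
    return False

# x = y  and  y = z  and  x != z  -- unsatisfiable (transitivity)
print(euf_sat(3, lambda v: v[0] == v[1] and v[1] == v[2] and v[0] != v[2]))
# x != y  and  y != z  -- satisfiable
print(euf_sat(3, lambda v: v[0] != v[1] and v[1] != v[2]))
```

The paper's symbolic procedures avoid this exponential enumeration, but the domain bound is what makes a propositional encoding possible at all.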

73 citations


Book ChapterDOI
08 Sep 2003
TL;DR: It is argued that widening the notion of software development to include specifying the behaviour of the relevant parts of the physical world gives a way to derive the specification of a control system and also to record precisely the assumptions being made about the world outside the computer.
Abstract: Well understood methods exist for developing programs from given specifications. A formal method identifies proof obligations at each development step: if all such proof obligations are discharged, a precisely defined class of errors can be excluded from the final program. For a class of "closed" systems such methods offer a gold standard against which less formal approaches can be measured. For "open" systems, those which interact with the physical world, the task of obtaining the program specification can be as challenging as the task of deriving the program. And, when a system of this class must tolerate certain kinds of unreliability in the physical world, it is still more challenging to reach confidence that the specification obtained is adequate. We argue that widening the notion of software development to include specifying the behaviour of the relevant parts of the physical world gives a way to derive the specification of a control system and also to record precisely the assumptions being made about the world outside the computer.

Book ChapterDOI
08 Sep 2003
TL;DR: The lessons learned over a thirteen year period while helping to develop the shutdown systems for the nuclear generating station at Darlington, Ontario, Canada are described.
Abstract: This paper describes the lessons we learned over a thirteen year period while helping to develop the shutdown systems for the nuclear generating station at Darlington, Ontario, Canada. We begin with a brief description of the project and then show how we modified processes and notations developed in the academic community so that they are acceptable for use in industry. We highlight some of the topics that proved to be particularly challenging and that would benefit from more in-depth study without the pressure of project deadlines.

Book ChapterDOI
04 Nov 2003
TL;DR: In this paper, the authors present an example to illustrate how to specify and analyze network attack models and take these models as input to their attack graph tools to generate attack graphs automatically and to analyze system vulnerabilities.
Abstract: Attack graphs depict ways in which an adversary exploits system vulnerabilities to achieve a desired state. System administrators use attack graphs to determine how vulnerable their systems are and to determine what security measures to deploy to defend their systems. In this paper, we present details of an example to illustrate how we specify and analyze network attack models. We take these models as input to our attack graph tools to generate attack graphs automatically and to analyze system vulnerabilities. While we have published our generation and analysis algorithms in earlier work, the presentation of our example and toolkit is novel to this paper.
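Attack graph generation can be sketched as forward chaining over exploit rules: starting from the attacker's initial capabilities, apply any exploit whose preconditions hold until a fixpoint is reached, recording the edges. The network facts and exploits below are invented; the paper's toolkit works on richer network attack models and uses model-checking algorithms rather than this naive closure.

```python
# A network attack model: atomic exploits as (preconditions, postcondition).
# Facts are strings; these exploits are hypothetical, for illustration only.
EXPLOITS = [
    ({"net_access(web)"}, "user(web)"),               # remote exploit
    ({"user(web)"}, "net_access(db)"),                # pivot behind firewall
    ({"user(web)"}, "cred(db)"),                      # credentials on web host
    ({"net_access(db)", "cred(db)"}, "user(db)"),     # login with stolen creds
    ({"user(db)"}, "root(db)"),                       # local privilege escalation
]

def attack_graph(initial, goal):
    """Forward-chain exploits to a fixpoint; return the edges of the
    attack graph and whether the goal state becomes reachable."""
    facts, edges = set(initial), []
    changed = True
    while changed:
        changed = False
        for pre, post in EXPLOITS:
            if pre <= facts and post not in facts:
                facts.add(post)
                edges.extend((p, post) for p in pre)
                changed = True
    return edges, goal in facts

edges, compromised = attack_graph({"net_access(web)"}, "root(db)")
print(compromised)  # True
```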

Book ChapterDOI
08 Sep 2003
TL;DR: This paper reports on the application of the ESACS methodology and the use of the ESACS platform to a case study, namely the Secondary Power System of the Eurofighter Typhoon aircraft.
Abstract: The complexity of embedded controllers is steadily increasing. This trend, stimulated by the continuous improvement of the computational power of hardware, demands a corresponding increase in the capability of design and safety engineers to maintain adequate safety levels. The use of formal methods during system design has proved to be effective in several practical applications. However, the development of certain classes of applications, like, for instance, avionics systems, also requires the behaviour of a system to be analysed under certain degraded situations (e.g., when some components are not working as expected). The integration of system design activities with safety assessment and the use of formal methods, although not new, are still at an early stage. These goals are addressed by the ESACS project, a European-Union-sponsored project grouping several industrial companies from the aeronautic field. The ESACS project is developing a methodology and a platform, the ESACS platform, that helps safety engineers automate certain phases of their work. This paper reports on the application of the ESACS methodology and on the use of the ESACS platform to a case study, namely, the Secondary Power System of the Eurofighter Typhoon aircraft.

Book ChapterDOI
08 Sep 2003
TL;DR: An "event approach" used to formally develop sequential programs, based on the formalism of Action Systems (and Guarded Commands), is presented; it is interesting because it involves a large number of pointer manipulations.
Abstract: In this article, I present an "event approach" used to formally develop sequential programs. It is based on the formalism of Action Systems [6] (and Guarded Commands [7]), and it is interesting because it involves a large number of pointer manipulations.

Book ChapterDOI
22 Sep 2003
TL;DR: This work develops a software-architecture-based approach in which the software architecture imposed on the assembly allows for detection and recovery of COTS integration anomalies.
Abstract: Correct automatic assembly of software components is an important issue in CBSE (Component-Based Software Engineering). Building a system from reusable software components or from COTS (Commercial-Off-The-Shelf) components introduces a set of problems. One of the main problems in component assembly is related to the ability to properly manage the dynamic interactions of the components. Component assembly can result in architectural mismatches when trying to integrate components with incompatible interaction behavior, like deadlock and other software anomalies. This problem represents a new challenge for system developers. The issue is not only in specifying and analyzing a set of properties but rather in being able to enforce them out of a set of already implemented (local) behaviors. Our answer to this problem is a software-architecture-based approach in which the software architecture imposed on the assembly allows for detection and recovery of COTS integration anomalies. Starting from the specification of the system to be assembled and of its properties, we develop a framework which automatically derives the glue code for the set of components in order to obtain a properties-satisfying system (i.e. the failure-free version of the system).

Book ChapterDOI
04 Nov 2003
TL;DR: The Abstract State Machine Language, AsmL, is a novel executable specification language based on the theory of Abstract State Machines; this paper explains the design rationale of AsmL and sketches semantics for a kernel of the language.
Abstract: The Abstract State Machine Language, AsmL, is a novel executable specification language based on the theory of Abstract State Machines. AsmL is object-oriented, provides high-level mathematical data-structures, and is built around the notion of synchronous updates and finite choice. AsmL is fully integrated into the .NET framework and Microsoft development tools. In this paper, we explain the design rationale of AsmL and sketch semantics for a kernel of the language. The details will appear in the full version of the paper.

Book ChapterDOI
08 Sep 2003
TL;DR: The results detailed in this paper are a new architecture of the translation process, a way to adapt the B0 language in order to include types of the target language, and a set of validated optimizations.
Abstract: This paper presents the results of the RNTL BOM project, which aimed to develop an approach to generate efficient code from B formal developments. The target domain is smart card applications, in which memory and code size are important factors. The results detailed in this paper are a new architecture of the translation process, a way to adapt the B0 language in order to include types of the target language, and a set of validated optimizations. An assessment of the proposed approach is given through a case study relative to the development of a Java Card Virtual Machine environment.

Journal ArticleDOI
01 Mar 2003
TL;DR: The approach to the problem of integrating formal methods into an industrial design cycle is discussed, and those techniques which have been found to be especially effective in an industrial setting are pointed out.
Abstract: Over the past nine years, the Formal Methods Group at the IBM Haifa Research Laboratory has made steady progress in developing tools and techniques that make the power of model checking accessible to the community of hardware designers and verification engineers, to the point where it has become an integral part of the design cycle of many teams. We discuss our approach to the problem of integrating formal methods into an industrial design cycle, and point out those techniques which we have found to be especially effective in an industrial setting.

Book ChapterDOI
08 Sep 2003
TL;DR: A novel unified semantic model of the channel-based synchronisation and sensor/actuator-based asynchronisation in TCOZ is presented; it will be used as a reference document for developing tool support for TCOZ and as a semantic foundation for proving the soundness of those tools.
Abstract: Unifying Theories of Programming (UTP) can provide a formal semantic foundation not only for programming languages but also for more expressive specification languages. We believe UTP is particularly well suited for presenting the formal semantics of integrated specification languages, which often have rich language constructs for state encapsulation, event communication and real-time modeling. This paper uses UTP to formalise the semantics of Timed Communicating Object Z (TCOZ) and captures some new TCOZ features for the first time. In particular, a novel unified semantic model of the channel-based synchronisation and sensor/actuator-based asynchronisation in TCOZ is presented. This semantic model will be used as a reference document for developing tool support for TCOZ and as a semantic foundation for proving the soundness of those tools.

Book ChapterDOI
08 Sep 2003
TL;DR: Experimental results confirm the effectiveness of the SAT-based approach to the analysis of security protocols and pave the way to its application to large protocols arising in practical applications, by showing that Graphplan-based encodings are considerably smaller than linear encodings.
Abstract: In previous work we showed that automatic SAT-based model-checking techniques based on a reduction of protocol insecurity problems to satisfiability problems in propositional logic (SAT) can be used effectively to find attacks on security protocols. The approach results from the combination of a reduction of protocol insecurity problems to planning problems and well-known SAT-reduction techniques, called linear encodings, developed for planning. Experimental results confirmed the effectiveness of the approach but also showed that the time spent to generate the SAT formula largely dominates the time spent by the SAT solver to check its satisfiability. Moreover, the SAT instances generated by the tool grow to an unmanageable size on the most complex protocols. In this paper we explore the application of the Graphplan-based encoding technique to the analysis of security protocols and present experimental data showing that Graphplan-based encodings are considerably (i.e. up to 2 orders of magnitude) smaller than linear encodings. These results confirm the effectiveness of the SAT-based approach to the analysis of security protocols and pave the way to its application to large protocols arising in practical applications.

Journal ArticleDOI
15 Feb 2003
TL;DR: This article shows how LSI is just one example of a unitary operator, for which there are computationally more attractive alternatives, and proposes the Haar transform as such an alternative, as it is memory efficient, and can be computed in linear to sublinear time.
Abstract: When people search for documents, they eventually want content, not words. Hence, search engines should relate documents more by their underlying concepts than by the words they contain. One promising technique to do so is Latent Semantic Indexing (LSI). LSI dramatically reduces the dimension of the document space by mapping it into a space spanned by conceptual indices. Empirically, the number of concepts that can represent the documents are far fewer than the great variety of words in the textual representation. Although this almost obviates the problem of lexical matching, the mapping incurs a high computational cost compared to document parsing, indexing, query matching, and updating. This article accomplishes several things. First, it shows how the technique underlying LSI is just one example of a unitary operator, for which there are computationally more attractive alternatives. Second, it proposes the Haar transform as such an alternative, as it is memory efficient, and can be computed in linear to sublinear time. Third, it generalizes LSI by a multiresolution representation of the document space. The approach not only preserves the advantages of LSI at drastically reduced computational costs, it also opens a spectrum of possibilities for new research.
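The Haar transform proposed as an alternative to the SVD underlying LSI can be computed with repeated passes of pairwise averages and differences, giving the linear-time cost the article emphasizes. A minimal sketch on an invented term-frequency vector:

```python
def haar(vec):
    """One-dimensional Haar wavelet transform (length must be a power
    of two). Each pass stores pairwise averages followed by pairwise
    differences, so the whole transform runs in linear time."""
    out = list(vec)
    n = len(out)
    while n > 1:
        half = n // 2
        avgs = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        diffs = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avgs + diffs
        n = half
    return out

# A toy term-frequency vector for one document; truncating small
# coefficients yields a compressed, multiresolution representation.
print(haar([8, 6, 7, 3, 1, 1, 2, 4]))
# [4.0, 2.0, 1.0, -1.0, 1.0, 2.0, 0.0, -1.0]
```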

Book ChapterDOI
22 Sep 2003
TL;DR: Although architectural concepts and techniques have been considered mainly as a means of controlling the complexity of developing software, it is argued that they can play a vital role in supporting current needs for systems that can evolve and adapt, in run-time, to changes that occur in the application or business domain in which they operate.
Abstract: Although architectural concepts and techniques have been considered mainly as a means of controlling the complexity of developing software, we argue, and demonstrate, that they can play a vital role in supporting current needs for systems that can evolve and adapt, in run-time, to changes that occur in the application or business domain in which they operate.

Journal ArticleDOI
01 Nov 2003
TL;DR: It is proved that all ω-regular (linear-time) specifications can be expressed as post-μ queries, and therefore checked using symbolic forward state traversal, and it is shown that there are simple branching-time specifications that cannot be checked in this way.
Abstract: Symbolic model checking, which enables the automatic verification of large systems, proceeds by calculating expressions that represent state sets. Traditionally, symbolic model-checking tools are based on backward state traversals; their basic operation is the function pre, which, given a set of states, returns the set of all predecessor states. This is because specifiers usually employ formalisms with future-time modalities, which are naturally evaluated by iterating applications of pre. It has been shown experimentally that symbolic model checking can perform significantly better if it is based, instead, on forward state traversals; in this case, the basic operation is the function post, which, given a set of states, returns the set of all successor states. This is because forward state traversal can ensure that only parts of the state space that are reachable from an initial state and relevant for the satisfaction or violation of the specification are explored; that is, errors can be detected as soon as possible. In this paper, we investigate which specifications can be checked by symbolic forward state traversal. We formulate the problems of symbolic backward and forward model checking by means of two μ-calculi. The pre-μ calculus is based on the pre operation, and the post-μ calculus is based on the post operation. These two μ-calculi induce query logics, which augment fixpoint expressions with a boolean emptiness query. Using query logics, we are able to relate and compare the symbolic backward and forward approaches. In particular, we prove that all ω-regular (linear-time) specifications can be expressed as post-μ queries, and therefore checked using symbolic forward state traversal. On the other hand, we show that there are simple branching-time specifications that cannot be checked in this way.
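The pre and post operations, and the forward fixpoint built from post, are easy to state over an explicit transition relation (symbolic tools compute the same images on BDD-represented sets). The tiny relation below is invented; note that state 3 lies in pre of the target set yet is unreachable, which is exactly why forward traversal can confine the search to reachable states.

```python
def pre(transitions, targets):
    """All states with SOME successor in `targets` (backward step)."""
    return {s for (s, t) in transitions if t in targets}

def post(transitions, sources):
    """All successors of states in `sources` (forward step)."""
    return {t for (s, t) in transitions if s in sources}

def forward_reach(transitions, init):
    """Least fixpoint of  X = init | post(X)  -- forward state traversal."""
    reach = set(init)
    while True:
        nxt = reach | post(transitions, reach)
        if nxt == reach:
            return reach
        reach = nxt

# Tiny explicit transition relation as a set of (source, target) pairs.
T = {(0, 1), (1, 2), (2, 0), (3, 2)}
print(forward_reach(T, {0}))   # {0, 1, 2}: state 3 is never explored
print(pre(T, {2}))             # {1, 3}: backward steps visit unreachable 3
```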

Book ChapterDOI
22 Sep 2003
TL;DR: This paper shows how these two issues can be addressed in practice by employing a methodology relying on the combined use of AEmilia (an architectural description language based on stochastic process algebra) and queueing networks, which allows for a quick prediction, improvement, and comparison of the performance of different software architectures for a given system.
Abstract: When tackling the construction of a software system, at the software architecture design level there are two main issues related to the system performance. First, the designer may need to choose among several alternative software architectures for the system, with the choice being driven especially by performance considerations. Second, for a specific software architecture of the system, the designer may want to understand whether its performance can be improved and, if so, it would be desirable for the designer to have some diagnostic information that guide the modification of the software architecture itself. In this paper we show how these two issues can be addressed in practice by employing a methodology relying on the combined use of AEmilia — an architectural description language based on stochastic process algebra — and queueing networks — structured performance models equipped with fast solution algorithms — which allows for a quick prediction, improvement, and comparison of the performance of different software architectures for a given system. The methodology is illustrated through a case study in which a sequential architecture, a pipeline architecture, and a concurrent architecture for a compiler system are compared on the basis of typical average performance indices.

Book ChapterDOI
08 Sep 2003
TL;DR: Safety policies formulated as a set of Hoare-style inference rules on the source code level are proved correct within a generic framework; policies for memory access safety and memory read/write limitations are shown to be sound and complete, and a set of generic safety inference rules serves as the blueprint for a verification condition generator that can be parameterized with different safety policies.
Abstract: Program certification techniques formally show that programs satisfy certain safety policies. They rely on the correctness of the safety policy which has to be established externally. In this paper we investigate an approach to show the correctness of safety policies which are formulated as a set of Hoare-style inference rules on the source code level. We develop a framework which is generic with respect to safety policies and which allows us to establish that proving the safety of a program statically guarantees dynamic safety, i.e., that the program never violates the safety property during its execution. We demonstrate our framework by proving safety policies for memory access safety and memory read/write limitations to be sound and complete. Finally, we formulate a set of generic safety inference rules which serve as the blueprint for the implementation of a verification condition generator which can be parameterized with different safety policies, and identify conditions on appropriate safety policies.
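The flavour of a verification condition generator can be conveyed by a weakest-precondition calculator over a toy command language, with the safety policy inserted as an assertion. Everything below (the AST encoding, the textual substitution, the example program) is invented for illustration and far simpler than the paper's generic framework.

```python
# Miniature VC generation over a toy language.
# Commands: ("assign", var, expr), ("seq", c1, c2), ("assert", pred)
# Expressions and predicates are strings manipulated syntactically.

def subst(pred, var, expr):
    # Naive textual substitution; adequate here because the example
    # uses single-letter variables that appear nowhere else.
    return pred.replace(var, f"({expr})")

def wp(cmd, post):
    """Weakest precondition: what must hold before `cmd` for `post`
    to hold after it (Hoare-style, computed backwards)."""
    kind = cmd[0]
    if kind == "assign":                  # wp(x := e, Q) = Q[e/x]
        _, var, expr = cmd
        return subst(post, var, expr)
    if kind == "seq":                     # wp(c1; c2, Q) = wp(c1, wp(c2, Q))
        _, c1, c2 = cmd
        return wp(c1, wp(c2, post))
    if kind == "assert":                  # safety check becomes a conjunct
        _, pred = cmd
        return f"({pred}) and ({post})"
    raise ValueError(kind)

# Safety policy "the index i stays in bounds" inserted as an assert:
prog = ("seq", ("assign", "i", "i + 1"),
               ("assert", "0 <= i and i < n"))
vc = wp(prog, "True")
print(vc)  # (0 <= (i + 1) and (i + 1) < n) and (True)
```

Discharging such verification conditions statically is what, in the paper's framework, guarantees the program never violates the safety property at run time.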

Journal ArticleDOI
John Harrison
01 Mar 2003
TL;DR: The formal verification of some low-level mathematical software for the Intel Itanium architecture is discussed, which helps to illustrate why some features of HOL Light, in particular programmability, make it especially suitable for these applications.
Abstract: We discuss the formal verification of some low-level mathematical software for the Intel® Itanium® architecture. A number of important algorithms have been proven correct using the HOL Light theorem prover. After briefly surveying some of our formal verification work, we discuss in more detail the verification of a square root algorithm, which helps to illustrate why some features of HOL Light, in particular programmability, make it especially suitable for these applications.

Book ChapterDOI
22 Sep 2003
TL;DR: An approach for SA-based conformance testing is described: architectural tests are selected from a Labelled Transition System representing the SA behavior and are then refined into concrete tests to be executed on the implemented system.
Abstract: Software architectures (SAs) provide a high-level model of large, complex systems using suitable abstractions of the system components and their interactions. SA dynamic descriptions can be usefully employed in testing and analysis. We describe here an approach for SA-based conformance testing: architectural tests are selected from a Labelled Transition System (LTS) representing the SA behavior and are then refined into concrete tests to be executed on the implemented system. To identify the test sequences, we derive abstract views of the LTS, called the ALTSs, to focus on relevant classes of architectural behaviors and hide away uninteresting interactions. The SA description of a Collaborative Writing system is used as an example of application. We also briefly discuss the relation of our approach with some recent research in exploiting the standard UML notation as an Architectural Description Language, and in conformance testing of reactive systems.

Book ChapterDOI
08 Sep 2003
TL;DR: This work presents a formal model of TCP/IP networks and describes some well-known attacks using the model, which enables better understanding of the vulnerabilities and supports the design of tougher detection, protection, and testing tools.
Abstract: The TCP/IP protocol suite has been designed to provide a simple, open communication infrastructure in an academic, collaborative environment. Therefore, the TCP/IP protocols are not able to provide the authentication, integrity, and privacy mechanisms to protect communication in a hostile environment. To solve these security problems, a number of application-level protocols have been designed and implemented on top of TCP/IP. In addition, ad hoc techniques have been developed to protect networks from TCP/IP-based attacks. Nonetheless, a formal approach to TCP/IP security is still lacking. This work presents a formal model of TCP/IP networks and describes some well-known attacks using the model. The topological characterization of TCP/IP-based attacks enables better understanding of the vulnerabilities and supports the design of tougher detection, protection, and testing tools.

Book ChapterDOI
22 Sep 2003
TL;DR: This paper describes and illustrates the application of dependence analysis to architectural descriptions of software systems and shows promise at this level.
Abstract: As the focus of software design shifts increasingly toward the architectural level, so too do its analysis techniques. Dependence analysis is one such technique that shows promise at this level. In this paper we briefly describe and illustrate the application of dependence analysis to architectural descriptions of software systems.