
Showing papers by the "French Institute for Research in Computer Science and Automation" published in 2004


Journal ArticleDOI
TL;DR: A comparative evaluation of different detectors is presented and it is shown that the proposed approach for detecting interest points invariant to scale and affine transformations provides better results than existing methods.
Abstract: In this paper we propose a novel approach for detecting interest points invariant to scale and affine transformations. Our scale and affine invariant detectors are based on the following recent results: (1) Interest points extracted with the Harris detector can be adapted to affine transformations and give repeatable results (geometrically stable). (2) The characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives (the Laplacian). (3) The affine shape of a point neighborhood is estimated based on the second moment matrix. Our scale invariant detector computes a multi-scale representation for the Harris interest point detector and then selects points at which a local measure (the Laplacian) is maximal over scales. This provides a set of distinctive points which are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. The characteristic scale determines a scale invariant region for each point. We extend the scale invariant detector to affine invariance by estimating the affine shape of a point neighborhood. An iterative algorithm modifies location, scale and neighborhood of each point and converges to affine invariant points. This method can deal with significant affine transformations including large scale changes. The characteristic scale and the affine shape of neighborhood determine an affine invariant region for each point. We present a comparative evaluation of different detectors and show that our approach provides better results than existing methods. The performance of our detector is also confirmed by excellent matching results; the image is described by a set of scale/affine invariant descriptors computed on the regions associated with our points.

4,107 citations
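
A minimal sketch of the characteristic-scale selection step described above, assuming NumPy and SciPy (the spatial Harris detection and the affine adaptation are omitted; this is not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def characteristic_scale(image, y, x, sigmas=np.geomspace(1.0, 16.0, 15)):
    """Return the sigma at which the scale-normalized Laplacian response
    |sigma^2 * LoG| is maximal at pixel (y, x)."""
    responses = [abs(s**2 * gaussian_laplace(image, s)[y, x]) for s in sigmas]
    return sigmas[int(np.argmax(responses))]
```

In the paper, only points whose response is a local extremum over the scale axis are kept; taking the global maximum over a fixed scale range is a simplification.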


Journal ArticleDOI
TL;DR: An introduction proposes a modular scheme of the training and test phases of a speaker verification system, and the speech parameterization most commonly used in speaker verification, namely cepstral analysis, is detailed.
Abstract: This paper presents an overview of a state-of-the-art text-independent speaker verification system. First, an introduction proposes a modular scheme of the training and test phases of a speaker verification system. Then, the speech parameterization most commonly used in speaker verification, namely cepstral analysis, is detailed. Gaussian mixture modeling, which is the speaker modeling technique used in most systems, is then explained. A few speaker modeling alternatives, namely neural networks and support vector machines, are mentioned. Normalization of scores is then explained, as this is a very important step to deal with real-world data. The evaluation of a speaker verification system is then detailed, and the detection error trade-off (DET) curve is explained. Several extensions of speaker verification are then enumerated, including speaker tracking and segmentation by speakers. Then, some applications of speaker verification are proposed, including on-site applications, remote applications, applications relative to structuring audio information, and games. Issues concerning the forensic area are then recalled, as we believe it is very important to inform people about the actual performance and limitations of speaker verification systems. This paper concludes by giving a few research trends in speaker verification for the next couple of years.

874 citations
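
A minimal sketch of the GMM scoring idea, assuming scikit-learn and precomputed cepstral feature frames (state-of-the-art systems MAP-adapt the speaker model from a universal background model and normalize scores, both omitted here):

```python
from sklearn.mixture import GaussianMixture

def train_gmm(cepstral_frames, n_components=64):
    # cepstral_frames: (n_frames, n_coeffs) array of, e.g., MFCC vectors
    return GaussianMixture(n_components, covariance_type="diag").fit(cepstral_frames)

def verification_score(test_frames, speaker_gmm, background_gmm):
    # Average per-frame log-likelihood ratio; accept if above a threshold.
    return speaker_gmm.score(test_frames) - background_gmm.score(test_frames)
```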


Book ChapterDOI
11 May 2004
TL;DR: A novel method for human detection in single images which can detect full bodies as well as close-up views in the presence of clutter and occlusion is described.
Abstract: We describe a novel method for human detection in single images which can detect full bodies as well as close-up views in the presence of clutter and occlusion. Humans are modeled as flexible assemblies of parts, and robust part detection is the key to the approach. The parts are represented by co-occurrences of local features which capture the spatial layout of the part's appearance. Feature selection and the part detectors are learnt from training images using AdaBoost. The detection algorithm is very efficient as (i) all part detectors use the same initial features, (ii) a coarse-to-fine cascade approach is used for part detection, (iii) a part assembly strategy reduces the number of spurious detections and the search space. The results outperform existing human detectors.

746 citations
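
A sketch of training one boosted part detector, assuming scikit-learn; the feature matrix stands in for the paper's co-occurrence descriptors of local features, and the cascade and part-assembly stages are not shown:

```python
from sklearn.ensemble import AdaBoostClassifier

def train_part_detector(X, y, n_rounds=200):
    # X: (n_windows, n_features) descriptors of part/background windows
    # y: 1 for windows containing the part, 0 otherwise
    # scikit-learn's default weak learner is a decision stump, as in classic AdaBoost.
    return AdaBoostClassifier(n_estimators=n_rounds).fit(X, y)
```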


Proceedings ArticleDOI
04 Oct 2004
TL;DR: An Adaptive ARF (AARF) algorithm for low-latency systems that improves upon ARF to provide both short-term and long-term adaptation, and a new rate adaptation algorithm designed for high-latency systems that has been implemented and evaluated on an AR5212-based device.
Abstract: Today, three different physical (PHY) layers for the IEEE 802.11 WLAN are available (802.11a/b/g); they all provide multi-rate capabilities. To achieve a high performance under varying conditions, these devices need to adapt their transmission rate dynamically. While this rate adaptation algorithm is a critical component of their performance, only very few algorithms such as Auto Rate Fallback (ARF) or Receiver Based Auto Rate (RBAR) have been published, and the implementation challenges associated with these mechanisms have never been publicly discussed. In this paper, we first present the important characteristics of the 802.11 systems that must be taken into account when such algorithms are designed. Specifically, we emphasize the contrast between low latency and high latency systems, and we give examples of actual chipsets that fall into the different categories. We propose an Adaptive ARF (AARF) algorithm for low latency systems that improves upon ARF to provide both short-term and long-term adaptation. The new algorithm has very low complexity while obtaining a performance similar to RBAR, which requires incompatible changes to the 802.11 MAC and PHY protocol. Finally, we present a new rate adaptation algorithm designed for high latency systems that has been implemented and evaluated on an AR5212-based device. Experimental results show a clear performance improvement over the algorithm previously implemented in the AR5212 driver we used.

723 citations
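
A toy paraphrase of the AARF mechanism described above (not the authors' implementation): like ARF, the sender climbs to the next rate after a run of consecutive successes and falls back on failures, but the required run length doubles whenever a probe at the higher rate immediately fails.

```python
class AARF:
    def __init__(self, n_rates, min_thresh=10, max_thresh=50):
        self.rate = 0                 # index into the rate table, lowest rate first
        self.n_rates = n_rates
        self.thresh = min_thresh      # successes required before probing a higher rate
        self.min_t, self.max_t = min_thresh, max_thresh
        self.succ, self.fail = 0, 0
        self.probing = False          # True for the first packet after a rate increase

    def on_success(self):
        self.succ, self.fail, self.probing = self.succ + 1, 0, False
        if self.succ >= self.thresh and self.rate < self.n_rates - 1:
            self.rate += 1            # probe the next higher rate
            self.succ, self.probing = 0, True

    def on_failure(self):
        self.succ, self.fail = 0, self.fail + 1
        if self.probing:              # probe failed: back off and double the threshold
            self.rate -= 1
            self.thresh = min(2 * self.thresh, self.max_t)
            self.probing = False
        elif self.fail >= 2 and self.rate > 0:
            self.rate -= 1            # regular fallback: reset the threshold
            self.thresh = self.min_t
            self.fail = 0
```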


Journal ArticleDOI
01 Aug 2004
TL;DR: This work uses the bilateral filter to decompose the images into detail and large-scale layers, and reconstructs the image using the large scale of the available lighting and the detail of the flash, to enhance photographs shot in dark environments.
Abstract: We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large scale. We reconstruct the image using the large scale of the available lighting and the detail of the flash. We detect and correct flash shadows. This combines the advantages of available illumination and flash photography.

672 citations
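
A rough sketch of the large-scale/detail recombination, assuming OpenCV and float32 images in [0, 1]; the paper additionally detects and corrects flash shadows and refines the bilateral decomposition, both omitted here:

```python
import cv2

def fuse(ambient, flash, d=9, sigma_color=0.1, sigma_space=16, eps=1e-3):
    large_ambient = cv2.bilateralFilter(ambient, d, sigma_color, sigma_space)
    large_flash = cv2.bilateralFilter(flash, d, sigma_color, sigma_space)
    detail_flash = (flash + eps) / (large_flash + eps)  # high-frequency flash layer
    return large_ambient * detail_flash                 # ambiance + flash sharpness
```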


Journal ArticleDOI
TL;DR: It is shown that an efficient face detection system does not require any costly local preprocessing before classification of image areas, and that the proposed scheme provides a very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases.
Abstract: In this paper, we present a novel face detection approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns, rotated up to ±20 degrees in image plane and turned up to ±60 degrees, in complex real world images. The proposed system automatically synthesizes simple problem-specific feature extractors from a training set of face and nonface patterns, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the face pattern to analyze. The face detection procedure acts like a pipeline of simple convolution and subsampling modules that treat the raw input image as a whole. We therefore show that an efficient face detection system does not require any costly local preprocessing before classification of image areas. The proposed scheme provides very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases. We present extensive experimental results illustrating the efficiency of the proposed approach on difficult test sets and including an in-depth sensitivity analysis with respect to the degrees of variability of the face patterns.

610 citations
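
A loose sketch of such a convolution/subsampling pipeline, assuming PyTorch; layer counts and sizes are illustrative, not the paper's architecture:

```python
import torch.nn as nn

face_net = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),   # learned feature maps
    nn.Conv2d(4, 14, kernel_size=3), nn.Tanh(), nn.AvgPool2d(2),  # + subsampling
    nn.Flatten(),
    nn.LazyLinear(14), nn.Tanh(),
    nn.LazyLinear(1),             # single output: face / non-face score
)
```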


Book ChapterDOI
02 May 2004
TL;DR: Low-degree relations are shown to exist for several well-known constructions of stream ciphers immune to all previously known attacks; such relations may be derived by multiplying the output function of a stream cipher by a well-chosen low-degree function such that the product function is again of low degree.
Abstract: Algebraic attacks on LFSR-based stream ciphers recover the secret key by solving an overdefined system of multivariate algebraic equations. They exploit multivariate relations involving key bits and output bits and become very efficient if such relations of low degrees may be found. Low degree relations have been shown to exist for several well known constructions of stream ciphers immune to all previously known attacks. Such relations may be derived by multiplying the output function of a stream cipher by a well chosen low degree function such that the product function is again of low degree. In view of algebraic attacks, low degree multiples of Boolean functions are a basic concern in the design of stream ciphers as well as of block ciphers.

486 citations
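
A toy demonstration over GF(2) of the central object here: a low-degree multiple (in fact an annihilator) of a higher-degree Boolean function. Monomials are frozensets of variable indices, polynomials are sets of monomials, and x_i^2 = x_i:

```python
def mul(p, q):
    out = set()
    for m1 in p:
        for m2 in q:
            m = m1 | m2              # multiply monomials (variables are idempotent)
            out ^= {m}               # coefficients live in GF(2): XOR duplicates away
    return out

f = {frozenset({1, 2, 3})}           # f = x1*x2*x3, degree 3
g = {frozenset(), frozenset({1})}    # g = 1 + x1, degree 1

print(mul(f, g))                     # set(): g*f = 0, so g annihilates f
```

A degree-1 annihilator like this is exactly what makes the multivariate equation system of an algebraic attack tractable.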


Proceedings Article
22 Aug 2004
TL;DR: A universal measure for comparing the entities of two ontologies is defined, based on a simple and homogeneous comparison principle; one-to-many relationships and circularity in entity descriptions constitute the key difficulties.
Abstract: Interoperability of heterogeneous systems on the Web will be admittedly achieved through an agreement between the underlying ontologies. However, the richer the ontology description language, the more complex the agreement process, and hence the more sophisticated the required tools. Among current ontology alignment paradigms, similarity-based approaches are both powerful and flexible enough for aligning ontologies expressed in languages like OWL. We define a universal measure for comparing the entities of two ontologies that is based on a simple and homogeneous comparison principle: Similarity depends on the type of entity and involves all the features that make up its definition (such as superclasses, properties, instances, etc.). One-to-many relationships and circularity in entity descriptions constitute the key difficulties in this context: These are dealt with through local matching of entity sets and iterative computation of recursively dependent similarities, respectively.

439 citations
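
A minimal sketch of the iterative computation of recursively dependent similarities, in plain Python; the real measure aggregates many features (superclasses, properties, instances) with type-specific weights, whereas this toy combines only a name similarity and one neighbor set per entity:

```python
def iterate_similarity(neigh1, neigh2, name_sim, alpha=0.5, iters=20):
    # neigh1/neigh2: dict entity -> list of neighboring entities, one per ontology
    sim = {(a, b): name_sim(a, b) for a in neigh1 for b in neigh2}
    for _ in range(iters):
        new = {}
        for a in neigh1:
            for b in neigh2:
                if neigh1[a] and neigh2[b]:
                    # structural part: best-match average over neighbor pairs
                    best = [max(sim[x, y] for y in neigh2[b]) for x in neigh1[a]]
                    structural = sum(best) / len(best)
                else:
                    structural = 0.0
                new[a, b] = alpha * name_sim(a, b) + (1 - alpha) * structural
        sim = new
    return sim
```

Circular entity descriptions are unproblematic here: the similarities are simply iterated toward a fixed point instead of being expanded recursively.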


Journal ArticleDOI
TL;DR: Continuous Curvature (CC) Steer is the first steering method to compute paths with 1) continuous curvature, 2) upper-bounded curvature, and 3) upper-bounded curvature derivative; it also verifies a topological property that ensures that, when it is used within a general motion-planning scheme, it yields a complete collision-free path planner.
Abstract: This paper presents Continuous Curvature (CC) Steer, a steering method for car-like vehicles, i.e., an algorithm planning paths in the absence of obstacles. CC Steer is the first to compute paths with: 1) continuous curvature; 2) upper-bounded curvature; and 3) upper-bounded curvature derivative. CC Steer also verifies a topological property that ensures that when it is used within a general motion-planning scheme, it yields a complete collision-free path planner. The coupling of CC Steer with a general planning scheme yields a path planner that computes collision-free paths verifying the properties mentioned above. Accordingly, a car-like vehicle can follow such paths without ever having to stop in order to reorient its front wheels. Besides, such paths can be followed with a nominal speed which is proportional to the curvature derivative limit. The paths computed by CC Steer are made up of line segments, circular arcs, and clothoid arcs. They are not optimal in length. However, it is shown that they converge toward the optimal "Reeds and Shepp" paths when the curvature derivative upper bound tends to infinity. The capabilities of CC Steer to serve as an efficient steering method within two general planning schemes are also demonstrated.

432 citations
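
The elementary building block added by CC Steer over Reeds and Shepp paths is the clothoid arc, along which curvature varies linearly with arc length. A simple numerical trace, assuming NumPy (illustrative only; the paper assembles such arcs with line segments and circular arcs into complete paths):

```python
import numpy as np

def clothoid_arc(length, sigma, kappa_max, ds=1e-3):
    # sigma: curvature derivative (bounded); kappa_max: curvature bound
    x = y = theta = kappa = 0.0
    pts = [(x, y)]
    for _ in range(int(length / ds)):
        x += np.cos(theta) * ds
        y += np.sin(theta) * ds
        theta += kappa * ds
        kappa = np.clip(kappa + sigma * ds, -kappa_max, kappa_max)
        pts.append((x, y))
    return np.array(pts)
```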


Journal ArticleDOI
TL;DR: The analytical form of the interaction matrix related to any moment that can be computed from segmented images is determined, based on Green's theorem; the general result is then applied to classical geometrical primitives.
Abstract: In this paper, we determine the analytical form of the interaction matrix related to any moment that can be computed from segmented images. The derivation method we present is based on Green's theorem. We apply this general result to classical geometrical primitives. We then consider using moments in image-based visual servoing. For that, we select six combinations of moments to control the six degrees of freedom of the system. These features are particularly adequate if we consider a planar object and configurations such that the object and camera planes are parallel at the desired position. The experimental results we present show that a correct behavior of the system is obtained if we consider either a simple symmetrical object or a planar object with complex and unknown shape.

413 citations


Journal ArticleDOI
TL;DR: Given a qualitative model of a genetic regulatory network, consisting of a system of PL differential equations and inequality constraints on the parameter values, the method produces a graph of qualitative states and transitions between qualitative states, summarizing the qualitative dynamics of the system.

Book ChapterDOI
07 Nov 2004
TL;DR: A format for expressing alignments in RDF is presented, together with an implementation of this format as an Alignment API, which can be seen as an extension of the OWL API and shares some design goals with it; it is shown how this API can be used for effectively aligning ontologies and completing partial alignments, thresholding alignments, or generating axioms and transformations.
Abstract: Ontologies are seen as the solution to data heterogeneity on the web. However, the available ontologies are themselves source of heterogeneity. This can be overcome by aligning ontologies, or finding the correspondence between their components. These alignments deserve to be treated as objects: they can be referenced on the web as such, be completed by an algorithm that improves a particular alignment, be compared with other alignments and be transformed into a set of axioms or a translation program. We present here a format for expressing alignments in RDF, so that they can be published on the web. Then we propose an implementation of this format as an Alignment API, which can be seen as an extension of the OWL API and shares some design goals with it. We show how this API can be used for effectively aligning ontologies and completing partial alignments, thresholding alignments or generating axioms and transformations.
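
A hypothetical sketch of an alignment as a manipulable object, in Python; the actual Alignment API is a Java library, so these names are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    entity1: str          # URI of an entity in ontology 1
    entity2: str          # URI of an entity in ontology 2
    relation: str         # e.g. "=" (equivalence) or "<" (subsumption)
    strength: float       # confidence in [0, 1]

def threshold(alignment, cutoff):
    # One of the operations mentioned above: keep only confident correspondences.
    return [c for c in alignment if c.strength >= cutoff]
```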

Proceedings ArticleDOI
27 Jun 2004
TL;DR: This work describes a learning-based method for recovering 3D human body pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes; the resulting mean angular errors are a factor of 3 better than the current state of the art for the much simpler upper-body problem.
Abstract: We describe a learning based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. For the main regression, we evaluate both regularized least squares and relevance vector machine (RVM) regressors over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. For realism and good generalization with respect to viewpoints, we train the regressors on images resynthesized from real human motion capture data, and test them both quantitatively on similar independent test data, and qualitatively on a real image sequence. Mean angular errors of 6-7 degrees are obtained, a factor of 3 better than the current state of the art for the much simpler upper body problem.
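
A sketch of the regression step, assuming scikit-learn; RVMs are not available there, so kernel ridge regression stands in as a regularized kernel-basis regressor, and the descriptor/pose arrays are placeholders:

```python
from sklearn.kernel_ridge import KernelRidge

def train_pose_regressor(shape_descriptors, joint_angles):
    # shape_descriptors: (n_images, d) histogram-of-shape-contexts vectors
    # joint_angles: (n_images, n_angles) body pose parameters from motion capture
    return KernelRidge(kernel="rbf", alpha=1.0).fit(shape_descriptors, joint_angles)

# pose = train_pose_regressor(X_train, y_train).predict(test_descriptors)
```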

Journal ArticleDOI
TL;DR: A surprisingly simple framework for the random generation of combinatorial configurations, based on what the authors call Boltzmann models, is proposed; the resulting algorithms can be implemented easily, be analysed mathematically with great precision, and, when suitably tuned, tend to be very efficient in practice.
Abstract: This article proposes a surprisingly simple framework for the random generation of combinatorial configurations based on what we call Boltzmann models. The idea is to perform random generation of possibly complex structured objects by placing an appropriate measure spread over the whole of a combinatorial class – an object receives a probability essentially proportional to an exponential of its size. As demonstrated here, the resulting algorithms based on real-arithmetic operations often operate in linear time. They can be implemented easily, be analysed mathematically with great precision, and, when suitably tuned, tend to be very efficient in practice.
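
A concrete Boltzmann sampler for binary trees specified by B = Z + Z×B×B (each node carries one atom), a standard textbook instance of the framework rather than an example taken from the paper. With the control parameter x below the singularity 1/2, a leaf is drawn with probability x/B(x), otherwise an internal node with two independent recursive calls; conditioned on its size, the output is uniform:

```python
import math, random

def sample_tree(x):
    B = (1 - math.sqrt(1 - 4 * x * x)) / (2 * x)   # the generating function B(x)
    def gamma():
        if random.random() < x / B:                # rule Z: produce a leaf
            return "leaf"
        return ("node", gamma(), gamma())          # rule Z*B*B: an internal node
    return gamma()

t = sample_tree(0.45)   # x must stay below the singularity at 1/2
```

The check that the two branch probabilities sum to 1 is exactly the defining equation B(x) = x + x·B(x)².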

Book ChapterDOI
31 Aug 2004
TL;DR: A process algebra RCCS, in the style of CCS, is presented in which processes can backtrack; backtracking, just as plain forward computation, is seen as a synchronization and incurs no additional cost on the communication structure.
Abstract: One obtains in this paper a process algebra RCCS, in the style of CCS, where processes can backtrack. Backtrack, just as plain forward computation, is seen as a synchronization and incurs no additional cost on the communication structure. It is shown that, given a past, a computation step can be taken back if and only if it leads to a causally equivalent past.

Proceedings ArticleDOI
10 Oct 2004
TL;DR: It is shown that when graphs are bigger than twenty vertices, the matrix-based visualization performs better than node-link diagrams on most tasks, and only path finding is consistently in favor of node-link diagrams throughout the evaluation.
Abstract: In this paper, we describe a taxonomy of generic graph related tasks and an evaluation aiming at assessing the readability of two representations of graphs: matrix-based representations and node-link diagrams. This evaluation bears on seven generic tasks and leads to important recommendations with regard to the representation of graphs according to their size and density. For instance, we show that when graphs are bigger than twenty vertices, the matrix-based visualization performs better than node-link diagrams on most tasks. Only path finding is consistently in favor of node-link diagrams throughout the evaluation.

Proceedings ArticleDOI
18 Oct 2004
TL;DR: This paper presents a generic framework to implement reliable and efficient peer sampling services, which generalizes existing approaches and makes it easy to introduce new ones, and shows that all of them lead to different peer sampling services, none of which is uniformly random.
Abstract: In recent years, the gossip-based communication model in large-scale distributed systems has become a general paradigm with important applications which include information dissemination, aggregation, overlay topology management and synchronization. At the heart of all of these protocols lies a fundamental distributed abstraction: the peer sampling service. In short, the aim of this service is to provide every node with peers to exchange information with. Analytical studies reveal a high reliability and efficiency of gossip-based protocols, under the (often implicit) assumption that the peers to send gossip messages to are selected uniformly at random from the set of all nodes. In practice, instead of requiring all nodes to know all the peer nodes so that a random sample could be drawn, a scalable and efficient way to implement the peer sampling service is by constructing and maintaining dynamic unstructured overlays through gossiping membership information itself. This paper presents a generic framework to implement reliable and efficient peer sampling services. The framework generalizes existing approaches and makes it easy to introduce new ones. We use this framework to explore and compare several implementations of our abstraction. Through extensive experimental analysis, we show that all of them lead to different peer sampling services none of which is uniformly random. This clearly renders traditional theoretical approaches invalid, when the underlying peer sampling service is based on a gossip-based scheme. Our observations also help explain important differences between design choices of peer sampling algorithms, and how these impact the reliability of the corresponding service.
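
A skeleton of one gossip round over partial views, in Python; `send` and `receive` are abstract transport callbacks, and this is just one point in the design space the framework explores (peer selection, view propagation and view merging can each be instantiated differently):

```python
import random

VIEW_SIZE = 20

def gossip_round(my_id, my_view, send, receive):
    # my_view: non-empty list of (peer_id, age) descriptors
    my_view = [(p, age + 1) for p, age in my_view]
    peer = max(my_view, key=lambda d: d[1])[0]        # e.g. pick the oldest peer
    buffer = random.sample(my_view, min(len(my_view), VIEW_SIZE // 2))
    send(peer, buffer + [(my_id, 0)])                 # push part of the view + myself
    merged = my_view + receive(peer)                  # pull the peer's buffer
    seen, fresh = set(), []
    for p, age in sorted(merged, key=lambda d: d[1]): # keep the freshest descriptor
        if p not in seen and p != my_id:
            seen.add(p)
            fresh.append((p, age))
    return fresh[:VIEW_SIZE]
```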

Journal ArticleDOI
TL;DR: The QoS limitations of IEEE 802.11 wireless MAC layers are analyzed and different QoS enhancement techniques proposed for802.11 WLAN are described and classified along with their advantages/drawbacks.
Abstract: Quality of service (QoS) is a key problem of today’s IP networks. Many frameworks (IntServ, DiffServ, MPLS, etc.) have been proposed to provide service differentiation in the Internet. At the same time, the Internet is becoming more and more heterogeneous due to the recent explosion of wireless networks. In wireless environments, bandwidth is scarce and channel conditions are time-varying and sometimes highly lossy. Many previous research works show that what works well in a wired network cannot be directly applied in the wireless environment. Although IEEE 802.11 wireless LAN (WLAN) is the most widely used WLAN standard today, it cannot provide QoS support for the increasing number of multimedia applications. Thus, a large number of 802.11 QoS enhancement schemes have been proposed, each one focusing on a particular mode. This paper summarizes all these schemes and presents a survey of current research activities. First, we analyze the QoS limitations of IEEE 802.11 wireless MAC layers. Then, different QoS enhancement techniques proposed for 802.11 WLAN are described and classified along with their advantages/drawbacks. Finally, the upcoming IEEE 802.11e QoS enhancement standard is introduced and studied in detail.

Journal ArticleDOI
TL;DR: An efficient algorithm based on interval analysis allows us to solve the forward kinematics of a Gough-type parallel robot, i.e., to determine all the possible poses of the platform for given joint coordinates; it is competitive in terms of computation time with a real-time algorithm such as the Newton scheme, while being safer.
Abstract: We consider in this paper a Gough-type parallel robot and we present an efficient algorithm based on interval analysis that allows us to solve the forward kinematics, i.e., to determine all the possible poses of the platform for given joint coordinates. This algorithm is numerically robust as numerical round-off errors are taken into account; the provided solutions are either exact in the sense that it will be possible to refine them up to an arbitrary accuracy or they are flagged only as a “possible” solution as either the numerical accuracy of the computation does not allow us to guarantee them or the robot is in a singular configuration. It allows us to take into account physical and technological constraints on the robot (for example, limited motion of the passive joints). Another advantage is that, assuming realistic constraints on the velocity of the robot, it is competitive in terms of computation time with a real-time algorithm such as the Newton scheme, while being safer.
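
A one-dimensional illustration of the interval branch-and-prune principle behind such solvers (the paper works on the full multivariate kinematic system; this sketch only conveys why discarded boxes are safe to discard):

```python
def f_interval(lo, hi):
    # Interval extension of f(x) = x^2 - 2 on [lo, hi], assuming 0 <= lo
    return (lo * lo - 2, hi * hi - 2)

def isolate(lo, hi, eps=1e-9):
    boxes, solutions = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        fa, fb = f_interval(a, b)
        if fa > 0 or fb < 0:
            continue                    # 0 not in f([a, b]): no root, certified
        if b - a < eps:
            solutions.append((a, b))    # tiny box that may contain a root
        else:
            m = (a + b) / 2
            boxes += [(a, m), (m, b)]
    return solutions

print(isolate(0.0, 4.0))                # brackets sqrt(2) ~ 1.41421356
```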

Journal ArticleDOI
TL;DR: A differential-geometric framework is proposed to define PDEs acting on manifold-constrained datasets, including the case of images taking values in matrix manifolds defined by orthogonal and spectral constraints.
Abstract: Nonlinear diffusion equations are now widely used to restore and enhance images. They make it possible to eliminate noise and artifacts while preserving large global features, such as object contours. In this context, we propose a differential-geometric framework to define PDEs acting on some manifold-constrained datasets. We consider the case of images taking values in matrix manifolds defined by orthogonal and spectral constraints. We directly incorporate the geometry and natural metric of the underlying configuration space (viewed as a Lie group or a homogeneous space) in the design of the corresponding flows. Our numerical implementation relies on structure-preserving integrators that intrinsically respect the geometry of the constraints. The efficiency and versatility of this approach are illustrated through the anisotropic smoothing of diffusion tensor volumes in medical imaging.
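
A tiny illustration of what "structure-preserving integrator" means here, assuming NumPy/SciPy: stepping a flow dQ/dt = Q·A with A skew-symmetric through the matrix exponential keeps Q exactly on the orthogonal group, with no drift to project away (illustrative only; the paper applies such integrators to its PDE flows):

```python
import numpy as np
from scipy.linalg import expm

def step(Q, A, h):
    A = 0.5 * (A - A.T)       # project the update onto skew-symmetric matrices
    return Q @ expm(h * A)    # exp of a skew matrix is orthogonal: Q stays orthogonal

Q = np.eye(3)
rng = np.random.default_rng(0)
for _ in range(100):
    Q = step(Q, rng.standard_normal((3, 3)), 0.01)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True, up to round-off
```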

Book ChapterDOI
24 May 2004
TL;DR: Fractal is a hierarchical and reflective component model with sharing; its components can be endowed with arbitrary reflective capabilities, from black boxes to components that allow fine-grained manipulation of their internal structure.
Abstract: This paper presents Fractal, a hierarchical and reflective component model with sharing. Components in this model can be endowed with arbitrary reflective capabilities, from black-boxes to components that allow a fine-grained manipulation of their internal structure. The paper describes Julia, a Java implementation of the model, a small but efficient run-time framework, which relies on a combination of interceptors and mixins for the programming of reflective features of components. The paper presents a qualitative and quantitative evaluation of this implementation, showing that component-based programming in Fractal can be made very efficient.
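
A hypothetical Python wrapper conveying the interception idea only; Julia actually generates and composes interceptor and mixin classes in Java, so none of these names come from the framework:

```python
class LifecycleInterceptor:
    """Wraps a component; functional calls are intercepted by a lifecycle check."""
    def __init__(self, component):
        self.component, self.started = component, False

    def start(self):                  # a minimal control operation on the "membrane"
        self.started = True

    def __getattr__(self, name):
        if not self.started:
            raise RuntimeError("component is stopped")
        return getattr(self.component, name)   # delegate to the wrapped content
```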

Proceedings Article
01 Sep 2004
TL;DR: Numerical simulations are reported that illustrate the potentialities and limitations of EMD in two signal processing tasks, namely detrending and denoising.
Abstract: Empirical Mode Decomposition (EMD) has recently been introduced as a local and fully data-driven technique aimed at decomposing nonstationary multicomponent signals into “intrinsic” AM-FM contributions. Although the EMD principle is appealing and its implementation easy, performance analysis is difficult since no analytical description of the method is available. We will here report on numerical simulations illustrating the potentialities and limitations of EMD in two signal processing tasks, namely detrending and denoising. In both cases, the idea is to make use of partial reconstructions, the relevant modes being selected on the basis of the statistical properties of modes that have been empirically established.
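
A sketch of detrending by partial reconstruction, assuming the third-party PyEMD package; how many trailing (slowest) modes constitute the trend is an empirical choice, which is precisely what the paper's mode statistics are for:

```python
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 15 * t) + 3 * t**2     # oscillation + slow trend

imfs = EMD().emd(signal)            # rows: fastest modes first, slowest/residual last
detrended = imfs[:-2].sum(axis=0)   # partial reconstruction without the slowest modes
```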

Proceedings ArticleDOI
27 Sep 2004
TL;DR: This paper proposes two new control schemes based on efficient second-order minimization techniques, which perform like the Newton minimization algorithm and thus have a high convergence rate.
Abstract: In this paper, several vision-based robot control methods are classified following an analogy with well known minimization methods. Comparing the rate of convergence between minimization algorithms helps us to understand the difference of performance of the control schemes. In particular, it is shown that standard vision-based control methods have in general low rates of convergence. Thus, the performance of vision-based control could be improved using schemes which perform like the Newton minimization algorithm that has a high convergence rate. Unfortunately, the Newton minimization method needs the computation of second derivatives that can be ill-conditioned causing convergence problems. In order to solve these problems, this paper proposes two new control schemes based on efficient second-order minimization techniques.
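
One common way to realize such a second-order-like scheme, sketched with NumPy under the usual visual-servoing notation: the control law uses the mean of the current and desired interaction matrices, avoiding explicit second derivatives. Treat this as one reading of the idea, not the paper's exact formulation:

```python
import numpy as np

def control_velocity(L_current, L_desired, error, gain=0.5):
    # error: current visual features minus desired features
    L_mean = 0.5 * (L_current + L_desired)     # second-order approximation
    return -gain * np.linalg.pinv(L_mean) @ error
```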

Journal ArticleDOI
TL;DR: In this article, a nonlinear projection on subsemimodules is introduced, where the projection of a point is the maximal approximation from below of the point in the subsemimodule.
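
In the max-plus setting this projection has a closed residuation formula: the best approximation from below of x in the subsemimodule spanned by the columns of A is A ⊗ (A \ x), with (A \ x)_j = min_i (x_i − A_ij). A NumPy sketch of that formula (an illustration of the concept, not code from the article):

```python
import numpy as np

def project(A, x):
    lam = (x[:, None] - A).min(axis=0)      # residuated "division" A \ x
    return (A + lam[None, :]).max(axis=1)   # max-plus product A (x) lam

A = np.array([[0.0, 2.0], [1.0, 0.0]])
x = np.array([3.0, 3.0])
print(project(A, x))                        # the greatest point of span(A) below x
```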

Journal ArticleDOI
TL;DR: It is shown that the amplitude distribution of the complex wave, the real and the imaginary components of which are assumed to be distributed according to the α-stable distribution, is a generalization of the Rayleigh distribution.
Abstract: Synthetic aperture radar (SAR) imagery has found important applications due to its clear advantages over optical satellite imagery, one of them being the ability to operate in various weather conditions. However, due to the physics of the radar imaging process, SAR images contain unwanted artifacts in the form of a granular look which is called speckle. The assumptions of the classical SAR image generation model lead to a Rayleigh distribution model for the histogram of the SAR image. However, some experimental data such as images of urban areas show impulsive characteristics that correspond to underlying heavy-tailed distributions, which are clearly non-Rayleigh. Some alternative distributions have been suggested such as the Weibull, log-normal, and the k-distribution, which had varying degrees of success depending on the application. Recently, an alternative model, namely the α-stable distribution, has been suggested for modeling radar clutter. In this paper, we show that the amplitude distribution of the complex wave, the real and the imaginary components of which are assumed to be distributed according to the α-stable distribution, is a generalization of the Rayleigh distribution. We demonstrate that the amplitude distribution is a mixture of Rayleighs as is the k-distribution, in accordance with earlier work on modeling SAR images which showed that almost all successful SAR image models could be expressed as mixtures of Rayleighs. We also present parameter estimation techniques based on negative order moments for the new model. Finally, we test the performance of the model on urban images and compare with other models such as Rayleigh, Weibull, and the k-distribution.
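
Sampling from this amplitude model, assuming SciPy: draw the real and imaginary components from a symmetric α-stable law and take the modulus. At α = 2 the components are Gaussian and the amplitude is exactly Rayleigh; α < 2 gives the heavy-tailed generalization:

```python
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5                                       # alpha < 2: impulsive, non-Rayleigh
re = levy_stable.rvs(alpha, beta=0, size=10_000)  # real component
im = levy_stable.rvs(alpha, beta=0, size=10_000)  # imaginary component
amplitude = np.hypot(re, im)                      # heavy-tailed "generalized Rayleigh"
```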

Journal ArticleDOI
TL;DR: A generic algorithm is presented that describes, in a unified framework, all the known algorithms based on Descartes' rule of signs and the bisection strategy; it is optimal in terms of memory usage and as fast as both Collins and Akritas' algorithm and Krandick's variant, independently of the input polynomial.
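
A minimal Descartes/bisection root isolator on (0, 1), in the Collins-Akritas style that the paper unifies; this sketch is neither memory-optimal nor fast, it only shows the recursion skeleton. Coefficients are in descending order:

```python
def taylor_shift_1(a):
    # Coefficients of p(x+1) via repeated Horner-style accumulation
    a = list(a)
    for i in range(len(a) - 1):
        for j in range(1, len(a) - i):
            a[j] += a[j - 1]
    return a

def sign_variations(a):
    signs = [c for c in a if c != 0]
    return sum(s * t < 0 for s, t in zip(signs, signs[1:]))

def isolate01(a, lo=0.0, hi=1.0):
    # Descartes bound on the roots of p in (lo, hi): sign variations of
    # (x+1)^n p(1/(x+1)); 0 or 1 variations are conclusive, otherwise bisect.
    v = sign_variations(taylor_shift_1(a[::-1]))
    if v == 0:
        return []
    if v == 1:
        return [(lo, hi)]
    n, mid = len(a) - 1, (lo + hi) / 2
    left = [c / 2 ** (n - i) for i, c in enumerate(a)]  # p(x/2): left half as (0, 1)
    right = taylor_shift_1(left)                        # p((x+1)/2): right half
    return isolate01(left, lo, mid) + isolate01(right, mid, hi)

# p(x) = (x - 1/4)(x - 3/4): prints [(0.0, 0.5), (0.5, 1.0)]
print(isolate01([1.0, -1.0, 3.0 / 16.0]))
```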

Journal ArticleDOI
TL;DR: The problem of globally uniformly asymptotically and locally exponentially stabilizing a family of nonlinear feedforward systems when there is a delay in the input is solved and explicit expressions of bounded control laws are determined.
Abstract: The problem of globally uniformly asymptotically and locally exponentially stabilizing a family of nonlinear feedforward systems when there is a delay in the input is solved. No limitation on the size of the delay is imposed. Explicit expressions of bounded control laws are determined.

Proceedings ArticleDOI
28 Jun 2004
TL;DR: This paper investigates logical formulations of non-interference that allow a more precise analysis of programs; it appears that such formulations are often sound and complete, and also amenable to interactive or automated verification techniques, such as theorem-proving or model-checking.
Abstract: Non-interference is a high-level security property that guarantees the absence of illicit information leakages through executing programs. More precisely, non-interference for a program assumes a separation between secret inputs and public inputs on the one hand, and secret outputs and public outputs on the other hand, and requires that the value of public outputs does not depend on the value of secret inputs. A common means to enforce non-interference is to use an information flow type system. However, such type systems are inherently imprecise, and reject many secure programs, even for simple programming languages. The purpose of this paper is to investigate logical formulations of noninterference that allow a more precise analysis of programs. It appears that such formulations are often sound and complete, and also amenable to interactive or automated verification techniques, such as theorem-proving or model-checking. We illustrate the applicability of our method in several scenarios, including a simple imperative language, a non-deterministic language, and finally a language with shared mutable data structures.
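
The simplest logical reading of non-interference is a property of two runs: executions started in states that agree on public inputs must agree on public outputs, whatever the secret inputs. A testing-style sketch of this two-run view in Python, which can refute non-interference but of course cannot prove it (the paper's logical formulations are what make full verification possible):

```python
import random

def leaks(program, low_input, trials=1000):
    for _ in range(trials):
        h1, h2 = random.randrange(2**16), random.randrange(2**16)
        if program(low_input, h1) != program(low_input, h2):
            return True           # a public output depends on a secret input
    return False

insecure = lambda low, high: low + (high % 2)   # leaks the secret's parity
print(leaks(insecure, low_input=7))             # True (with overwhelming probability)
```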

Proceedings ArticleDOI
22 Mar 2004
TL;DR: This paper extends previous work on a generic framework for the formal definition and interaction analysis of stateful aspects and introduces generic composition operators for aspects which enhance expressivity while preserving static analyzability of interactions.
Abstract: Aspect-Oriented Programming promises separation of concerns at the implementation level. However, aspects are not always orthogonal and aspect interaction is a fundamental problem. In this paper, we extend previous work on a generic framework for the formal definition and interaction analysis of stateful aspects. We propose three important extensions which enhance expressivity while preserving static analyzability of interactions. First, we provide support for variables in aspects in order to share information between different execution points. This allows the definition of more precise aspects and avoids the detection of spurious conflicts. Second, we introduce generic composition operators for aspects. This enables us to provide expressive support for the resolution of conflicts among interacting aspects. Finally, we offer a means to define applicability conditions for aspects. This makes interaction analysis more precise and paves the way for reuse of aspects by making explicit requirements on contexts in which aspects must be used.
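
A toy Python matcher conveying what "stateful aspect with variables" means: a pointcut sequence matched over an execution trace, with a variable bound at one event and checked at a later one (hypothetical; the paper's framework and its analyses are far richer):

```python
def match(trace):
    env, state = {}, 0
    for event, arg in trace:
        if state == 0 and event == "open":
            env["f"], state = arg, 1        # bind variable f at the first join point
        elif state == 1 and event == "close" and arg == env["f"]:
            return env                      # the sequence open(f) ... close(f) matched
    return None

print(match([("open", "a.txt"), ("write", "a.txt"), ("close", "a.txt")]))
```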