
Showing papers by "French Institute for Research in Computer Science and Automation" published in 2003


Proceedings ArticleDOI
18 Jun 2003
TL;DR: It is observed that the ranking of the descriptors is mostly independent of the interest region detector, that the SIFT-based descriptors perform best, and that moments and steerable filters show the best performance among the low-dimensional descriptors.
Abstract: In this paper we compare the performance of interest point descriptors. Many different descriptors have been proposed in the literature. However, it is unclear which descriptors are more appropriate and how their performance depends on the interest point detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the point detector. Our evaluation uses detection rate with respect to false positive rate as its criterion and is carried out for different image transformations. We compare SIFT descriptors (Lowe, 1999), steerable filters (Freeman and Adelson, 1991), differential invariants (Koenderink and van Doorn, 1987), complex filters (Schaffalitzky and Zisserman, 2002), moment invariants (Van Gool et al., 1996) and cross-correlation for different types of interest points. In this evaluation, we observe that the ranking of the descriptors does not depend on the point detector and that SIFT descriptors perform best. Steerable filters come second; they can be considered a good choice given their low dimensionality.

3,362 citations
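
The evaluation criterion stated above (detection rate with respect to false positive rate) is easy to reproduce on one's own descriptor data. The sketch below is a hedged illustration, not the authors' protocol: it assumes ground-truth correspondences are known (here, row i of desc1 and desc2 describe the same region) and sweeps a Euclidean distance threshold; the function name and toy data are assumptions for the example.

```python
# Hedged sketch (not the authors' code): ROC-style evaluation of descriptor
# matching, following the criterion stated in the abstract above.
import numpy as np

def detection_vs_false_positive(desc1, desc2, thresholds):
    """Return (detection_rate, false_positive_rate) arrays over thresholds."""
    # Pairwise Euclidean distances between all descriptors of the two images.
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    n = d.shape[0]
    correct = np.eye(n, dtype=bool)          # ground-truth correspondences
    rates = []
    for t in thresholds:
        matched = d < t
        det = matched[correct].sum() / correct.sum()
        fp = matched[~correct].sum() / (~correct).sum()
        rates.append((det, fp))
    return np.array(rates).T

# Random data standing in for SIFT / steerable-filter descriptors.
rng = np.random.default_rng(0)
a = rng.normal(size=(100, 128))
b = a + 0.1 * rng.normal(size=(100, 128))    # noisy copies of the same regions
det, fp = detection_vs_false_positive(a, b, np.linspace(0.1, 3.0, 30))
```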


Journal ArticleDOI
TL;DR: It is proved that the result of Donoho and Huo, concerning the replacement of the $\ell^0$ optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may be highly redundant.
Abstract: The purpose of this correspondence is to generalize a result by Donoho and Huo and Elad and Bruckstein on sparse representations of signals in a union of two orthonormal bases for $\mathbb{R}^N$. We consider general (redundant) dictionaries for $\mathbb{R}^N$, and derive sufficient conditions for having unique sparse representations of signals in such dictionaries. The special case where the dictionary is given by the union of $L \ge 2$ orthonormal bases for $\mathbb{R}^N$ is studied in more detail. In particular, it is proved that the result of Donoho and Huo, concerning the replacement of the $\ell^0$ optimization problem with a linear programming problem when searching for sparse representations, has an analog for dictionaries that may be highly redundant.

1,049 citations
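
The linear-programming reformulation mentioned above is the basis pursuit idea: replace the combinatorial $\ell^0$ problem with $\min \|x\|_1$ subject to $Dx = s$. The sketch below is an illustrative implementation of that generic relaxation, not code from the correspondence; the function name and toy dictionary are assumptions for the example.

```python
# Hedged sketch: the l1 ("basis pursuit") relaxation of the l0 problem,
# solved as a linear program
#   min sum(u)  s.t.  -u <= x <= u,  D x = s,
# for a redundant dictionary D (columns = atoms).
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, s):
    n_atoms = D.shape[1]
    # Decision variables are the stacked vector [x, u].
    c = np.concatenate([np.zeros(n_atoms), np.ones(n_atoms)])
    I = np.eye(n_atoms)
    A_ub = np.block([[I, -I], [-I, -I]])          #  x - u <= 0,  -x - u <= 0
    b_ub = np.zeros(2 * n_atoms)
    A_eq = np.hstack([D, np.zeros_like(D)])       #  D x = s
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=s,
                  bounds=[(None, None)] * (2 * n_atoms))
    return res.x[:n_atoms]

# Union of two orthonormal bases (identity and a random orthonormal basis).
rng = np.random.default_rng(1)
B, _ = np.linalg.qr(rng.normal(size=(32, 32)))
D = np.hstack([np.eye(32), B])
x_true = np.zeros(64); x_true[[3, 40]] = [1.0, -2.0]
x_hat = basis_pursuit(D, D @ x_true)   # typically recovers the sparse x_true
```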


Journal ArticleDOI
29 Jan 2003
TL;DR: The improvements, difficulties, and successes that have occurred with the synchronous languages since then are discussed.
Abstract: Twelve years ago, Proceedings of the IEEE devoted a special section to the synchronous languages. This paper discusses the improvements, difficulties, and successes that have occurred with the synchronous languages since then. Today, synchronous languages have been established as a technology of choice for modeling, specifying, validating, and implementing real-time embedded applications. The paradigm of synchrony has emerged as an engineer-friendly design method based on mathematically sound tools.

927 citations


Journal ArticleDOI
08 Sep 2003
TL;DR: The main components of audiovisual automatic speech recognition (ASR) are reviewed and novel contributions are presented in two main areas: first, the visual front-end design, based on a cascade of linear image transforms of an appropriate video region of interest, and subsequently, audiovisual speech integration.
Abstract: Visual speech information from the speaker's mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human-computer interface. In this paper, we review the main components of audiovisual automatic speech recognition (ASR) and present novel contributions in two main areas: first, the visual front-end design, based on a cascade of linear image transforms of an appropriate video region of interest, and subsequently, audiovisual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audiovisual speech asynchrony, and incorporating modality reliability estimates into the bimodal recognition process. We also briefly touch upon the issue of audiovisual adaptation. We apply our algorithms to three multisubject bimodal databases, ranging from small- to large-vocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves ASR over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.

790 citations
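
For readers unfamiliar with decision fusion, the following is a common stream-weighted formulation from the audiovisual ASR literature; it is given as a hedged illustration only and may differ from the exact integration models studied in the paper.

```latex
% Generic stream-weighted decision fusion for audiovisual ASR (a common
% formulation in this literature, not necessarily the paper's exact model).
% For HMM state c, audio observation o_A and visual observation o_V:
\log P(o_A, o_V \mid c) \;=\;
    \lambda_A \,\log P(o_A \mid c) \;+\; \lambda_V \,\log P(o_V \mid c),
\qquad \lambda_A, \lambda_V \ge 0 .
% The stream exponents \lambda_A, \lambda_V act as modality-reliability
% weights, e.g. \lambda_V is decreased in visually challenging conditions;
% they are often constrained to sum to one.
```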


Journal ArticleDOI
TL;DR: The problem of computing and optimizing the worst-case VaR can be cast as a semidefinite program; the approach extends to various other kinds of partial information on the distribution, including uncertainty in factor models, support constraints, and relative entropy information.
Abstract: Classical formulations of the portfolio optimization problem, such as mean-variance or Value-at-Risk (VaR) approaches, can result in a portfolio extremely sensitive to errors in the data, such as mean and covariance matrix of the returns. In this paper we propose a way to alleviate this problem in a tractable manner. We assume that the distribution of returns is partially known, in the sense that only bounds on the mean and covariance matrix are available. We define the worst-case Value-at-Risk as the largest VaR attainable, given the partial information on the returns' distribution. We consider the problem of computing and optimizing the worst-case VaR, and we show that these problems can be cast as semidefinite programs. We extend our approach to various other partial information on the distribution, including uncertainty in factor models, support constraints, and relative entropy information.

671 citations
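
As a worked illustration of the kind of result discussed above, the following is the closed form usually quoted for worst-case VaR when the first two moments are known exactly; it is reproduced here from memory of the worst-case VaR literature and should be checked against the paper before reuse.

```latex
% Worst-case VaR when the return mean \bar{x} and covariance \Gamma are known
% exactly (quoted from memory; verify constants against the paper).
% For portfolio weights w and confidence level \epsilon,
V_{\mathrm{wc}}(w) \;=\; \kappa(\epsilon)\,\sqrt{w^{\top}\Gamma\,w}\;-\;\bar{x}^{\top}w,
\qquad
\kappa(\epsilon) \;=\; \sqrt{\frac{1-\epsilon}{\epsilon}} .
% When only bounds on (\bar{x}, \Gamma) are available, the worst case must
% additionally be taken over the moment set, and minimizing the resulting
% V_wc over w is the semidefinite program studied in the paper.
```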


Journal ArticleDOI
TL;DR: Simple methods to choose sensible starting values for the EM algorithm for maximum likelihood parameter estimation in mixture models are compared; simple random initialization, which is probably the most commonly used way of initiating EM, is often outperformed by strategies using CEM, SEM or short runs of EM before running EM.

619 citations
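
One of the strategies compared in the paper, short runs of EM from several random starts followed by one full EM run, can be mimicked with standard tooling. The sketch below is a hedged illustration using scikit-learn, not the authors' code; the function name and all parameter values are assumptions for the example.

```python
# Hedged sketch of the "short runs of EM" initialization strategy.
import numpy as np
from sklearn.mixture import GaussianMixture

def em_with_short_runs(X, k, n_starts=10, short_iter=5, long_iter=500, seed=0):
    rng = np.random.RandomState(seed)
    best = None
    # 1) Several cheap EM runs from random initializations.
    for _ in range(n_starts):
        gm = GaussianMixture(n_components=k, covariance_type="full",
                             init_params="random", max_iter=short_iter,
                             random_state=rng.randint(1 << 30))
        gm.fit(X)
        if best is None or gm.lower_bound_ > best.lower_bound_:
            best = gm
    # 2) One long EM run warm-started from the best short run.
    final = GaussianMixture(n_components=k, covariance_type="full",
                            max_iter=long_iter,
                            weights_init=best.weights_,
                            means_init=best.means_,
                            precisions_init=best.precisions_)
    return final.fit(X)

# Toy data: two Gaussian clusters.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
model = em_with_short_runs(X, k=2)
```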


Proceedings ArticleDOI
01 Jul 2003
TL;DR: A novel polygonal remeshing technique that exploits a key aspect of surfaces: the intrinsic anisotropy of natural or man-made geometry, and provides the flexibility to produce meshes ranging from isotropic to anisotropic, from coarse to dense, and from uniform to curvature adapted.
Abstract: In this paper, we propose a novel polygonal remeshing technique that exploits a key aspect of surfaces: the intrinsic anisotropy of natural or man-made geometry. In particular, we use curvature directions to drive the remeshing process, mimicking the lines that artists themselves would use when creating 3D models from scratch. After extracting and smoothing the curvature tensor field of an input genus-0 surface patch, lines of minimum and maximum curvatures are used to determine appropriate edges for the remeshed version in anisotropic regions, while spherical regions are simply point sampled since there is no natural direction of symmetry locally. As a result our technique generates polygon meshes mainly composed of quads in anisotropic regions, and of triangles in spherical regions. Our approach provides the flexibility to produce meshes ranging from isotropic to anisotropic, from coarse to dense, and from uniform to curvature adapted.

614 citations


Book ChapterDOI
01 Jan 2003
TL;DR: A treebank project for French has annotated a newspaper corpus of 1 million words with part of speech, inflection, compounds, lemmas and constituency; some uses of the corpus are also presented.
Abstract: We present a treebank project for French. We have annotated a newspaper corpus of 1 million words with part of speech, inflection, compounds, lemmas and constituency. We describe the tagging and parsing phases of the project, and for each, the automatic tools, the guidelines and the validation process. We then present some uses of the corpus as well as some directions for future work.

509 citations


Book ChapterDOI
TL;DR: A protocol for evaluating verification algorithms on the BANCA database, a new large, realistic and challenging multi-modal database intended for training and testing multi-modal verification systems, is described.
Abstract: In this paper we describe the acquisition and content of a new large, realistic and challenging multi-modal database intended for training and testing multi-modal verification systems. The BANCA database was captured in four European languages in two modalities (face and voice). For recording, both high and low quality microphones and cameras were used. The subjects were recorded in three different scenarios, controlled, degraded and adverse over a period of three months. In total 208 people were captured, half men and half women. In this paper we also describe a protocol for evaluating verification algorithms on the database. The database will be made available to the research community through http://www.ee.surrey.ac.uk/Research/VSSP/banca.

470 citations


Proceedings ArticleDOI
16 Mar 2003
TL;DR: An adaptive service differentiation scheme for QoS enhancement in IEEE 802.11 wireless ad-hoc networks, called adaptive enhanced distributed coordination function (AEDCF), is derived from the new EDCF; it increases the medium utilization ratio and reduces the collision rate by more than 50%.
Abstract: This paper describes an adaptive service differentiation scheme for QoS enhancement in IEEE 802.11 wireless ad-hoc networks. Our approach, called adaptive enhanced distributed coordination function (AEDCF), is derived from the new EDCF introduced in the upcoming IEEE 802.11e standard. Our scheme aims to share the transmission channel efficiently. Relative priorities are provisioned by adjusting the size of the contention window (CW) of each traffic class, taking into account both application requirements and network conditions. We evaluate the performance of AEDCF through simulations and compare it with the EDCF scheme proposed in 802.11e. Results show that AEDCF outperforms the basic EDCF, especially at high traffic loads. Indeed, our scheme increases the medium utilization ratio and reduces the collision rate by more than 50%. While achieving delay differentiation, our scheme also obtains an overall goodput up to 25% higher than EDCF. Moreover, the complexity of AEDCF remains similar to that of the EDCF scheme, enabling the design of cheap implementations.

435 citations
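
To make the contention-window idea concrete, the sketch below mimics a per-class adaptive CW update in the spirit of AEDCF. The class names, constants and update formulas are illustrative placeholders, not the ones derived in the paper.

```python
# Illustrative sketch only: a per-class adaptive contention-window update.
class AdaptiveCW:
    def __init__(self, cw_min, cw_max, persistence_factor):
        self.cw_min, self.cw_max = cw_min, cw_max
        self.pf = persistence_factor          # per-class backoff factor (> 1)
        self.cw = cw_min
        self.collision_rate = 0.0             # smoothed estimate

    def on_collision(self):
        # Back off multiplicatively, bounded by the class maximum.
        self.cw = min(int(self.cw * self.pf), self.cw_max)

    def on_success(self, observed_collision_rate, alpha=0.8):
        # Exponentially smooth the measured collision rate, then shrink CW
        # slowly when collisions are frequent instead of resetting to cw_min
        # (resetting, as basic EDCF does, causes bursts of new collisions).
        self.collision_rate = (alpha * self.collision_rate
                               + (1 - alpha) * observed_collision_rate)
        shrink = max(0.5, self.collision_rate)   # placeholder shrink factor
        self.cw = max(self.cw_min, int(self.cw * shrink))

# Three traffic classes, from high to low priority (placeholder values).
classes = [AdaptiveCW(7, 31, 2.0), AdaptiveCW(15, 255, 4.0),
           AdaptiveCW(31, 1023, 8.0)]
```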


Journal ArticleDOI
TL;DR: This paper presents a type-based information flow analysis for a call-by-value λ-calculus equipped with references, exceptions and let-polymorphism, referred to as ML; the type system is constraint-based and has decidable type inference.
Abstract: This paper presents a type-based information flow analysis for a call-by-value λ-calculus equipped with references, exceptions and let-polymorphism, which we refer to as ML. The type system is constraint-based and has decidable type inference. Its noninterference proof is reasonably light-weight, thanks to the use of a number of orthogonal techniques. First, a syntactic segregation between values and expressions allows a lighter formulation of the type system. Second, noninterference is reduced to subject reduction for a nonstandard language extension. Lastly, a semi-syntactic approach to type soundness allows dealing with constraint-based polymorphism separately.

Book ChapterDOI
17 Aug 2003
TL;DR: A new and efficient attack on this cryptosystem, based on fast algorithms for computing Gröbner bases, is presented; it made it possible to break the first HFE challenge in only two days of CPU time using the new algorithm F5 implemented in C.
Abstract: In this paper, we review and explain the existing algebraic cryptanalyses of multivariate cryptosystems from the hidden field equation (HFE) family. These cryptanalyses break cryptosystems in the HFE family by solving multivariate systems of equations. In this paper we present a new and efficient attack on this cryptosystem based on fast algorithms for computing Gröbner bases. In particular, it was possible to break the first HFE challenge (80 bits) in only two days of CPU time by using the new algorithm F5 implemented in C.

Journal ArticleDOI
TL;DR: In this article, a necessary and sufficient condition for the stability of the perfectly matched layers (PML) model for a general hyperbolic system is derived from the geometrical properties of the slowness diagrams.

Journal ArticleDOI
TL;DR: A recently proposed graph-theoretic model, which helps capture the evolving character of complex communication networks, is used and extended in order to propose and formally analyze least-cost journeys in a class of dynamic networks where the changes in the topology can be predicted in advance.
Abstract: New technologies and the deployment of mobile and nomadic services are driving the emergence of complex communications networks that have a highly dynamic behavior. This naturally engenders new route-discovery problems under changing conditions over these networks. Unfortunately, the temporal variations in the network topology are hard to capture effectively in a classical graph model. In this paper, we use and extend a recently proposed graph theoretic model, which helps capture the evolving characteristic of such networks, in order to propose and formally analyze least-cost journeys (the analog of paths in usual graphs) in a class of dynamic networks, where the changes in the topology can be predicted in advance. Cost measures investigated here are hop count (shortest journeys), arrival date (foremost journeys), and time span (fastest journeys).
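
To illustrate one of the cost measures above, the sketch below computes foremost (earliest-arrival) journeys in a simple contact-list model of a time-evolving graph. This is a hedged simplification of the paper's evolving-graph formalism; the function name and contact tuples are assumptions for the example.

```python
# Hedged sketch: foremost journeys (earliest arrival date) when the topology
# changes are known in advance, given as a list of scheduled contacts.
def foremost_journeys(contacts, source, t_start=0):
    """contacts: iterable of (u, v, departure_time, arrival_time), arr > dep."""
    arrival = {source: t_start}               # earliest known arrival per node
    # A contact is usable only if we have already reached u when it departs;
    # with arr > dep, processing contacts by departure time is sufficient.
    for u, v, dep, arr in sorted(contacts, key=lambda c: c[2]):
        if u in arrival and arrival[u] <= dep and arr < arrival.get(v, float("inf")):
            arrival[v] = arr
    return arrival

# Example: the direct link a->c departs later than a->b but arrives earlier
# than the two-hop journey a->b->c.
contacts = [("a", "b", 1, 2), ("b", "c", 3, 5), ("a", "c", 2, 4)]
print(foremost_journeys(contacts, "a"))       # {'a': 0, 'b': 2, 'c': 4}
```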

Proceedings ArticleDOI
18 Jun 2003
TL;DR: This work uses simple kinematic reasoning to enumerate the tree of possible forwards/backwards flips, thus greatly speeding the search within each linked group of minima in the model-image matching cost function.
Abstract: A major difficulty for 3D (three-dimensional) human body tracking from monocular image sequences is the near nonobservability of kinematic degrees of freedom that generate motion in depth. For known link (body segment) lengths, the strict nonobservabilities reduce to twofold 'forwards/backwards flipping' ambiguities for each link. These imply $2^{\#\mathrm{links}}$ formal inverse kinematics solutions for the full model, and hence linked groups of $O(2^{\#\mathrm{links}})$ local minima in the model-image matching cost function. Choosing the wrong minimum leads to rapid mistracking, so for reliable tracking, rapid methods of investigating alternative minima within a group are needed. Previous approaches to this have used generic search methods that do not exploit the specific problem structure. Here, we complement these by using simple kinematic reasoning to enumerate the tree of possible forwards/backwards flips, thus greatly speeding the search within each linked group of minima. Our methods can be used either deterministically, or within stochastic 'jump-diffusion' style search processes. We give experimental results on some challenging monocular human tracking sequences, showing how the new kinematic-flipping based sampling method improves and complements existing ones.
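
The combinatorics behind the $2^{\#\mathrm{links}}$ ambiguity can be made concrete in a few lines; the sketch below simply enumerates the flip assignments and only illustrates the size of the search space, not the paper's pruned tree search.

```python
# Hedged sketch: each body link has a twofold forwards/backwards ambiguity,
# so a kinematic chain with n links has up to 2**n formally equivalent
# inverse-kinematics solutions.
from itertools import product

def enumerate_flips(n_links):
    """Yield every assignment of 'forwards'/'backwards' to the n links."""
    for flips in product((+1, -1), repeat=n_links):
        yield flips                      # +1 = forwards, -1 = backwards

# A body model with, say, 10 links already has 1024 flip patterns, which is
# why the paper prunes and samples this tree rather than scoring every
# configuration independently.
print(sum(1 for _ in enumerate_flips(10)))   # 1024
```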

Journal Article
TL;DR: In this paper, the authors present a new and efficient attack on this cryptosystem based on fast algorithms for computing Gröbner bases, which can break the first HFE challenge in only two days of CPU time by using the new algorithm F5 implemented in C.
Abstract: In this paper, we review and explain the existing algebraic cryptanalyses of multivariate cryptosystems from the hidden field equation (HFE) family. These cryptanalyses break cryptosystems in the HFE family by solving multivariate systems of equations. In this paper we present a new and efficient attack on this cryptosystem based on fast algorithms for computing Gröbner bases. In particular, it was possible to break the first HFE challenge (80 bits) in only two days of CPU time by using the new algorithm F5 implemented in C. From a theoretical point of view we study the algebraic properties of the equations produced by instances of the HFE cryptosystems and show why they yield systems of equations easier to solve than random systems of quadratic equations of the same sizes. Moreover we are able to bound the maximal degree occurring in the Gröbner basis computation. As a consequence, we gain a deeper understanding of the algebraic cryptanalysis against these cryptosystems. We use this understanding to devise a specific algorithm based on sparse linear algebra. In general, we conclude that the cryptanalysis of HFE can be performed in polynomial time. We also revisit the security estimates for existing schemes in the *FE family.

Journal ArticleDOI
19 Feb 2003
TL;DR: This paper addresses the forbidden state problem of Petri nets with liveness requirements and uncontrollable transitions; the proposed approach computes a maximally permissive PN controller whenever such a controller exists.
Abstract: This paper addresses the forbidden state problem of Petri nets (PN) with liveness requirements and uncontrollable transitions. The proposed approach computes a maximally permissive PN controller, whenever such a controller exists. The first step, based on Ramadge-Wonham-like reasoning (1989), determines the legal and live maximal behavior the controlled PN should have. In the second step, the theory of regions is used to design control places to add to the original model to realize the desired behavior. Furthermore, necessary and sufficient conditions for the existence of control places realizing the maximally permissive control are given. A parameterized manufacturing application with a significant state space is used to show the efficiency of the proposed approach.

Proceedings ArticleDOI
13 Oct 2003
TL;DR: The evaluation shows that local invariant descriptors are an appropriate representation for object classes such as cars, and it underlines the importance of feature selection.
Abstract: We introduce a novel method for constructing and selecting scale-invariant object parts. Scale-invariant local descriptors are first grouped into basic parts. A classifier is then learned for each of these parts, and feature selection is used to determine the most discriminative ones. This approach allows robust part detection, and it is invariant under scale changes; that is, neither the training images nor the test images have to be normalized. The proposed method is evaluated in car detection tasks with significant variations in viewing conditions, and promising results are demonstrated. Different local regions, classifiers and feature selection methods are quantitatively compared. Our evaluation shows that local invariant descriptors are an appropriate representation for object classes such as cars, and it underlines the importance of feature selection.

Proceedings ArticleDOI
23 Jun 2003
TL;DR: This paper addresses the pointwise estimation of differential properties of a smooth manifold S (a curve in the plane or a surface in 3D), assuming a point cloud sampled over S is provided, and is among the first works providing accurate estimates for differential quantities of order three and more.
Abstract: This paper addresses the pointwise estimation of differential properties of a smooth manifold S (a curve in the plane or a surface in 3D), assuming a point cloud sampled over S is provided. The method consists of fitting the local representation of the manifold using a jet, by either interpolating or approximating. A jet is a truncated Taylor expansion, and the incentive for using jets is that they encode all local geometric quantities, such as normals or curvatures. On the way to using jets, the question of estimating differential properties is recast into the more general framework of multivariate interpolation/approximation, a well-studied problem in numerical analysis. From a theoretical perspective, we prove several convergence results when the samples get denser. For curves and surfaces, these results involve asymptotic estimates with convergence rates depending upon the degree of the jet used. For the particular case of curves, an error bound is also derived. To the best of our knowledge, these results are among the first ones providing accurate estimates for differential quantities of order three and more. On the algorithmic side, we solve the interpolation/approximation problem using Vandermonde systems. Experimental results for surfaces of $\mathbb{R}^3$ are reported. These experiments illustrate the asymptotic convergence results, but also the robustness of the methods on general Computer Graphics models.
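
A minimal sketch of the jet-fitting idea, assuming the neighborhood is already expressed in a local frame centered at the query point (z roughly along the normal): fit a degree-2 jet by least squares on a Vandermonde-type matrix and read the principal curvatures off the fundamental forms. This is a simplified illustration, not the paper's algorithm or its convergence machinery.

```python
# Hedged sketch: curvature estimation from a degree-2 jet fit.
import numpy as np

def principal_curvatures(neighbors):
    x, y, z = neighbors[:, 0], neighbors[:, 1], neighbors[:, 2]
    # Vandermonde-type matrix for the degree-2 jet z = f(x, y).
    V = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    a = np.linalg.lstsq(V, z, rcond=None)[0]
    fx, fy = a[1], a[2]
    fxx, fxy, fyy = 2*a[3], a[4], 2*a[5]
    # First and second fundamental forms at the origin of the local frame.
    I = np.array([[1 + fx*fx, fx*fy], [fx*fy, 1 + fy*fy]])
    II = np.array([[fxx, fxy], [fxy, fyy]]) / np.sqrt(1 + fx*fx + fy*fy)
    # Principal curvatures = eigenvalues of the Weingarten map I^{-1} II.
    k = np.linalg.eigvals(np.linalg.solve(I, II))
    return np.sort(np.real(k))

# Points sampled on the paraboloid z = x^2 + 0.5 y^2 near the origin,
# where the exact principal curvatures are (1.0, 2.0).
g = np.linspace(-0.1, 0.1, 7)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel(),
                       X.ravel()**2 + 0.5 * Y.ravel()**2])
print(principal_curvatures(pts))       # approximately [1.0, 2.0]
```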

Journal ArticleDOI
TL;DR: These experiments on challenging monocular sequences show that robust cost modeling, joint and self-intersection constraints, and informed sampling are all essential for reliable monocular 3D motion estimation.
Abstract: We present a method for recovering three-dimensional (3D) human body motion from monocular video sequences based on a robust image matching metric, incorporation of joint limits and non-self-inters...

Journal ArticleDOI
TL;DR: A so-called true concurrency approach is followed, in which no global state and no global time are available; instead, only local states are used, in combination with a partial order model of time.
Abstract: In this paper, we consider the diagnosis of asynchronous discrete event systems. We follow a so-called true concurrency approach, in which no global state and no global time is available. Instead, we use only local states in combination with a partial order model of time. Our basic mathematical tool is that of net unfoldings originating from the Petri net research area. This study was motivated by the problem of event correlation in telecommunications network management.

Proceedings ArticleDOI
18 Jun 2003
TL;DR: A variational framework is proposed that incorporates a small set of good features for texture segmentation, based on the structure tensor and nonlinear diffusion, into a level set based unsupervised segmentation process that adaptively takes into account their estimated statistical information inside and outside the region to segment.
Abstract: We propose a novel and efficient approach for active unsupervised texture segmentation. First, we show how to extract a small set of good features for texture segmentation based on the structure tensor and nonlinear diffusion. Then, we propose a variational framework that incorporates these features in a level set based unsupervised segmentation process that adaptively takes into account their estimated statistical information inside and outside the region to segment. The approach has been tested on various textured images, and its performance compares favorably with recent studies.
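
The feature extraction step can be sketched as follows, with plain Gaussian smoothing standing in for the paper's nonlinear diffusion (so this is a simplification, not the authors' pipeline); the function name and parameter values are assumptions for the example.

```python
# Hedged sketch: per-pixel structure-tensor features for texture segmentation.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def texture_features(image, sigma=2.0):
    """Return per-pixel features (I, J11, J12, J22) from the structure tensor."""
    I = image.astype(float)
    Ix = sobel(I, axis=1)
    Iy = sobel(I, axis=0)
    # Structure tensor J = K_sigma * (grad I grad I^T), smoothed componentwise
    # (the paper smooths with nonlinear diffusion instead of a Gaussian).
    J11 = gaussian_filter(Ix * Ix, sigma)
    J12 = gaussian_filter(Ix * Iy, sigma)
    J22 = gaussian_filter(Iy * Iy, sigma)
    return np.stack([I, J11, J12, J22], axis=-1)

# These channels are then fed to the level-set evolution, which compares the
# estimated feature statistics inside and outside the evolving region.
```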

Book ChapterDOI
16 Sep 2003
TL;DR: The LogLog algorithm makes use of m "small bytes" of auxiliary memory in order to estimate in a single pass the number of distinct elements (the "cardinality") in a file, and it does so with an accuracy that is of the order of $1/\sqrt{m}$.
Abstract: Using an auxiliary memory smaller than the size of this abstract, the LogLog algorithm makes it possible to estimate in a single pass and within a few percent the number of different words in the whole of Shakespeare’s works. In general the LogLog algorithm makes use of m “small bytes” of auxiliary memory in order to estimate in a single pass the number of distinct elements (the “cardinality”) in a file, and it does so with an accuracy that is of the order of $1/\sqrt{m}$. The “small bytes” to be used in order to count cardinalities up to $N_{\max}$ comprise about $\log\log N_{\max}$ bits, so that cardinalities well in the range of billions can be determined using one or two kilobytes of memory only. The basic version of the LogLog algorithm is validated by a complete analysis. An optimized version, super-LogLog, is also engineered and tested on real-life data. The algorithm parallelizes optimally.
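
A minimal sketch of the basic LogLog estimator described above (the optimized super-LogLog variant is not reproduced); the hash choice, bucket count and the asymptotic bias-correction constant used below are assumptions for the example.

```python
# Hedged sketch of the basic LogLog cardinality estimator.
import hashlib

def loglog_cardinality(items, k=10):
    m = 1 << k                      # m = 2^k buckets, one "small byte" each
    width = 64 - k
    M = [0] * m
    for it in items:
        h = int.from_bytes(hashlib.sha1(str(it).encode()).digest()[:8], "big")
        bucket = h >> width                     # first k bits choose a bucket
        rest = h & ((1 << width) - 1)           # remaining bits
        rank = width - rest.bit_length() + 1 if rest else width + 1
        M[bucket] = max(M[bucket], rank)        # keep only the max rank
    # 0.39701 is the asymptotic bias-correction constant; the exact alpha_m
    # from the paper's analysis should be used for small m.
    return 0.39701 * m * 2 ** (sum(M) / m)

# Accuracy is of the order of 1/sqrt(m), i.e. a few percent for k = 10.
print(loglog_cardinality(range(100000)))        # roughly 100000
```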

Journal ArticleDOI
TL;DR: The notion of iconic feature based (IFB) algorithms, which lie between geometrical and standard intensity based algorithms, is introduced, and a new registration energy for IFB registration that generalizes some of the existing techniques is presented.

Journal ArticleDOI
TL;DR: A simple note detection algorithm is described that shows how one could use a harmonic matching pursuit to detect notes even in difficult situations, e.g., very different note durations, lots of reverberation, and overlapping notes.
Abstract: We introduce a dictionary of elementary waveforms, called harmonic atoms, that extends the Gabor dictionary and fits well the natural harmonic structures of audio signals. By modifying the "standard" matching pursuit, we define a new pursuit along with a fast algorithm, namely, the fast harmonic matching pursuit, to approximate N-dimensional audio signals with a linear combination of M harmonic atoms. Our algorithm has a computational complexity of O(MKN), where K is the number of partials in a given harmonic atom. The decomposition method is demonstrated on musical recordings, and we describe a simple note detection algorithm that shows how one could use a harmonic matching pursuit to detect notes even in difficult situations, e.g., very different note durations, lots of reverberation, and overlapping notes.
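
The underlying greedy loop is easiest to see on a plain matching pursuit. The sketch below is a generic illustration, not the paper's fast harmonic matching pursuit: it treats each dictionary column as a single atom instead of grouping K partials into one harmonic atom.

```python
# Hedged sketch: plain matching pursuit over a unit-norm dictionary.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """dictionary: (N, n_cols) array with unit-norm columns."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    picks = []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual          # correlate with every atom
        i = np.argmax(np.abs(corr))             # best-matching atom
        coef = corr[i]
        approx += coef * dictionary[:, i]       # add its contribution ...
        residual -= coef * dictionary[:, i]     # ... and remove it from residual
        picks.append((i, coef))
    return approx, picks

# Toy dictionary of unit-norm sinusoids (stand-ins for harmonic atoms).
N = 256
t = np.arange(N)
D = np.column_stack([np.cos(2 * np.pi * f * t / N) for f in range(1, 65)])
D /= np.linalg.norm(D, axis=0)
sig = 3 * D[:, 4] + 1.5 * D[:, 19]
approx, picks = matching_pursuit(sig, D, n_atoms=2)   # finds atoms 4 and 19
```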

Journal ArticleDOI
TL;DR: The tool is based on a qualitative simulation method that employs coarse-grained models of regulatory networks and is illustrated by a case study of the network of genes and interactions regulating the initiation of sporulation in Bacillus subtilis.
Abstract: Motivation: The study of genetic regulatory networks has received a major impetus from the recent development of experimental techniques allowing the measurement of patterns of gene expression in a massively parallel way. This experimental progress calls for the development of appropriate computer tools for the modeling and simulation of gene regulation processes. Results: We present Genetic Network Analyzer (GNA), a computer tool for the modeling and simulation of genetic regulatory networks. The tool is based on a qualitative simulation method that employs coarse-grained models of regulatory networks. The use of GNA is illustrated by a case study of the network of genes and interactions regulating the initiation of sporulation in Bacillus subtilis. Availability: GNA and the model of the sporulation network are available at http://www-helix.inrialpes.fr/gna.

Journal ArticleDOI
TL;DR: This paper defines a sequent calculus modulo that gives a proof-theoretic account of the combination of computations and deductions and gives a complete proof search method, called extended narrowing and resolution (ENAR), for theorem proving modulo such congruences.
Abstract: Deduction modulo is a way to remove computational arguments from proofs by reasoning modulo a congruence on propositions. Such a technique, issued from automated theorem proving, is of general interest because it permits one to separate computations and deductions in a clean way. The first contribution of this paper is to define a sequent calculus modulo that gives a proof-theoretic account of the combination of computations and deductions. The congruence on propositions is handled through rewrite rules and equational axioms. Rewrite rules apply to terms but also directly to atomic propositions. The second contribution is to give a complete proof search method, called extended narrowing and resolution (ENAR), for theorem proving modulo such congruences. The completeness of this method is proved with respect to provability in sequent calculus modulo. An important application is that higher-order logic can be presented as a theory in deduction modulo. Applying the ENAR method to this presentation of higher-order logic subsumes full higher-order resolution.

Journal ArticleDOI
TL;DR: A new deformable model based on non-linear elasticity, anisotropic behavior, and the finite element method is proposed; it is valid for large displacements, improves the realism of the deformations, and solves the problems related to the shortcomings of linear elasticity.
Abstract: In this paper, we describe the latest developments of the minimally invasive hepatic surgery simulator prototype developed at INRIA. A key problem with such a simulator is the physical modeling of soft tissues. We propose a new deformable model based on non-linear elasticity, anisotropic behavior, and the finite element method. This model is valid for large displacements, which means in particular that it is invariant with respect to rotations. This property improves the realism of the deformations and solves the problems related to the shortcomings of linear elasticity, which is only valid for small displacements. We also address the problem of volume variations by adding to our model incompressibility constraints. Finally, we demonstrate the relevance of this approach for the real-time simulation of laparoscopic surgical gestures on the liver.
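
For reference, one standard way to write a rotation-invariant, large-displacement elastic energy is the St Venant-Kirchhoff form below; it is given as a hedged illustration of why the nonlinear model is invariant under rotations, and the simulator's anisotropic and incompressibility terms are not reproduced here.

```latex
% St Venant-Kirchhoff elasticity (isotropic core only).
% For a displacement field u, the Green-Lagrange strain tensor is
E(u) \;=\; \tfrac{1}{2}\left(\nabla u + \nabla u^{\top} + \nabla u^{\top}\nabla u\right),
% and the strain-energy density, with Lame coefficients \lambda and \mu, is
W(u) \;=\; \frac{\lambda}{2}\,\bigl(\operatorname{tr} E(u)\bigr)^{2}
           \;+\; \mu\,\operatorname{tr}\!\bigl(E(u)^{2}\bigr).
% Linear elasticity drops the quadratic term \nabla u^{\top}\nabla u in E(u),
% which is exactly what makes it valid only for small displacements.
```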

Proceedings ArticleDOI
20 May 2003
TL;DR: A new algorithm, OPIC, is introduced that works on-line, uses far fewer resources, does not require storing the link matrix, and can be used to focus crawling on the most interesting pages.
Abstract: The computation of page importance in a huge dynamic graph has recently attracted a lot of attention because of the web. Page importance, or page rank, is defined as the fixpoint of a matrix equation. Previous algorithms compute it off-line and require the use of a lot of extra CPU as well as disk resources (e.g. to store, maintain and read the link matrix). We introduce a new algorithm, OPIC, that works on-line and uses far fewer resources. In particular, it does not require storing the link matrix. It is on-line in that it continuously refines its estimate of page importance while the web/graph is visited. Thus it can be used to focus crawling on the most interesting pages. We prove the correctness of OPIC. We present Adaptive OPIC, which also works on-line but adapts dynamically to changes of the web. A variant of this algorithm is now used by Xyleme. We report on experiments with synthetic data. In particular, we study the convergence and adaptiveness of the algorithms for various scheduling strategies for the pages to visit. We also report on experiments based on crawls of significant portions of the web.
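
The cash/history mechanism behind OPIC can be sketched in a few lines. The code below is a hedged simplification (fixed graph, greedy "richest page first" scheduling, no virtual page for dangling nodes), not the adaptive crawler described in the paper; the function name and toy graph are assumptions for the example.

```python
# Hedged sketch of an OPIC-style cash/history computation of page importance.
def opic(out_links, n_visits):
    nodes = list(out_links)
    cash = {u: 1.0 / len(nodes) for u in nodes}     # total cash stays 1
    history = {u: 0.0 for u in nodes}
    for _ in range(n_visits):
        u = max(nodes, key=lambda v: cash[v])       # visit the richest page
        history[u] += cash[u]                       # record what it held ...
        share = cash[u] / len(out_links[u])
        cash[u] = 0.0
        for v in out_links[u]:                      # ... and give it to children
            cash[v] += share
    total = sum(history[v] + cash[v] for v in nodes)
    return {v: (history[v] + cash[v]) / total for v in nodes}

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(opic(graph, n_visits=300))   # converges to the pages' relative importance
```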

Journal ArticleDOI
TL;DR: The aim of the present article is to review and summarize these formal, correct-by-construction, design transformations of system specifications (morphisms) that preserve the intended semantics and stated properties of the architecture under design.
Abstract: Rising complexity and performance of integrated circuits and systems, shortening time-to-market demands for electronic equipment, growing installed bases of intellectual property (IP), and requirements for adapting existing IP blocks with new services all stress high-level design as a prominent research topic and call for the development of appropriate methodological solutions. To this end, system design based on the so-called "synchronous hypothesis" consists of abstracting the nonfunctional implementation details of a system and lets one benefit from a focused reasoning on the logics behind the instants at which the system functionalities should be secured. With this point of view, synchronous design models and languages provide intuitive (ontological) models for integrated circuits. This affinity explains the ease of generating synchronous circuits and verifying their functionalities using compilers and related tools that implement this approach. In the relational mathematical model behind the design language SIGNAL, this affinity goes beyond the domain of purely synchronous circuits, and embraces the context of complex architectures consisting of synchronous circuits and desynchronization protocols: globally asynchronous and locally synchronous architectures (GALS). The unique features of the relational model behind SIGNAL are to provide the notion of polychrony: the capability to describe circuits and systems with several clocks; and to support refinement: the ability to assist and support system design from the early stages of requirement specification to the later stages of synthesis and deployment. The SIGNAL model provides a design methodology that forms a continuum from synchrony to asynchrony, from specification to implementation, from abstraction to concretization, from interfaces to implementations. SIGNAL gives the opportunity to seamlessly model circuits and devices at multiple levels of abstraction, by implementing mechanisms found in many hardware simulators, while reasoning within a simple and formally defined mathematical model. In the same manner, the flexibility inherent in the abstract notion of signal, handled in the synchronous-desynchronized design model of SIGNAL, invites and favors the design of correct-by-construction systems by means of well-defined transformations of system specifications (morphisms) that preserve the intended semantics and stated properties of the architecture under design. The aim of the present article is to review and summarize these formal, correct-by-construction, design transformations. Most of them are implemented in the POLYCHRONY tool-set, allowing for a mixed bottom–up and top–down design of an embedded hardware–software system using the SIGNAL design language.