
Showing papers in "Technique Et Science Informatiques in 2005"


Journal ArticleDOI
TL;DR: This paper presents different concept lattice-based supervised classification methods, describing the learning and classification principle of each method, their algorithms and complexity, and experimental comparison results obtained with other classification methods.
Abstract: Supervised classification is a two-step process. The first step (learning step) consists in building a model (or classifier) describing a predetermined set of data classes. In the second step (classification step), the model is used to predict the class label of previously unseen objects. In this paper we present different concept lattice-based supervised classification methods. We describe the learning and classification principle of each method, their algorithms and complexity. We also report experimental comparisons with other classification methods, as found in the literature. We discuss advantages and limitations of concept lattices for supervised classification.
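The two-step process (learn a model, then label unseen objects) can be sketched minimally; the toy context, object names and nearest-overlap rule below are invented for illustration and are not the lattice-based methods the paper compares:

```python
# Illustrative sketch (not the paper's algorithms): a binary context, the
# derivation operator giving a concept intent, and a simple attribute-overlap
# classification rule. All names and data are hypothetical.

def intent(objects, context):
    """Attributes shared by all the given objects (the derivation operator)."""
    return set.intersection(*(context[o] for o in objects)) if objects else set()

def classify(obj_attrs, training, labels):
    """Label an unseen object by the training object sharing most attributes."""
    best = max(training, key=lambda o: len(training[o] & obj_attrs))
    return labels[best]

context = {
    "o1": {"wings", "flies"},
    "o2": {"wings", "flies", "small"},
    "o3": {"fins", "swims"},
}
labels = {"o1": "bird", "o2": "bird", "o3": "fish"}

print(intent(["o1", "o2"], context))            # shared attributes of o1, o2
print(classify({"wings", "small"}, context, labels))
```

A real lattice-based classifier would navigate the concept lattice rather than compare raw attribute sets, but the learning/classification split is the same.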

23 citations


Journal ArticleDOI
TL;DR: This article presents a library dealing with geometry in Coq, dedicated to high-school teaching, and emphasizes the use of a graphical interface and a drawing tool which ease access to formal statements.
Abstract: Teaching high-school mathematics using a general theorem prover is becoming a reachable goal for the near future. In this article, we present a library dealing with geometry in Coq. This library is dedicated to high-school teaching. We emphasize the use of a graphical interface and a drawing tool which ease access to formal statements. We present some significant examples of statements with figures and proofs developed with Coq. Then we discuss the difficulties encountered in this work.

22 citations


Journal ArticleDOI
TL;DR: A utilization-based test for restricted migration on uniform multiprocessors is presented, where each processor schedules jobs using the earliest deadline first (EDF) scheduling algorithm.
Abstract: Restricted migration of periodic and sporadic tasks on uniform heterogeneous multiprocessors is considered. Migration between different processors of a multiprocessor causes overhead that may be prohibitively high for real-time systems, where accurate timing is essential. Nonetheless, periodic tasks, which generate jobs at regular intervals, may be able to migrate without causing overhead if the migration can be controlled. In particular, if consecutive jobs of the same task do not share any data, then they may be allowed to execute on different processors without incurring migration overhead; i.e., restricted migration may be permitted. On uniform multiprocessors, each processor has an associated speed. A job executing on a processor of speed s for t units of time will perform s·t units of work. A utilization-based test for restricted migration on uniform multiprocessors is presented, where each processor schedules jobs using the earliest deadline first (EDF) scheduling algorithm.
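The work model can be sketched directly from the abstract: a job on a processor of speed s for t time units completes s·t units of work. The EDF selection below is a generic illustration, not the paper's utilization-based schedulability test:

```python
# Sketch of the uniform-multiprocessor work model (from the abstract) plus a
# generic EDF job selection; the job tuples are invented for illustration.

def work(speed, t):
    """Work completed by a job on a processor of given speed over t time units."""
    return speed * t

def edf_pick(jobs):
    """jobs: list of (deadline, remaining_work); EDF runs the earliest deadline."""
    return min(jobs, key=lambda j: j[0])

print(work(2.0, 3.0))                 # a speed-2 processor does 6.0 units in 3
print(edf_pick([(10, 4.0), (5, 2.0)]))  # the deadline-5 job runs first
```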

13 citations


Journal ArticleDOI
TL;DR: The NAC architecture presented in this article was designed and implemented based on these technologies, focusing on adaptation processing, on environment description models, on negotiation protocols, and on content transformations.
Abstract: The web is evolving towards richer contents and diverse media that are accessed with different devices through multiple kinds of networks. This heterogeneous, mobile and changing environment requires that multimedia information delivered by servers be adapted to the actual conditions of use. For that purpose, a number of methods, languages, formats and protocols are being developed, especially by the W3C. The NAC architecture presented in this article was designed and implemented based on these technologies, focusing on adaptation processing, on environment description models, on negotiation protocols, and on content transformations. KEYWORDS: world wide web, multimedia, mobile terminals, adaptation, context description, transformation.

11 citations


Journal ArticleDOI
TL;DR: To tune the granularity of tasks at runtime, an original algorithmic scheme based on the coupling of two algorithms, one sequential and the other parallel and fine-grained, is introduced; it is especially suited to applications for which parallelization drastically increases the number of operations or induces a loss of performance despite a decreasing execution time.
Abstract: To tune the granularity of tasks at runtime, we introduce an original algorithmic scheme. It is based on the coupling of two algorithms, one sequential, the other parallel and fine-grained. However, parallelism is generated only when some processor is idle. Then, when executing the program on a limited number of resources, the overhead related to parallelism management is bounded without restricting the potential parallelism. The scheme is especially suited to applications for which parallelization drastically increases the number of operations or induces a loss of performance, despite a decreasing execution time. This scheme is applied to the parallelization of two applications: gzip (Gailly, 2003), which implements Lempel-Ziv compression, a P-complete problem; and PL (ProBayes, n.d.), a probabilistic inference engine.
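The idea of generating parallelism only when a processor is idle can be sketched as follows. This is an illustrative reconstruction, not the paper's scheme: a divide-and-conquer sum falls back to the cheap sequential algorithm whenever no "idle processor" token is available:

```python
# Illustrative sketch: couple a sequential algorithm with a parallel one,
# splitting work only when an idle-processor token can be acquired, so
# parallelism overhead stays bounded when resources are busy.
import threading

idle = threading.Semaphore(2)   # pretend at most 2 processors may become idle

def seq_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

def adaptive_sum(xs):
    if len(xs) < 4 or not idle.acquire(blocking=False):
        return seq_sum(xs)      # no idle processor: sequential path, no overhead
    try:
        mid = len(xs) // 2
        result = {}
        t = threading.Thread(
            target=lambda: result.setdefault("left", adaptive_sum(xs[:mid])))
        t.start()
        right = adaptive_sum(xs[mid:])
        t.join()
        return result["left"] + right
    finally:
        idle.release()

print(adaptive_sum(list(range(100))))  # 4950 either way
```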

11 citations


Journal ArticleDOI
TL;DR: Fuzzy cognitive maps allow the specification, control, internal simulation and dynamic adaptation of the perceptive behavior of an animat, and their parallel and asynchronous execution leads to the proposal of a behavioral architecture for virtual autonomous entities.
Abstract: We are interested here in the perceptive behaviors of autonomous virtual actors. These behaviors must determine their responses, not only according to external stimuli, but also according to internal emotions. We propose to describe such emotional behaviors using fuzzy cognitive maps, where these internal states are explicitly represented. We detail how fuzzy cognitive maps allow the specification, control, internal simulation and dynamic adaptation of the perceptive behavior of an animat. Their parallel and asynchronous execution leads to the proposal of a behavioral architecture for virtual autonomous entities. All of this is illustrated by the academic example of a non-trivial interactive fiction.
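A single update step of a fuzzy cognitive map can be sketched as a squashed weighted sum over concept activations. The concepts ("fear", "flee", "explore") and weights below are hypothetical, not taken from the paper:

```python
# Minimal fuzzy cognitive map step (illustrative): each concept's new
# activation is a sigmoid of the weighted influences of the others, letting
# an internal emotion modulate perceptive behavior.
import math

def fcm_step(activations, weights):
    """activations: list of floats; weights[i][j]: influence of concept i on j."""
    n = len(activations)
    new = []
    for j in range(n):
        s = sum(activations[i] * weights[i][j] for i in range(n))
        new.append(1.0 / (1.0 + math.exp(-s)))   # sigmoid squashing
    return new

# hypothetical concepts: [fear, flee, explore]
W = [[0.0, 0.9, -0.8],   # fear excites fleeing, inhibits exploring
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
state = fcm_step([1.0, 0.0, 0.0], W)
print(state)   # "flee" activation rises, "explore" drops when fear is active
```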

9 citations


Journal ArticleDOI
TL;DR: The hardware aspects of reconfigurable computing machines are explored, significant architectures implementing the main concepts are presented, and the different reconfiguration paradigms are discussed.
Abstract: Due to its potential to greatly accelerate a wide variety of applications, reconfigurable computing has become the subject of a great deal of research. Its key feature is the ability to perform computations in hardware to increase performance, while retaining much of the flexibility of a software solution. In this article, we explore the hardware aspects of reconfigurable computing machines. After some definitions, the design space is explored under the aspect of flexibility. Significant architectures implementing the main concepts are presented, and the different reconfiguration paradigms are discussed.

8 citations


Journal ArticleDOI
TL;DR: A strong cryptography-based architecture with operating system support is presented to reach such security levels without reducing performance; a cache-line cipher and a memory verifier based on a Merkle-tree hash function are added to the internal cache.
Abstract: Computers are widely used and interconnected but are not as secure as we could expect. For example, a secure execution cannot even be achieved or proved against a software attacker (e.g., the system administrator) or a hardware attacker (e.g., a logic analyzer on the computer buses). In this article a strong cryptography-based architecture with operating system support is presented to reach such security levels without reducing performance. A cache-line cipher and a memory verifier based on a Merkle-tree hash function are added to the internal cache in order to resist various attacks, including replay attacks. Then the impact on the operating system and some applications is described.
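The Merkle-tree idea behind the memory verifier can be sketched in a few lines. This is an illustrative software model, not the paper's hardware design: hash cache-line-sized blocks, combine hashes pairwise up to a root kept on-chip; any tampering or replay of a block changes the root:

```python
# Sketch of Merkle-tree memory verification (illustrative): only the root
# needs trusted storage; a modified block yields a different root.
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate an odd tail node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

mem = [b"line0", b"line1", b"line2", b"line3"]   # pretend cache lines
root = merkle_root(mem)
tampered = merkle_root([b"line0", b"lineX", b"line2", b"line3"])
print(root != tampered)   # True: the modification is detected at the root
```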

5 citations


Journal ArticleDOI
TL;DR: This work defines a SAN formalism for discrete-time models and presents an algorithm to generate the equivalent Markov chain, building on Stochastic Automata Networks.
Abstract: Markov chains facilitate the performance analysis of dynamic systems in many areas of application. They are often used through a high-level formalism. Several of these are currently in use, especially for continuous-time systems, and we work on Stochastic Automata Networks (SAN). Discrete-time systems are more difficult to model, because several events can occur during the same time slot (conflicting events). We define a SAN formalism for discrete-time models, and we present an algorithm to generate the equivalent Markov chain.
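For the simplest case of independent automata (no synchronization), the global discrete-time chain is the tensor (Kronecker) product of the local transition matrices; the conflicting synchronized events that motivate the paper need more machinery than this sketch:

```python
# Illustrative sketch: the global Markov chain of two independent automata is
# the Kronecker product of their local transition matrices. The matrices
# below are invented 2-state examples.

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

P1 = [[0.5, 0.5], [0.2, 0.8]]   # automaton 1
P2 = [[0.9, 0.1], [0.4, 0.6]]   # automaton 2
P = kron(P1, P2)                # 4x4 chain over the product state space
print([round(sum(row), 6) for row in P])   # each row still sums to 1
```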

4 citations


Journal ArticleDOI
TL;DR: This article presents a library named Taktuk for the deployment of applications on large clusters (thousands of nodes) and shows that the behavior of the standard communication tools used can be modeled as a classical communication scheme, from which a theoretical algorithm for optimal deployment is deduced.
Abstract: This article presents a library named Taktuk for the deployment of applications on large clusters (thousands of nodes). This library is designed for the development of interactive tools and thus has to complete a whole deployment in the shortest time. With this objective in mind, we show that the behavior of the standard communication tools that we use can be modeled as a classical communication scheme, and we deduce from this a theoretical algorithm for optimal deployment. We then present our implementation choices, which take into account the possible heterogeneity of the execution platform and the uncertainty about the value of some parameters. The evaluation of our tool highlights its near optimality and its adaptability, which are essentially due to the work-stealing algorithm at the heart of our system.
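The benefit of tree-structured deployment can be seen with a back-of-the-envelope model. This is not Taktuk's actual algorithm, just the idealized case where every already-deployed node launches one new node per round:

```python
# Illustrative model: with recursive deployment the number of reached nodes
# doubles each round, so depth grows logarithmically instead of linearly.

def deployment_depth(n):
    """Rounds needed to reach n nodes when the deployed set doubles per round."""
    rounds, deployed = 0, 1
    while deployed < n:
        deployed *= 2
        rounds += 1
    return rounds

print(deployment_depth(1000))   # 10 rounds, versus 999 sequential launches
```

Heterogeneous node speeds and uncertain launch times break this ideal doubling, which is where the work-stealing mechanism the abstract mentions comes in.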

4 citations


Journal ArticleDOI
TL;DR: The objective of this document is to survey and classify the techniques that enable computer systems to take terminal mobility into account, a concern that goes beyond the scope of adaptive systems.
Abstract: As a result of advances in embedded computing and wireless communication technologies, a new kind of terminal has recently appeared. These terminals have significant computing power and can exchange information directly with their peers or through a relay network. Since their communication range is limited, the movement of these terminals plays an immediate role in how their exchanges operate. Taking this mobility into account therefore appears to be a promising direction to develop when designing systems built on such terminals. The objective of this document is to survey and classify the techniques that enable computer systems to take this mobility into account. We show that this concern goes beyond the scope of adaptive systems.

Journal ArticleDOI
TL;DR: Using the Coq proof assistant with DESS students (French postgraduate level) led to fruitful interactions with the development of the system.
Abstract: This article reports on the use of the Coq proof assistant with DESS students (French postgraduate level). First, in a course on programming language semantics, Coq helps students grasp notions often considered abstract by letting them relate these notions to more concrete terms. Second, a programming project uses Coq to tackle larger problems, thereby showing Coq as a genuine software engineering tool. Finally, carrying out proofs in the Focal environment led to fruitful interactions with the development of that system.

Journal ArticleDOI
TL;DR: The aim of the studies presented in this paper is to show and understand the different behaviors and the system performance variations due to the use of different file access modes for intensive reading and writing of long streams of data on modern machines with recent hard disk drives.
Abstract: The aim of the studies presented in this paper is to show and understand the different behaviors and the system performance variations due to the use of different file access modes (defined by Microsoft) for intensive reading and writing of long streams of data on modern machines with recent hard disk drives. In this paper, we shall deal with continuous accesses of sequentially stored blocks (read and write) to one or many files. One can think of accessing a video stream (video on demand) as an application example. We will also show a methodology to analyze and assess file accesses in order to identify a given system's parameters, deduce the storage system behavior, and determine the best scheme to choose to optimize the performance of the read/write operations according to a set of parameters such as the access mode, the request block size, the file size, etc.

Journal ArticleDOI
TL;DR: This work proposes a synthesis of decidability and undecidability results for secrecy and authentication properties, and describes which tools may be used to verify the protocols.
Abstract: A cryptographic protocol is a description of message exchanges on a network. The verification of such programs has become crucial. We propose here a synthesis of decidability and undecidability results for secrecy and authentication properties. We consider several restrictions: a bound on the number of sessions, on the size of messages, on the number of copies at each transition, etc. Moreover, we describe which tools may be used to verify the protocols.

Journal ArticleDOI
TL;DR: Relatively simple mechanisms, based on retransmissions and/or short error-correcting codes, achieve very good performance in this context.
Abstract: This article presents a set of packet-loss traces collected during 802.11b multicast transmissions carried out under varying reception conditions (mobile and fixed receivers). An original approach is then presented, which consists in replaying certain error-control mechanisms a posteriori over these observations. This approach makes it possible to evaluate the performance of these mechanisms as a function of their parameters and of certain channel properties. In particular, it is shown that relatively simple mechanisms, based on retransmissions and/or short error-correcting codes, achieve very good performance in this context.
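Replaying an error-control mechanism over a loss model can be sketched as follows. The channel, loss rate and retry count below are invented for illustration; the paper replays mechanisms over real measured traces:

```python
# Illustrative replay of a simple error-control mechanism (retransmission)
# over a synthetic loss channel: one retry recovers any packet lost once but
# received on the second attempt.
import random

def recovered(trace, retries=1):
    """trace: per-packet loss probability; returns the fraction delivered
    after the given number of retransmissions."""
    rng = random.Random(42)   # fixed seed for a reproducible experiment
    ok = 0
    for p in trace:
        attempts = 1 + retries
        if any(rng.random() >= p for _ in range(attempts)):
            ok += 1
    return ok / len(trace)

trace = [0.3] * 1000          # hypothetical channel with 30% loss
print(recovered(trace, retries=0))   # close to 0.70 without retransmission
print(recovered(trace, retries=1))   # close to 0.91 with one retransmission
```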

Journal ArticleDOI
TL;DR: It is proved that there is no heuristic with performance guarantee smaller than 6/5 for the minimization of the length of the schedule, and a polynomial-time algorithm is developed for the case where the length of the schedule is three.
Abstract: We study the problem of minimizing the makespan for the multiprocessor scheduling problem in the presence of hierarchical communications. We consider the problem in which all tasks in the precedence graph have unit execution time, and the multiprocessor machine consists of an unrestricted number of clusters with two identical processors each. The communication delay between two adjacent tasks i and j executed on two different clusters is equal to two units of time, whereas if i and j are scheduled on two different processors of the same cluster it is equal to one unit of time. In this context, we prove that there is no heuristic with performance guarantee smaller than 6/5 (resp. 9/8) for the minimization of the length of the schedule (resp. the sum of the completion times). Moreover, we develop a polynomial-time algorithm for the case where the length of the schedule is three.

Journal ArticleDOI
TL;DR: The environment allows the programmer to measure the number of CPU cycles used by a given function or the precise scheduling of the threads of the application.
Abstract: Nowadays, observing and understanding the performance of a multithreaded application is nontrivial, especially within a complex thread environment (multilevel scheduling). Thanks to our environment, the run of a multithreaded application can be precisely analyzed. In particular, our environment allows the programmer to measure the number of CPU cycles used by a given function, or the precise scheduling of the threads of the application.

Journal ArticleDOI
TL;DR: A characterization of several influential factors, such as user activity or the distribution and size of files, has been carried out; in particular, the characteristics of requests in peer-to-peer file-sharing systems and the computing power of the volunteers of peer-to-peer computing systems are examined.
Abstract: In order to evaluate peer-to-peer systems, it has become necessary to understand the external influences acting on them. In this article we study some of these influences from the client point of view, in contrast to the usual server-side one. A characterization of several of these influential factors, such as user activity or the distribution and size of files, has been carried out. In particular, the characteristics of requests in peer-to-peer file-sharing systems and the computing power of the volunteers of peer-to-peer computing systems are examined. Finally, we explain the methodology followed to obtain the availability of the volunteer computers in peer-to-peer systems.

Journal ArticleDOI
TL;DR: This work considers the implementation of 16-bit floating point instructions on a Pentium 4 and a PowerPC G5 for image and media processing and shows that significant speed-ups are obtained compared to 32-bit FP versions.
Abstract: We consider the implementation of 16-bit floating point instructions on a Pentium 4 and a PowerPC G5 for image and media processing. By measuring the execution time of benchmarks with these new simulated instructions, we show that significant speed-ups are obtained compared to 32-bit FP versions. For image processing, the speed-up comes both from doubling the number of operations per SIMD instruction and from the better cache behavior with byte storage. For data stream processing with arrays of structures, the speed-up comes from the wider SIMD instructions.

Journal ArticleDOI
TL;DR: A parallel application for seismic ray-tracing and its exploitation on an experimental computational grid built over the Renater network is presented and the gain when using Renater 3 instead of Renater 2 suggests that exploiting similar parallel applications on such grids is conceivable.
Abstract: Seismic tomography makes it possible to model the internal structure of the Earth. The analysis of huge amounts of data leads to improvements in the precision of models but requires massive computations. We present a parallel application for seismic ray-tracing and its exploitation on an experimental computational grid built over the Renater network. The application's first phase is a massively parallel ray-tracing computation in an Earth mesh, followed by an all-to-all exchange of information between participating processors. We show how the application performance evolves when the underlying network changes, and we compare this performance with results obtained on a parallel computer and on a cluster. The gain when using Renater 3 instead of Renater 2 suggests that exploiting similar parallel applications on such grids is conceivable.

Journal ArticleDOI
TL;DR: A method and tools for the design of hardware-software interfaces for global memories are presented; these interfaces correspond to flexible hardware wrappers connecting the memory to the communication network and to software drivers adapting the application software to the target processors.
Abstract: Thanks to advances in semiconductor technology, what required several chips or boards a decade ago can now be integrated on a single chip. In the near future, this evolution will allow the integration of more than 100 Mbits of DRAM and 200 million logic gates on the same chip. According to forecasts by the semiconductor industry association and the ITRS, embedded memories will continue to dominate the area of systems-on-chip in the coming years, reaching about 94% of total area by 2014. Design based on the reuse of memory IP emerged to bridge the gap between this large integration capacity and low memory design productivity. This solution could be ideal for a homogeneous architecture in which all components share the same interfaces and communication protocols, which is not the case for systems-on-chip. To make this solution effective, the designer must devote considerable effort to specifying and implementing the hardware-software interfaces. Given time-to-market pressure, automating the design of these adaptation interfaces has become crucial. The contribution of this thesis is the definition of a systematic method for designing hardware-software interfaces specific to global memories. These interfaces correspond to flexible hardware wrappers connecting the memory to the communication network, and to access drivers adapting the application software to the target processors. Experiments on image processing applications showed a significant reduction in design time and demonstrated the flexibility of these interfaces as well as their low overhead in area and communication.

Journal ArticleDOI
TL;DR: This paper proposes to integrate digital terminal mobility into cache management, storing data not only in the nearest cache but also on neighboring caches, depending on clients' movements.
Abstract: Digital terminal mobility is becoming more and more effective. With this in mind, we have been led to question existing applications, systems and networks while integrating this mobility into architectural models. Moreover, mobile equipment performance is increasing and processing video sequences is possible with good-quality restitution. Given the large size of video sequences, delocalized management is necessary, but to achieve this, new storage strategies for video data must be devised. Indeed, storing data on a static cache located on a wired network to serve a client leaving the area proves to be useless and will penalize system performance. In this paper, we propose to integrate this mobility into cache management, storing data not only in the nearest cache but also on neighboring caches, depending on clients' movements.

Journal ArticleDOI
TL;DR: This document evaluates the end-to-end communication performance with UDP and TCP protocols, and proposes a soft-handover mechanism for IP based protocols.
Abstract: Handover is the process that allows a mobile node to roam between two access points. This roaming implies a momentary disconnection of the mobile node and disrupts ongoing communications. Several handover mechanisms have been proposed for IP-based protocols. In this document, we present comparisons and analyses of these mechanisms. We evaluate the end-to-end communication performance with the UDP and TCP protocols. Finally, we propose a soft-handover mechanism.

Journal ArticleDOI
TL;DR: This article proposes the Saturne measurement platform, which actively measures the end-to-end one-way delay of packets between two points, and shows how this platform was used to validate the service policy set up in the VTHD network.
Abstract: Quality of service in networks is a major concern, both for ISPs and for users. Guaranteeing quality of service to users is not trivial and has a cost that is passed on to the customer. Billing customers according to the service requested or obtained is only possible with tools that measure the quality of service offered by the network. In this article, we propose the Saturne measurement platform, which actively measures the one-way delay (OWD) of packets between two points of the network. The goal of these measurements is to derive metrics that finely characterize network behavior, in particular per class of service, and to apply the results to network dimensioning and optimization. The measurements rely on a global time reference obtained from GPS equipment installed at each measurement site. The last part of the article shows how this measurement platform was used to validate the dimensioning and the service policy deployed in the VTHD differentiated services network.

Journal ArticleDOI
TL;DR: Unlike many existing protocols, this protocol ensures the strong consistency of the recovery lines it forms and allows a fully asynchronous restart of the distributed system after a failure.
Abstract: Message-induced checkpointing protocols seem to be the approach best suited to applications running on heterogeneous systems with a low failure rate. However, these protocols assume that a checkpoint can always be taken preemptively, before a message is processed. We therefore propose, within an active-object model, a message-induced checkpointing fault-tolerance protocol adapted to non-preemptive processes. Unlike many existing protocols, it ensures the strong consistency of the recovery lines it forms and allows a fully asynchronous restart of the distributed system after a failure.

Journal ArticleDOI
TL;DR: A type system for heterogeneous topological collections and transformations is presented, which uses a set-based subtype relation and mixes static type inference and dynamic type tests.
Abstract: Topological collections are a means to view many data structures in a single framework. They can be handled in a programming language with functions defined by pattern matching, called transformations. Collections and transformations are very useful in biological simulations, where the collections used are often heterogeneous, meaning that they contain values of different types. We present here a type system for heterogeneous topological collections and transformations, which uses a set-based subtype relation and mixes static type inference and dynamic type tests.

Journal ArticleDOI
TL;DR: The problem of data routing in telecommunications networks is considered using a path planning technique that takes Quality of Service (QoS) constraints into account; a QoS parameter for network access control based on user profile, denoted CARP, guarantees fast and secure access for privileged network users.
Abstract: In this paper, we consider the problem of data routing in telecommunications networks using a path planning technique, taking into account Quality of Service (QoS) constraints. An adaptive routing genetic algorithm generates several optimal routes for a given destination. These routes are determined by the minimization of a function that depends on the number of hops, the available bandwidth, the loss rate of the network, and the transmission time a packet takes to go from a source node to a destination in a routing process. A QoS parameter for network access control based on user profile, denoted CARP, guarantees fast and secure access for privileged network users. The implementation of CARP is based on data coding, merging and encrypting, and on voice signature identification. Simulation results, which illustrate the performance of the adaptive planner, are presented.
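A QoS cost function of the kind the abstract describes can be sketched as a weighted combination of hop count, bandwidth, loss rate and delay. The weights, units and routes below are invented; the paper's genetic algorithm would minimize such a function over candidate routes:

```python
# Hypothetical QoS route cost in the spirit of the abstract: fewer hops, more
# bandwidth, less loss and less delay all lower the score. Weights are
# illustrative, not the paper's.

def route_cost(hops, bandwidth_mbps, loss_rate, delay_ms,
               w=(1.0, 50.0, 100.0, 0.1)):
    w_h, w_b, w_l, w_d = w
    return (w_h * hops
            + w_b / max(bandwidth_mbps, 1e-9)   # less bandwidth -> higher cost
            + w_l * loss_rate
            + w_d * delay_ms)

good = route_cost(hops=3, bandwidth_mbps=100, loss_rate=0.01, delay_ms=20)
bad = route_cost(hops=6, bandwidth_mbps=10, loss_rate=0.05, delay_ms=80)
print(good < bad)   # True: the GA would prefer the first route
```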

Journal ArticleDOI
TL;DR: This article formalizes the main problem of component-based design and proposes an automatic correction method (called delay correction) to solve it, along with two algorithms that compute solutions optimal in latency and area.
Abstract: The principal problem of component-based design is that the behavior of the RTL model may be incorrect. This article presents a formalization of the problem and proposes an automatic correction method (called delay correction) to solve it. We propose two algorithms that compute solutions optimal in latency and area. The effectiveness of the approach and the optimality of the proposed solutions are mathematically proven.

Journal ArticleDOI
TL;DR: This work proposes a new algorithm that sets the RED parameters and evaluates it by extensive simulations, showing that the algorithm can stabilize the queue and achieve a more predictable queue size without substantially increasing the loss rate.
Abstract: Our work focuses on an adaptive approach to RED that does not require any hypothesis on the type of traffic and thus diminishes its dependency on parameters such as bandwidth, round-trip time or the number of connections. We start from a recently proposed adaptive approach, ARED, which performs a constant tuning of the RED parameters according to the queue load. Our goal is to find a simple extension to ARED improving the predictability of performance measures like queueing delay and delay jitter without sacrificing the loss rate. We propose a new algorithm that sets the RED parameters and evaluate it by extensive simulations. Our results show that our algorithm can stabilize the queue and achieve a more predictable queue size without substantially increasing the loss rate. Finally, it also keeps the queue size away from buffer overflow and buffer underflow, independently of the number of connections.
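The kind of parameter tuning ARED performs can be sketched as follows. The constants and target band are illustrative, not the values of ARED or of the paper's extension: the maximum drop probability is nudged up when the average queue sits above the target band and decayed when it falls below:

```python
# Sketch in the spirit of ARED's tuning rule (illustrative constants):
# additive increase / multiplicative decrease of RED's max_p, steering the
# average queue toward a target band for a more predictable queue size.

def adapt_max_p(max_p, avg_queue, target_lo, target_hi,
                alpha=0.01, beta=0.9):
    if avg_queue > target_hi and max_p < 0.5:
        return min(0.5, max_p + alpha)    # queue too long: drop more aggressively
    if avg_queue < target_lo and max_p > 0.01:
        return max(0.01, max_p * beta)    # queue too short: back off
    return max_p

p = 0.1
for _ in range(5):   # queue persistently above the target band
    p = adapt_max_p(p, avg_queue=80, target_lo=40, target_hi=60)
print(round(p, 3))   # 0.15: drop probability ramped up over 5 intervals
```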

Journal ArticleDOI
TL;DR: It is confirmed that the audio quality depends not only on the number of FEC flows and the utility function associated to the quantity of information received, but also on the traffic conditions.
Abstract: The aim of this paper is to study the audio quality offered by a simple forward error correction (FEC) code used in audio applications like Freephone or Rat. This coding technique consists in adding to every audio packet redundant information concerning a preceding packet belonging to the same audio flow. Our study confirms that the audio quality depends not only on the number of FEC flows and the utility function associated with the quantity of information received, but also on the traffic conditions. Indeed, no improvement in audio quality can be obtained for smooth traffic, whereas a marginal improvement can be observed for bursty traffic. A significant increase in audio quality is reached for heavier bursty traffic. We also show that increasing the offset between the original audio packet and the packet bearing its redundancy does not significantly improve the audio quality.
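The piggyback FEC scheme the abstract describes can be sketched directly: packet n carries a redundant copy of packet n - offset, so an isolated loss is recovered from a later packet, while a burst longer than the offset is not (names and data below are illustrative):

```python
# Sketch of the piggyback FEC from the abstract: each packet carries
# redundancy for the packet `offset` positions earlier.

def receive(sent, lost, offset=1):
    """sent: list of payloads; lost: set of lost packet indices.
    Returns the set of indices recoverable at the receiver."""
    got = set()
    for i in range(len(sent)):
        if i not in lost:
            got.add(i)                     # primary payload received
            if i - offset >= 0:
                got.add(i - offset)        # piggybacked redundancy recovered
    return got

sent = ["a", "b", "c", "d", "e"]
print(sorted(receive(sent, lost={2})))      # isolated loss: fully recovered
print(sorted(receive(sent, lost={2, 3})))   # a 2-packet burst beats offset=1
```

This also shows why increasing the offset trades burst resilience against recovery delay, the dimension the study evaluates.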