
Showing papers by "Orange S.A." published in 2004


Journal ArticleDOI
Christian Licoppe
TL;DR: Drawing on empirical studies of the uses of the home telephone, the mobile phone, and mobile text messaging in France, the author discusses how this particular repertoire of ‘connected’ relationships has gradually crystallized as these technologies have become widespread and as each additional communication resource has been made available to users.
Abstract: The aim of this research is to understand how the transformation of the communication technoscape allows for the development of particular patterns in the construction of social bonds. It provides evidence for the development of a ‘connected’ management of relationships, in which the (physically) absent party gains presence through the multiplication of mediated communication gestures on both sides, up to the point where copresent interactions and mediated distant exchanges seem woven into a single, seamless web. After reviewing some of the current social-science research, I rely on empirical studies of the uses of the home telephone, the mobile phone, and mobile text messaging in France to discuss how this particular repertoire of ‘connected’ relationships has gradually crystallized as these technologies have become widespread and as each additional communication resource has been made available to users. I also describe how such a ‘connected’ mode coexists with a previous way of managing ‘mediated’ relationships.

722 citations


Journal ArticleDOI
TL;DR: It is shown that an efficient face detection system does not require any costly local preprocessing before classification of image areas; the proposed scheme provides a very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases.
Abstract: In this paper, we present a novel face detection approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns, rotated up to ±20 degrees in the image plane and turned up to ±60 degrees, in complex real world images. The proposed system automatically synthesizes simple problem-specific feature extractors from a training set of face and nonface patterns, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the face pattern to analyze. The face detection procedure acts like a pipeline of simple convolution and subsampling modules that treat the raw input image as a whole. We therefore show that an efficient face detection system does not require any costly local preprocessing before classification of image areas. The proposed scheme provides a very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases. We present extensive experimental results illustrating the efficiency of the proposed approach on difficult test sets and including an in-depth sensitivity analysis with respect to the degrees of variability of the face patterns.

610 citations
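Not from the paper itself, but for readers who want to see the idea in code: below is a minimal PyTorch sketch of a convolution-plus-subsampling pipeline applied to a whole image, in the spirit of the architecture described above. The layer sizes, activations and class name are illustrative assumptions, not the paper's exact network.

```python
# Illustrative sketch only: a small convolution + subsampling pipeline that
# scans a whole image and outputs a face/non-face score map (layer sizes are
# assumptions, not the paper's exact architecture).
import torch
import torch.nn as nn

class ConvFaceMap(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=5), nn.Tanh(),   # learned feature extractors
            nn.AvgPool2d(2),                             # subsampling
            nn.Conv2d(4, 14, kernel_size=3), nn.Tanh(),
            nn.AvgPool2d(2),
        )
        # Convolutional classification layers act as a sliding classifier over
        # the whole image, so no per-window preprocessing or cropping is needed.
        self.classifier = nn.Sequential(
            nn.Conv2d(14, 14, kernel_size=6), nn.Tanh(),
            nn.Conv2d(14, 1, kernel_size=1), nn.Tanh(),
        )

    def forward(self, image):
        # image: (N, 1, H, W) grayscale; output: coarse face score map
        return self.classifier(self.features(image))

scores = ConvFaceMap()(torch.randn(1, 1, 128, 128))
print(scores.shape)  # one score per receptive-field position
```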


Book ChapterDOI
24 May 2004
TL;DR: Fractal, as presented in this paper, is a hierarchical and reflective component model with sharing, in which components can be endowed with arbitrary reflective capabilities, from black boxes to components that allow fine-grained manipulation of their internal structure.
Abstract: This paper presents Fractal, a hierarchical and reflective component model with sharing. Components in this model can be endowed with arbitrary reflective capabilities, from black-boxes to components that allow a fine-grained manipulation of their internal structure. The paper describes Julia, a Java implementation of the model, a small but efficient run-time framework, which relies on a combination of interceptors and mixins for the programming of reflective features of components. The paper presents a qualitative and quantitative evaluation of this implementation, showing that component-based programming in Fractal can be made very efficient.

307 citations


Journal ArticleDOI
TL;DR: A task duplication-based scheduling algorithm for network of heterogeneous systems (TANH), with complexity O(V^2), which provides optimal results for applications represented by directed acyclic graphs (DAGs), provided a simple set of conditions on task computation and network communication time could be satisfied.
Abstract: Optimal scheduling of parallel tasks with some precedence relationship onto a parallel machine is known to be NP-complete. The complexity of the problem increases when task scheduling is to be done in a heterogeneous environment, where the processors in the network may not be identical and take different amounts of time to execute the same task. We introduce a task duplication-based scheduling algorithm for network of heterogeneous systems (TANH), with complexity O(V^2), which provides optimal results for applications represented by directed acyclic graphs (DAGs), provided a simple set of conditions on task computation and network communication time is satisfied. The performance of the algorithm is illustrated by comparing the scheduling time with the existing "best imaginary level scheduling (BIL)" scheme for heterogeneous systems. The scalability for a higher or lower number of processors, as per their availability, is also discussed. The algorithm is shown to provide a substantial improvement over existing work on task duplication-based scheduling (TDS).

283 citations
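As a point of reference only, here is a minimal sketch of the scheduling setting the abstract describes: a greedy earliest-finish-time list scheduler for a DAG on heterogeneous processors. This is a generic baseline, not the TANH algorithm, and the task graph, costs and communication times are made-up values.

```python
# Minimal sketch of the setting (not the TANH algorithm): a greedy
# earliest-finish-time list scheduler for a DAG on heterogeneous processors.
from collections import defaultdict

def list_schedule(tasks, deps, cost, comm, n_proc):
    """tasks: topologically ordered task ids; deps[t]: predecessors of t;
    cost[t][p]: execution time of t on processor p; comm[(u, t)]: data
    transfer time when u and t run on different processors."""
    finish, proc_of = {}, {}
    proc_free = [0.0] * n_proc
    for t in tasks:
        best = None
        for p in range(n_proc):
            # task t may start once all predecessors have finished and their
            # data has arrived (communication only if they ran elsewhere)
            ready = max([finish[u] + (comm[(u, t)] if proc_of[u] != p else 0.0)
                         for u in deps[t]] or [0.0])
            fin = max(ready, proc_free[p]) + cost[t][p]
            if best is None or fin < best[0]:
                best = (fin, p)
        finish[t], proc_of[t] = best
        proc_free[best[1]] = best[0]
    return finish, proc_of

# Toy example: t0 -> t1, t0 -> t2, (t1, t2) -> t3 on two unlike processors.
deps = {0: [], 1: [0], 2: [0], 3: [1, 2]}
cost = {0: [2, 3], 1: [3, 1], 2: [2, 2], 3: [4, 2]}
comm = defaultdict(lambda: 1.0)
print(list_schedule([0, 1, 2, 3], deps, cost, comm, 2))
```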


Book ChapterDOI
09 Aug 2004
TL;DR: This paper explains in detail how to extract the whole AES secret key embedded in such a white box AES implementation, with negligible memory and a worst-case time complexity of 2^30.
Abstract: The white box attack context as described in [1, 2] is the common setting where cryptographic software is executed in an untrusted environment, i.e. an attacker has gained access to the implementation of cryptographic algorithms and can observe or manipulate the dynamic execution of whole or part of the algorithms. In this paper, we present an efficient practical attack against the obfuscated AES implementation [1] proposed at SAC 2002 as a means to protect AES software operated in the white box context against key exposure. We explain in detail how to extract the whole AES secret key embedded in such a white box AES implementation, with negligible memory and a worst-case time complexity of 2^30.

234 citations


Patent
20 Sep 2004
TL;DR: In this paper, a method is described for controlling the transmitting power of a transmitter in a wireless communications network: it consists in identifying entities adjacent to said transmitter, identifying amongst said adjacent entities a minimum constellation associated with the transmitter, identifying peripheral entities whose minimum constellation includes the transmitter, and adjusting the transmitting power of the transmitter to a minimum value enabling a message transmitted thereby to simultaneously reach the entities of the minimum constellation and each identified peripheral entity.
Abstract: The invention relates to a method for controlling a transmitting power for a transmitter in a wireless communications network consisting in identifying entities adjacent to said transmitter, identifying amongst said adjacent entities a minimum constellation associated to the transmitter, identifying, if necessary amongst the adjacent entities not belonging to said minimum constellation, peripheral entities whose minimum constellation includes the transmitter and in adjusting the transmitting power of the transmitter to a minimum value, thereby enabling a message transmitted thereby to simultaneously reach the entities of the minimum constellation associated to said transmitter and each identified peripheral entity.

185 citations


Journal ArticleDOI
TL;DR: In this article, a new heuristic approach for minimizing the operating path of automated or computer numerically controlled drilling operations is described, in which the operating path is first defined as a travelling salesman problem.
Abstract: A new heuristic approach for minimizing the operating path of automated or computer numerically controlled drilling operations is described. The operating path is first defined as a travelling salesman problem. The new heuristic, particle swarm optimization, is then applied to the travelling salesman problem. A model for the approximate prediction of drilling time based on the heuristic solution is presented. The new method requires few control variables: it is versatile, robust and easy to use. In a batch production of a large number of items to be drilled such as in printed circuit boards, the travel time of the drilling device is a significant portion of the overall manufacturing process, hence the new particle swarm optimization–travelling salesman problem heuristic can play a role in reducing production costs.

174 citations
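To make the combination concrete, here is a minimal sketch of one common way to apply particle swarm optimization to a travelling-salesman formulation of the drilling path: random-key decoding, where each particle is a real vector whose argsort defines a tour. The operators, parameters and hole coordinates are assumptions for illustration and need not match the heuristic proposed in the paper.

```python
# Illustrative sketch: PSO on a drilling-path TSP via random-key decoding
# (each particle is a real vector; sorting it yields a visiting order).
import numpy as np

rng = np.random.default_rng(0)
holes = rng.random((20, 2))                      # made-up hole coordinates

def tour_length(order):
    pts = holes[order]
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.random((n_particles, len(holes)))        # positions (random keys)
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([tour_length(np.argsort(p)) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x += v
    vals = np.array([tour_length(np.argsort(p)) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best drilling path length:", pbest_val.min())
```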


Book ChapterDOI
08 Sep 2004
TL;DR: The notion of privacy of signer's identity, which captures the strong designated verifier property investigated in their paper, is formalized, and a variant of the pairing-based DVS scheme introduced at Asiacrypt'03 by Steinfeld, Bull, Wang and Pieprzyk is proposed.
Abstract: The concept of Designated Verifier Signatures (DVS) was introduced by Jakobsson, Sako and Impagliazzo at Eurocrypt'96. These signatures are intended for a specific verifier, who is the only one able to check their validity. In this context, we formalize the notion of privacy of signer's identity which captures the strong designated verifier property investigated in their paper. We propose a variant of the pairing-based DVS scheme introduced at Asiacrypt'03 by Steinfeld, Bull, Wang and Pieprzyk. Contrary to their proposal, our new scheme can be used with any admissible bilinear map, especially with low-cost pairings, and achieves the new anonymity property (in the random oracle model). Moreover, the unforgeability is tightly related to the Gap-Bilinear Diffie-Hellman assumption in the random oracle model, and the signature length is around 75% smaller than in the original proposal.

157 citations
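For background, the standard definitions underlying the scheme described above (textbook notation, not quoted from the paper): an admissible bilinear map and the Bilinear Diffie-Hellman problem on which the Gap-BDH assumption is based.

```latex
% Standard background (textbook definitions, not taken from the paper):
% an admissible bilinear map on groups of prime order q, and the BDH problem
% underlying the Gap-Bilinear Diffie-Hellman assumption mentioned above.
\[
  e : \mathbb{G}_1 \times \mathbb{G}_1 \to \mathbb{G}_T ,
  \qquad
  e(aP, bQ) = e(P, Q)^{ab}
  \quad \text{for all } P, Q \in \mathbb{G}_1 ,\; a, b \in \mathbb{Z}_q .
\]
\[
  \text{BDH: given } (P,\, aP,\, bP,\, cP), \text{ compute } e(P, P)^{abc};
  \quad \text{Gap-BDH assumes this remains hard even given a decisional-BDH oracle.}
\]
```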


Patent
08 Mar 2004
TL;DR: In this paper, a distributed speech recognition system is described, consisting of at least one user terminal, comprising means for obtaining an audio signal to be recognized, parameter calculation means and control means used to select a signal to be transmitted, and a server comprising means for receiving the signal, parameter calculation means, recognition means and control means used to control the calculation means and the recognition means according to the signal received.
Abstract: This invention relates to a distributed speech recognition system. The inventive system consists of: at least one user terminal comprising means for obtaining an audio signal to be recognized, parameter calculation means and control means which are used to select a signal to be transmitted; and a server comprising means for receiving the signal, parameter calculation means, recognition means and control means which are used to control the calculation means and the recognition means according to the signal received.

136 citations


Patent
27 May 2004
TL;DR: In this paper, the authors proposed a method of transmitting a message with duration of validity destined for a subscriber terminal, comprising the formulating, in the message, of a field containing information regarding the duration of validity of the message and the monitoring of the validity of the message on the basis of the information contained in said field, wherein the message is transmitted to the terminal of the subscriber and, on the expiry of the duration of validity of the message, the message received by the terminal is modified or deleted in such a way as to prevent consultation thereof.
Abstract: Method of transmitting a message with duration of validity destined for a subscriber terminal, comprising the formulating, in the message, of a field containing information regarding duration of validity of the message and the monitoring of the validity of the message on the basis of the information contained in said field, wherein the message is transmitted to the terminal of the subscriber and, on the expiry of the duration of validity of the message, the message received by the terminal is modified or deleted in such a way as to prevent consultation thereof.

121 citations


Journal ArticleDOI
James Roberts
16 Aug 2004
TL;DR: An analysis of the statistical nature of IP traffic and the way this impacts the performance of voice, video, and data services is presented and an alternative flow-aware networking architecture based on a novel router design called cross-protect is proposed.
Abstract: Based on an analysis of the statistical nature of IP traffic and the way this impacts the performance of voice, video, and data services, we question the appropriateness of commonly proposed quality-of-service mechanisms. This paper presents the main points of this analysis. We also discuss pricing issues and argue that many proposed schemes are overly concerned with congestion control to the detriment of the primary pricing function of return on investment. Finally, we propose an alternative flow-aware networking architecture based on a novel router design called cross-protect. In this architecture, performance requirements are satisfied without explicit service differentiation, creating a particularly simple platform for the converged network.

Journal ArticleDOI
TL;DR: This model is based on the performance of the M/G/1 processor sharing queue with time-varying capacity and provides new and practical results that allow the definition of the region where the integration is useful and where the desired QoS is satisfied for both streaming and data traffic.

Journal ArticleDOI
Marc Boullé
TL;DR: This method optimizes the chi-square criterion in a global manner on the whole discretization domain and does not require any stopping criterion, in contrast with related methods ChiMerge and ChiSplit.
Abstract: In supervised machine learning, some algorithms are restricted to discrete data and have to discretize continuous attributes. Many discretization methods, based on statistical criteria, information content, or other specialized criteria, have been studied in the past. In this paper, we propose the discretization method Khiops, based on the chi-square statistic. In contrast with related methods ChiMerge and ChiSplit, this method optimizes the chi-square criterion in a global manner on the whole discretization domain and does not require any stopping criterion. A theoretical study followed by experiments demonstrates the robustness and the good predictive performance of the method.
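To make the chi-square criterion concrete, the following is a sketch of a greedy, ChiMerge-style bottom-up merge (the kind of local method Khiops is contrasted with); Khiops itself optimizes the criterion globally over the whole discretization domain, which this sketch does not do. The data and interval budget are made up.

```python
# Sketch of the chi-square merging criterion (a greedy ChiMerge-style baseline,
# shown only to make the statistic concrete; Khiops optimizes globally and
# needs no stopping criterion, unlike this sketch).
import numpy as np
from scipy.stats import chi2_contingency

def chimerge(values, labels, max_intervals=5):
    order = np.argsort(values)
    values, labels = np.asarray(values)[order], np.asarray(labels)[order]
    classes = np.unique(labels)
    cuts = np.unique(values)                      # one interval per distinct value
    counts = np.array([[np.sum((values == c) & (labels == k)) for k in classes]
                       for c in cuts], dtype=float)
    while len(counts) > max_intervals:
        # chi-square of each pair of adjacent intervals; merge the least significant
        chi2 = [chi2_contingency(counts[i:i + 2] + 1e-9)[0]
                for i in range(len(counts) - 1)]
        i = int(np.argmin(chi2))
        counts[i] += counts[i + 1]
        counts = np.delete(counts, i + 1, axis=0)
        cuts = np.delete(cuts, i + 1)
    return cuts                                   # lower bounds of the intervals

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = (x + rng.normal(scale=0.5, size=100) > 0).astype(int)  # made-up labeled data
print(chimerge(x, y))
```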

Proceedings ArticleDOI
06 Dec 2004
TL;DR: A serial architecture is proposed, using a drastic anomaly component with a sensitive misuse component, which provides the operator with a better qualification of the detection results and raises a lower amount of false alarms and unqualified events.
Abstract: Combining an "anomaly" IDS and a "misuse" IDS offers the advantage of separating the monitored events into normal, intrusive or unqualified classes (i.e. not known as an attack, but not recognized as safe either). In this article, we provide a framework to systematically reason about the combination of anomaly and misuse components. This framework, applied to Web servers, led us to propose a serial architecture, using a drastic anomaly component with a sensitive misuse component. This architecture provides the operator with a better qualification of the detection results, and raises a lower amount of false alarms and unqualified events.
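A minimal sketch of the serial combination described above, with placeholder detectors: a strict ("drastic") anomaly filter first, then a sensitive misuse check applied only to what the filter flags, yielding the normal / intrusive / unqualified split. The scoring function and signatures are illustrative, not the paper's components.

```python
# Minimal sketch of the serial anomaly -> misuse combination; the detector
# functions are placeholders, not the paper's detectors.
def classify(event, anomaly_score, signatures, threshold=0.9):
    """Return 'normal', 'intrusive' or 'unqualified' for one monitored event."""
    if anomaly_score(event) < threshold:      # drastic anomaly component:
        return "normal"                       # most traffic is accepted here
    for sig in signatures:                    # sensitive misuse component runs
        if sig(event):                        # only on the anomalous residue
            return "intrusive"
    return "unqualified"                      # anomalous but matches no known attack

# toy usage with made-up detectors
events = ["GET /index.html", "GET /../../etc/passwd", "GET /%00weird"]
score = lambda e: 1.0 if any(c in e for c in ("..", "%00")) else 0.0
sigs = [lambda e: ".." in e]
print([classify(e, score, sigs) for e in events])  # normal, intrusive, unqualified
```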

Book ChapterDOI
27 Oct 2004
TL;DR: This article provides a formal definition of multi-designated verifier signatures and gives a rigorous treatment of the security model for such a scheme and proposes a construction based on ring signatures, which meets the definition, but does not achieve the privacy of signer’s identity property.
Abstract: Designated verifier signatures were introduced in the middle of the 90’s by Jakobsson, Sako and Impagliazzo, and independently patented by Chaum as private signatures. In this setting, a signature can only be verified by a unique and specific user. At Crypto’03, Desmedt suggested the problem of generalizing the designated verifier signatures. In this case, a signature should be intended for a specific set of different verifiers. In this article, we provide a formal definition of multi-designated verifier signatures and give a rigorous treatment of the security model for such a scheme. We propose a construction based on ring signatures, which meets our definition, but does not achieve the privacy of signer’s identity property. Finally, we propose a very efficient bi-designated verifier signature scheme based on bilinear maps, which protects the anonymity of signers.

Proceedings ArticleDOI
07 Mar 2004
TL;DR: Both simulations and analytical results show that RuN2C has a very beneficial effect on the delay of short flows, while treating large flows as the current TCP implementation does, and it is found that LAS based mechanisms can lead to pathological behavior in extreme cases.
Abstract: Internet measurements show that a small number of large TCP flows are responsible for the largest amount of data transferred, whereas most of the TCP sessions are made up of few packets. Several authors have invoked this property to suggest the use of scheduling algorithms, which favor short jobs, such as LAS (least attained service), to differentiate between short and long TCP flows. We propose a packet level stateless, threshold based scheduling mechanism for TCP flows, RuN2C. We describe an implementation of this mechanism, which has the advantage of being TCP compatible and progressively deployable. We compare the behavior of RuN2C with LAS based mechanisms through analytical models and simulations. As an analytical model, we use a two level priority processor sharing PS + PS. In the PS + PS system, a connection is classified as high or low priority depending on the amount of service it has obtained. We show that PS + PS reduces the mean response time in comparison with standard processor sharing when the hazard rate of the file size distribution is decreasing. By simulations we study the impact of RuN2C on extreme values of response times and the mean number of connections in the system. Both simulations and analytical results show that RuN2C has a very beneficial effect on the delay of short flows, while treating large flows as the current TCP implementation does. In contrast, we find that LAS based mechanisms can lead to pathological behavior in extreme cases.
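For reference, the decreasing-hazard-rate condition quoted above, in standard notation (not specific to the paper): for a flow-size distribution F with density f,

```latex
% Standard definition, not specific to the paper: the hazard rate of the
% flow-size distribution F with density f is
\[
  h(x) \;=\; \frac{f(x)}{1 - F(x)} .
\]
```

The PS+PS improvement stated above applies when h is decreasing in x, i.e. when a flow that has already transferred a large amount of data is likely to keep going, which matches the heavy-tailed flow sizes reported by the Internet measurements cited in the abstract.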

Proceedings ArticleDOI
01 Jun 2004
TL;DR: In this article, the authors investigate the flow-level performance in networks with multiple base stations and derive bounds and approximations for key performance metrics like the number of active flows, transfer delays, and flow throughputs in the various cells.
Abstract: The performance of wireless data systems has been extensively studied in the context of a single base station. In the present paper we investigate the flow-level performance in networks with multiple base stations. We specifically examine the complex, dynamic interaction of the number of active flows in the various cells introduced by the strong impact of interference between neighboring base stations. For the downlink data transmissions that we consider, lower service rates caused by increased interference from neighboring base stations result in longer delays and thus a higher number of active flows. This in turn results in a longer duration of interference on surrounding base stations, causing a strong correlation between the activity states of the base stations. Such a system can be modelled as a network of multi-class processor-sharing queues, where the service rates for the various classes at each queue vary over time as governed by the activity state of the other queues. The complex interaction between the various queues renders an exact analysis intractable in general. A simplified network with only one class per queue reduces to a coupled-processors model, for which there are few results, even in the case of two queues. We thus derive bounds and approximations for key performance metrics like the number of active flows, transfer delays, and flow throughputs in the various cells. Importantly, these bounds and approximations are insensitive, yielding simple expressions, that render the detailed statistical characteristics of the system largely irrelevant.

Proceedings ArticleDOI
24 Aug 2004
TL;DR: By associating implicit flow level admission control and per-flow fair queuing in a router it is possible to distinguish streaming and elastic flows and meet their respective quality requirements without requiring specific packet marking.
Abstract: In this paper we present Cross-protect, a combination of router mechanisms allowing quality of service differentiation while maintaining the simple user-network interface of the best effort Internet. By associating implicit flow level admission control and per-flow fair queuing in a router it is possible to distinguish streaming and elastic flows and meet their respective quality requirements without requiring specific packet marking. We describe the implied mechanisms and justify the claimed performance and scalability properties by means of simulation and analysis.
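Purely as an illustration of how implicit admission control and per-flow fair queuing fit together (a sketch under assumed thresholds, not the actual Cross-protect router logic):

```python
# Illustrative sketch of the combination described above (not the actual
# router implementation): an implicit per-flow admission test driven by the
# measured fair rate and priority load, plus a list of admitted flows.
protected = set()          # flows currently admitted ("protected")
FAIR_RATE_MIN = 1e6        # assumed thresholds: bit/s and fraction of link rate
PRIORITY_LOAD_MAX = 0.5

def forward(packet, fair_rate, priority_load):
    flow = (packet["src"], packet["dst"], packet["proto"],
            packet["sport"], packet["dport"])
    if flow in protected:
        return "enqueue"                       # admitted flows are always served
    # implicit admission control: new flows are accepted only while the
    # fair-queuing scheduler still offers an adequate fair rate and the
    # priority (streaming) load stays moderate
    if fair_rate >= FAIR_RATE_MIN and priority_load <= PRIORITY_LOAD_MAX:
        protected.add(flow)
        return "enqueue"
    return "drop"                              # overload: new flows are refused

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6, "sport": 1234, "dport": 80}
print(forward(pkt, fair_rate=5e6, priority_load=0.2))   # -> 'enqueue'
```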

Journal ArticleDOI
James Roberts
TL;DR: A survey of recent results on the performance of a network handling elastic data traffic under the assumption that flows are generated as a random process highlights the insensitivity results allowing a relatively simple expression of performance when bandwidth sharing realizes so-called "balanced fairness".

01 Jan 2004
TL;DR: Numerical experiments demonstrate that inter-cell scheduling may provide significant capacity gains, the relative contribution from interference avoidance vs. load balancing depending on the configuration and the degree of load imbalance in the network.
Abstract: Over the past few years, the design and performance of channel-aware scheduling strategies have attracted huge interest. In the present paper we examine a somewhat different notion of scheduling, namely coordination of transmissions among base stations, which has received little attention so far. The inter-cell coordination comprises two key elements: (i) interference avoidance; and (ii) load balancing. The interference avoidance involves coordinating the activity phases of interfering base stations so as to increase transmission rates. The load balancing aims at diverting traffic from heavily-loaded cells to lightly-loaded cells. We consider a dynamic scenario where users come and go over time as governed by the arrival and completion of random data transfers, and evaluate the potential capacity gains from inter-cell coordination in terms of the maximum amount of traffic that can be supported for a given spatial traffic pattern. We also show that simple adaptive strategies achieve the maximum capacity without the need for any explicit knowledge of the traffic characteristics. Numerical experiments demonstrate that inter-cell scheduling may provide significant capacity gains, the relative contribution from interference avoidance vs. load balancing depending on the configuration and the degree of load imbalance in the network.

Proceedings ArticleDOI
07 Jun 2004
TL;DR: A psychovisual experiment performed to quantify the effect of sporadically dropped pictures on the overall perceived quality found that the detection thresholds are content, duration and motion dependent.
Abstract: Over the past few years there has been an increasing interest in real time video services over packet networks. When considering quality, it is essential to quantify user perception of the received sequence. Severe motion discontinuities are one of the most common degradations in video streaming. The end-user perceives a jerky motion when the discontinuities are uniformly distributed over time and an instantaneous fluidity break is perceived when the motion loss is isolated or irregularly distributed. Bit rate adaptation techniques, transmission errors in the packet networks or restitution strategy could be the origin of this perceived jerkiness. In this paper we present a psychovisual experiment performed to quantify the effect of sporadically dropped pictures on the overall perceived quality. First, the perceptual detection thresholds of generated temporal discontinuities were measured. Then, the quality function was estimated in relation to a single frame dropping for different durations. Finally, a set of tests was performed to quantify the effect of several impairments distributed over time. We have found that the detection thresholds are content, duration and motion dependent. The assessment results show how quality is impaired by a single burst of dropped frames in a 10 sec sequence. The effect of several bursts of discarded frames, irregularly distributed over the time is also discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that the absence of institutional regulation of the articulation between private and professional life, far from ensuring a more balanced existence, presents risks for collective efficiency.
Abstract: Telework is often seen as a way of improving the articulation between professional and private life. Yet why do some employees, and managers in particular, come to choose this arrangement? Is it really a choice, or the adoption of a solution "for lack of anything better" in a constraining organizational context? And in that case, what temporal trade-offs do they make? Moreover, once they have opted for telework, how do managers manage to accommodate professional and family constraints? Are their aspirations to a better work-life balance satisfied, or do they encounter unsuspected difficulties? This article provides answers to these questions, drawing on two studies conducted among managers practicing different forms of telework. It stresses, finally, that the absence of institutional regulation of the articulation between private and professional life, far from ensuring a more balanced existence, presents risks for collective efficiency.

Proceedings ArticleDOI
17 May 2004
TL;DR: A new method, called the two-step noise reduction (TSNR) technique, is proposed, which solves the frame-delay problem of the decision-directed approach while maintaining its benefits, for single microphone speech enhancement in noisy environments.
Abstract: The paper addresses the problem of single microphone speech enhancement in noisy environments. Common short-time noise reduction techniques proposed in the art are expressed as a spectral gain depending on the a priori SNR. In the well-known decision-directed approach, the a priori SNR depends on the speech spectrum estimation in the previous frame. As a consequence, the gain function matches the previous frame rather than the current one, which degrades the noise reduction performance. We propose a new method, called the two-step noise reduction (TSNR) technique, which solves this problem while maintaining the benefits of the decision-directed approach. This method is analyzed and results in voice communication and speech recognition contexts are given.
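For readers unfamiliar with the decision-directed approach mentioned above, its usual formulation is recalled below (common notation from the literature; the paper's exact symbols may differ). The one-frame lag visible in the first term is precisely what the two-step technique is designed to correct.

```latex
% The decision-directed a priori SNR estimate referred to above, in its usual
% form (standard notation; the paper's symbols may differ), with smoothing
% factor beta typically close to 1:
\[
  \hat{\xi}(k, l) \;=\;
  \beta \, \frac{|\hat{S}(k, l-1)|^{2}}{\sigma_N^{2}(k, l)}
  \;+\; (1 - \beta)\, \max\bigl(\gamma(k, l) - 1,\, 0\bigr),
  \qquad 0 \le \beta < 1 ,
\]
% where \gamma(k,l) is the a posteriori SNR and \hat{S}(k,l-1) the enhanced
% spectrum of the previous frame; the dependence on frame l-1 is the source of
% the one-frame lag that the second step realigns with the current frame.
```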

Proceedings ArticleDOI
07 Mar 2004
TL;DR: This paper examines how slower, mobility-induced rate variations impact performance at flow level, accounting for the random number of flows sharing the transmission resource.
Abstract: The potential for exploiting rate variations to increase the capacity of wireless systems by opportunistic scheduling has been extensively studied at packet level. In the present paper, we examine how slower, mobility-induced rate variations impact performance at flow level, accounting for the random number of flows sharing the transmission resource. We identify two limit regimes, termed fluid and quasistationary, where the rate variations occur on an infinitely fast and an infinitely slow time scale, respectively. Using stochastic comparison techniques, we show that these limit regimes provide simple performance bounds that only depend on easily calculated load factors. Additionally, we prove that for a broad class of fading processes, performance varies monotonically with the speed of the rate variations. These results are illustrated through numerical experiments, showing that the fluid and quasistationary bounds are remarkably tight in certain usual cases.

Proceedings ArticleDOI
01 Jun 2004
TL;DR: This paper gives performance bounds for both elastic and streaming traffic by means of sample-path arguments, and presents the practical interest of being insensitive to traffic characteristics like the distributions of elastic flow size and streaming flow duration.
Abstract: We consider a network model where bandwidth is fairly shared by a dynamic number of elastic and adaptive streaming flows. Elastic flows correspond to data transfers while adaptive streaming flows correspond to audio/video applications with variable rate codecs. In particular, the former are characterized by a fixed size (in bits) while the latter are characterized by a fixed duration. This flow-level model turns out to be intractable in general. In this paper, we give performance bounds for both elastic and streaming traffic by means of sample-path arguments. These bounds present the practical interest of being insensitive to traffic characteristics like the distributions of elastic flow size and streaming flow duration.

Patent
21 Dec 2004
TL;DR: In this article, the authors propose a system consisting of a session protocol server (S-CSCF) and a mobility server, where the mobility server is used to provide mobile dependent evaluation reports providing an indication of a current state for communicating with the user equipment and to form the mobility management information based on the evaluation reports.
Abstract: A communications system is arranged to provide a service to user equipment in accordance with mobility management information. The system comprises a session protocol server (S-CSCF) operable to control the state of a communications session for at least one user equipment in accordance with user profile data, a subscriber information database (HSS) for providing the user profile data for the session protocol server (S-CSCF), and a mobility server. The mobility server comprises a mobility manager operable to receive mobile dependent evaluation reports providing an indication of a current state for communicating with the user equipment and to form the mobility management information based on the evaluation reports. The mobility server includes an application programmer's interface operable to communicate call control signalling data between the mobility manager and the session protocol server (S-CSCF). The mobility manager is operable to notify the application program providing the service to the user equipment of the mobility management information in response to a subscription for the information from the application program, the subscription being provided via the session protocol server (S-CSCF) using the call control signalling data. By integrating the mobility server within the system, mobility management information provided by the mobility server can be integrated with other services provided by the system. As such, mobile users deploying application programs within the system, which subscribe to the mobility server, can benefit from added value provided by established system components and re-using established interfaces.

Journal ArticleDOI
TL;DR: It is shown notably that the performance of balanced fairness is always better than that obtained if flows are transmitted in a "store and forward" fashion, allowing the simple formulas that apply to the latter to be used as a conservative evaluation for network design and provisioning purposes.

Proceedings ArticleDOI
01 Jun 2004
TL;DR: This paper addresses the issue of dynamic load balancing, identifies the class of load balancing policies which preserve insensitivity, and characterizes optimal strategies in some specific cases.
Abstract: A large variety of communication systems, including telephone and data networks, can be represented by so-called Whittle networks. The stationary distribution of these networks is insensitive, depending on the service requirements at each node through their mean only. These models are of considerable practical interest as derived engineering rules are robust to the evolution of traffic characteristics. In this paper we relax the usual assumption of static routing and address the issue of dynamic load balancing. Specifically, we identify the class of load balancing policies which preserve insensitivity and characterize optimal strategies in some specific cases. Analytical results are illustrated numerically on a number of toy network examples.

Proceedings ArticleDOI
17 May 2004
TL;DR: The paper presents a novel scalable audio coding scheme where the bitrates vary continuously between a minimal and a maximal value, allowing free modification of the bitrate.
Abstract: Networks are getting more and more heterogeneous. Scalable codecs are especially suited for such a context as they permit the bitrate to be lowered in a simple way, at any point of the transmission, for adaptation to network conditions and to terminal capacities. Classically, scalable codecs are organised in layers and scalability is obtained by sending more or fewer layers to the decoder. The obtained granularity depends on the layer sizes, and the available bitrates are fixed and limited in number. The paper presents a novel scalable audio coding scheme where the bitrates vary continuously between a minimal and a maximal value, allowing free modification of the bitrate. With this novel approach, all bitrates are valid; sending even one more bit results in a different output signal with statistically increasing quality. Test results show that this method provides quality as good as or even better than that of a non-scalable version.

Journal ArticleDOI
TL;DR: In this paper, a dosimetric analysis of a specific exposure system used to locally expose the heads of rats was performed with the finite-difference time-domain method, using a homogeneous rat model for comparisons between measurements and simulations.
Abstract: This paper describes a dosimetric analysis of a specific exposure system used to locally expose the heads of rats. This system operating at 900 MHz consists of a restrainer and a loop antenna having two metallic lines printed on a dielectric substrate; one of the extremities of the metallic structure forms a loop and is placed close to the head of the animal placed in a cylindrical "rocket-like" restrainer. This local-exposure system was analyzed using the finite-difference time-domain method. Comparisons between measurements and simulations were carried out using a homogeneous rat model. The aim of the study was to compare the exposure of the rat and human head in specific tissues such as the dura mater (DM). Specific absorption rate (SAR) levels were calculated in a heterogeneous rat phantom. With an input power of 1 W, the brain-averaged SAR was equal to 6.8 W/kg. Using a statistical approach, the maximum SAR in specific tissues was extracted. With an input power of 1 W, the maximum SAR inside the skull was estimated at 15.5±5 W/kg, while the maximum SAR in the skin was 33±5 W/kg. A comparison was made between SAR levels in a human head exposed to a global system for mobile communication handset operating at 900 MHz (250-mW output power) and those obtained in the rat tissues with a 100-mW input power at the connector of the loop. In this case, simulations showed that the ratio of the maximum local SAR in the rat versus human was 1.3±0.6 in the brain, 1.0±0.5 in the skin, and 1.2±0.6 in the DM.