
Showing papers by "Orange S.A." published in 2014


Journal ArticleDOI
TL;DR: The design and evaluation of an adaptive cooperative scheme intended to extend the survivability of the battery-operated aerial-terrestrial communication links are discussed and simulation analysis corroborates that the adaptive transmission technique improves overall energy efficiency of the network whilst maintaining low latency, enabling real-time applications.
Abstract: Hybrid aerial-terrestrial communication networks based on low-altitude platforms are expected to meet optimally the urgent communication needs of emergency relief and recovery operations for tackling large-scale natural disasters. The energy-efficient operation of such networks is important given that the entire network infrastructure, including the battery-operated ground terminals, exhibits requirements to operate under power-constrained situations. In this paper, we discuss the design and evaluation of an adaptive cooperative scheme intended to extend the survivability of the battery-operated aerial-terrestrial communication links. We propose and evaluate a real-time adaptive cooperative transmission strategy for dynamic selection between direct and cooperative links based on the channel conditions for improved energy efficiency. We show that the cooperation between mobile terrestrial terminals on the ground could improve energy efficiency in the uplink, depending on the temporal behavior of the terrestrial and aerial uplink channels. The corresponding delay in having cooperative (relay-based) communications with relay selection is also addressed. The simulation analysis corroborates that the adaptive transmission technique improves overall energy efficiency of the network whilst maintaining low latency, enabling real-time applications.
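
The dynamic direct-versus-cooperative decision described above can be pictured with a minimal sketch; the energy constants and the inverse-gain cost model below are illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical per-packet energy costs (illustrative values, not from the paper).
E_DIRECT_TX = 1.0   # direct terminal-to-platform transmission
E_RELAY_TX = 0.4    # source-to-relay hop
E_RELAY_FWD = 0.5   # relay-to-platform hop

def choose_link(direct_gain, relay_gain):
    """Pick whichever link costs less energy for the current channel state.

    Gains are normalized to (0, 1]; the required transmit energy is assumed
    inversely proportional to the channel gain.
    """
    direct_cost = E_DIRECT_TX / max(direct_gain, 1e-6)
    coop_cost = (E_RELAY_TX + E_RELAY_FWD) / max(relay_gain, 1e-6)
    return "direct" if direct_cost <= coop_cost else "cooperative"

# Re-evaluate the decision every slot as the channels vary over time.
for slot in range(5):
    print(slot, choose_link(random.uniform(0.05, 1.0), random.uniform(0.05, 1.0)))
```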

94 citations


Journal ArticleDOI
TL;DR: A new numerical method based on finite elements with mesh adaptivity is presented for simulating solid-liquid phase change; results demonstrate its capability to handle both melting and solidification problems with convection.

69 citations


Journal ArticleDOI
TL;DR: A general design of the coplanar-waveguide-excited IIFA, traditionally fed by a microstrip or coaxial excitation, also makes the antenna more suitable to the body area network (BAN) context by facilitating integration on textile fabric or clothes and limiting the body effect at the operating frequency.
Abstract: A general design of the coplanar waveguide excited IIFA, traditionally excited by a microstrip or coaxial excitation, is presented. This coplanar excitation improves the bandwidth (from around 10% to 50%) by creating a new resonance on the ground plane. It also makes the antenna more suitable to the body area network (BAN) context by facilitating integration on textile fabric or clothes and limiting the body effect at the operating frequency. The antenna principle is analyzed and general designs are presented. As an application, a meandered IIFA integrated on a denim substrate is realized. All results are validated through comparative simulations and measurements.

56 citations


Journal ArticleDOI
TL;DR: This paper focuses on the comparison between two fusion methods, namely early fusion and late fusion, and demonstrates that systems using the proposed multilayer fusion methods at kernel level reach the goal more stably than classification-score-level fusion.
Abstract: This paper focuses on the comparison between two fusion methods, namely early fusion and late fusion. The former is carried out at kernel level, also known as multiple kernel learning, while in the latter the modalities are fused through logistic regression at classifier-score level. Two kinds of multilayer fusion structures, differing in the number of feature/kernel groups in the lower fusion layer, are constructed for the early and late fusion systems, respectively. The goal of these fusion methods is to put each of the various features to use and to mine the redundant information in their combination, and thus to develop a generic and robust semantic indexing system that bridges the semantic gap between human concepts and low-level visual features. Performance evaluated on both the TRECVID2009 and TRECVID2010 datasets demonstrates that systems with our proposed multilayer fusion methods at kernel level reach this goal more stably than classification-score-level fusion; the most effective and robust one, with the highest MAP score, is constructed by early fusion with two-layer equally weighted composite kernel learning.
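
As a concrete illustration of the two fusion levels compared above, the following scikit-learn sketch contrasts an equally weighted composite kernel (early fusion) with logistic regression over per-modality classifier scores (late fusion). The toy data and two-modality setup are assumptions, and a real system would fit the fusion layer on held-out scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Toy data: two feature "modalities" describing the same 100 samples.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(100, 20)), rng.normal(size=(100, 30))
y = rng.integers(0, 2, size=100)

# Early fusion at kernel level: equally weighted composite kernel.
K = 0.5 * rbf_kernel(X1) + 0.5 * rbf_kernel(X2)
early = SVC(kernel="precomputed").fit(K, y)

# Late fusion: one classifier per modality, scores fused by logistic
# regression (in practice the fusion layer is trained on held-out scores).
s1 = SVC(probability=True).fit(X1, y).predict_proba(X1)[:, 1]
s2 = SVC(probability=True).fit(X2, y).predict_proba(X2)[:, 1]
late = LogisticRegression().fit(np.column_stack([s1, s2]), y)
```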

41 citations


Proceedings ArticleDOI
12 May 2014
TL;DR: A global mean user throughput in the cellular network is defined and it is proved that it is equal to the ratio of mean traffic demand to the mean number of users in the steady state of the “typical cell” of the network.
Abstract: We assume a space-time Poisson process of call arrivals on the infinite plane, independently marked by data volumes and served by a cellular network modeled by an infinite ergodic point process of base stations. Each point of this point process represents the location of a base station that applies a processor sharing policy to serve users arriving in its vicinity, modeled by the Voronoi cell, possibly perturbed by some random signal propagation effects. User service rates depend on their signal-to-interference-and-noise ratios with respect to the serving station. Little's law allows us to express the mean user throughput in any region of this network model as the ratio of the mean traffic demand to the steady-state mean number of users in this region. Using ergodic arguments and the Palm theoretic formalism, we define a global mean user throughput in the cellular network and prove that it is equal to the ratio of mean traffic demand to the mean number of users in the steady state of the "typical cell" of the network. Here, both means account for double averaging: over time and network geometry, and can be related to the per-surface traffic demand, base-station density and the spatial distribution of the signal-to-interference-and-noise ratio. The latter accounts for network irregularities, shadowing and cell dependence via some cell-load equations. Inspired by the analysis of the typical cell, we also propose a simpler, approximate, but fully analytic approach, called the mean cell approach. The key quantity explicitly calculated in this approach is the cell load. In analogy to the load factor of the (classical) M/G/1 processor sharing queue, it characterizes the stability condition, the mean number of users and the mean user throughput. We validate our approach by comparing analytical and simulation results for the Poisson network model with real-network measurements.
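
The M/G/1 processor-sharing analogy invoked above has a compact worked form for a single cell of fixed capacity; this is the textbook special case, not the paper's full cell-load equations.

```latex
% Single cell of capacity C (bit/s) serving a traffic demand lambda (bit/s).
% In steady state (rho < 1), by Little's law the mean user throughput r is
% the ratio of traffic demand to the mean number of users:
\[
  \rho = \frac{\lambda}{C}, \qquad
  \mathbb{E}[N] = \frac{\rho}{1-\rho}, \qquad
  r = \frac{\lambda}{\mathbb{E}[N]} = C\,(1-\rho).
\]
```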

32 citations


Journal ArticleDOI
TL;DR: A number of critical issues for dual-polarization single- and multi-band optical orthogonal-frequency division multiplexing (DP-SB/MB-OFDM) signals are analyzed in dispersion compensation fiber (DCF)-free long-haul links and the maximum transmission-reach is investigated.
Abstract: A number of critical issues for dual-polarization single- and multi-band optical orthogonal frequency-division multiplexing (DP-SB/MB-OFDM) signals are analyzed in dispersion compensation fiber (DCF)-free long-haul links. For the first time, different DP crosstalk removal techniques are compared, the maximum transmission reach is investigated, and the impact of subcarrier number and high-level modulation formats is explored thoroughly. It is shown that, for a bit-error rate (BER) of 10⁻³, 2000 km of quaternary phase-shift keying (QPSK) DP-MB-OFDM transmission is feasible. At high launched optical powers (LOP), maximum-likelihood decoding can extend the LOP of 40 Gb/s QPSK DP-SB-OFDM at 2000 km by 1.5 dB compared to zero-forcing. For a 100 Gb/s DP-MB-OFDM system, a high number of subcarriers contributes to an improved BER but at the cost of digital signal processing computational complexity, whilst by adapting the cyclic prefix length the BER can be improved for a low number of subcarriers. In addition, when 16-quadrature amplitude modulation (16QAM) is employed, the digital-to-analogue/analogue-to-digital converter (DAC/ADC) bandwidth is relaxed at the cost of a degraded BER, while the 'circular' 8QAM is slightly superior to its 'rectangular' form. Finally, the transmission of wavelength-division multiplexed DP-MB-OFDM and single-carrier DP-QPSK is experimentally compared at up to 500 Gb/s, showing great potential and similar performance over a 1000 km DCF-free G.652 line.

24 citations


Proceedings Article
14 May 2014
TL;DR: Five network scenarios are proposed that provide technical solutions to FMC use cases, targeting an optimal and seamless quality of experience for the end user together with an optimized network infrastructure ensuring increased performance, flexibility, reduced cost and reduced energy consumption.
Abstract: The drivers of Fixed and Mobile Convergence (FMC) are discussed. A reference framework for FMC proposed by the European project COMBO is then presented. Some use cases of FMC are described, showing the need for mutualization and convergence of fixed and mobile broadband networks. Five network scenarios providing technical solutions to the FMC use cases are proposed. They target an optimal and seamless quality of experience for the end user together with an optimized network infrastructure ensuring increased performance, flexibility, reduced cost and reduced energy consumption.

21 citations


Journal ArticleDOI
TL;DR: This paper proposes a distributed framework that uses Network Utility Maximization (NUM) to jointly optimize flow control, routing, scheduling, and relay assignment for multi-hop wireless cooperative networks with general flow and cooperative relay patterns.
Abstract: Cooperative communication has been shown to have great potential in improving wireless link quality. Incorporating cooperative communications in multi-hop wireless networks has been attracting growing interest. However, most current research focuses on either centralized solutions or schemes limited to specific network problems. In this paper, we propose a distributed framework that uses Network Utility Maximization (NUM) to jointly optimize flow control, routing, scheduling, and relay assignment for multi-hop wireless cooperative networks with general flow and cooperative relay patterns. We define two special graphs, Hyper Forwarding Graphs (HFG) and Hyper Conflict Graphs (HCG), to represent all possible cooperative routing policies and the interference relations among the cooperative relays, respectively. Based on HFG and HCG, a stochastic mixed-integer non-linear programming problem is formulated. We then propose lightweight algorithms to solve it in a fully distributed manner, and derive theoretical performance bounds for the proposed algorithms. Simulation results verify our theoretical analysis and reveal the significant performance gains of our framework in terms of throughput, flexibility, and scalability. To our knowledge, this is the first distributed cross-layer optimization framework for multi-hop wireless cooperative networks with general flow and cooperative relay patterns.
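
For readers unfamiliar with the NUM machinery, here is a minimal dual-decomposition sketch for the classical log-utility problem over a fixed routing matrix; it illustrates the distributed price/rate updates only, and does not model the paper's HFG/HCG construction or relay assignment.

```python
import numpy as np

# Minimal NUM sketch (not the paper's formulation): maximize sum_f log(x_f)
# subject to capacity constraints A @ x <= c on a fixed routing matrix,
# solved by dual (sub)gradient ascent on per-link prices.
A = np.array([[1.0, 1.0, 0.0],    # link-by-flow usage: 3 flows, 2 links
              [0.0, 1.0, 1.0]])
c = np.array([1.0, 2.0])          # link capacities
lam = np.ones(2)                  # dual prices, one per link
step = 0.01

for _ in range(5000):
    price = A.T @ lam                     # total price seen by each flow
    x = 1.0 / np.maximum(price, 1e-9)     # each flow maximizes log(x) - price * x
    lam = np.maximum(lam + step * (A @ x - c), 0.0)   # link price update

print(np.round(x, 3))   # approaches the proportionally fair allocation
```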

19 citations


Journal ArticleDOI
TL;DR: This paper analyzes stability in multi-skill workforce assignments of technicians and jobs and proposes an integer programming (IP) model to construct optimal stable assignments with several objectives.
Abstract: This paper analyzes stability in multi-skill workforce assignments of technicians and jobs. In our stability analysis, we extend the notion of blocking pairs, as stated in the marriage model of Gale and Shapley, to the multi-skill workforce assignment. It is shown that finding stable assignments is NP-hard. A special case turns out to be solvable in polynomial time. For the general case, we give a characterization of the set of stable assignments by means of linear inequalities involving binary variables. We propose an integer programming (IP) model to construct optimal stable assignments with several objectives. In the computational results, we observe that it is easier to attain stability in instances with easy jobs, and we consider a range of instances to show how quickly the solution time increases. Open questions and further directions are discussed in the conclusion.
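
For reference, the classical marriage-model stability condition that the paper extends can be written as linear inequalities over binary assignment variables; the multi-skill generalization in the paper is richer than this baseline.

```latex
% x_{ij} = 1 iff technician i is assigned job j; for every acceptable pair (i,j):
\[
  x_{ij} \;+\; \sum_{j' \succ_i j} x_{ij'} \;+\; \sum_{i' \succ_j i} x_{i'j} \;\ge\; 1 .
\]
% Either (i,j) is matched, or i gets a job it prefers to j, or j gets a
% technician it prefers to i, so (i,j) cannot form a blocking pair.
```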

17 citations


Proceedings ArticleDOI
01 Oct 2014
TL;DR: This work proposes and compares three machine learning-based methods for embedded Arabic text detection that are able to detect Arabic text regions without any prior knowledge and without any pre-processing.
Abstract: Text detection in videos is a primary step in any semantic-based video analysis system. In this work, we propose and compare three machine learning-based methods for embedded Arabic text detection. These methods are able to detect Arabic text regions without any prior knowledge and without any pre-processing. The first method relies on a convolutional neural network. The two other methods are based on a multi-exit asymmetric boosting cascade. The proposed methods have been extensively evaluated on a large database of Arabic TV channel videos. Experiments highlight a good detection rate for all methods, although the neural network-based method outperforms the others in terms of recall/precision and computation time.

16 citations


Proceedings ArticleDOI
06 Jul 2014
TL;DR: In this paper, the authors present the main concept and technology solutions envisioned by the EU funded project FOX-C, which targets the design, development and evaluation of the first functional system prototype of flexible add-drop and switching cross-connects.
Abstract: Flexible optical networking is identified today as the solution that offers smooth system upgradability towards Tb/s capacities and optimized use of network resources. However, in order to fully exploit the potential of flexible spectrum allocation and networking, a flexible switching node is required, capable of adaptively adding, dropping and switching tributaries with variable bandwidth characteristics from/to ultra-high-capacity wavelength channels at the lowest switching granularity. This paper presents the main concept and technology solutions envisioned by the EU-funded project FOX-C, which targets the design, development and evaluation of the first functional system prototype of flexible add-drop and switching cross-connects. The key developments enable ultra-fine switching granularity at the optical subcarrier level, providing end-to-end routing of any tributary channel with flexible bandwidth down to 10 Gb/s (or even lower) carried over wavelength superchannels, each with an aggregated capacity beyond 1 Tb/s.

Proceedings ArticleDOI
14 Sep 2014
TL;DR: This paper presents the strategy followed by the PERCOL team for speaker identification, based on enriching speaker diarization with features related to the "understanding" of the video scenes: text overlay transcription and analysis, automatic situation identification, the number of people visible, the TV set disposition and even the camera when available.
Abstract: This paper describes a multi-modal person recognition system for broadcast video developed for participation in the Defi-Repere challenge. The main track of this challenge targets the identification of all persons occurring in a video, either in the audio modality (speakers) or the image modality (faces). The system was developed by the PERCOL team, involving 4 research labs in France, and was ranked first at the 2014 Defi-Repere challenge. The main scientific issue addressed by this challenge is the combination of audio and video information extraction processes to improve extraction performance in both modalities. In this paper, we present the strategy followed by the PERCOL team for speaker identification, based on enriching speaker diarization with features related to the "understanding" of the video scenes: text overlay transcription and analysis, automatic situation identification (TV set, report), the number of people visible, the TV set disposition and even the camera when available. Experiments on the REPERE corpus show interesting results for the speaker identification system enriched by the scene understanding features, and demonstrate the usefulness of speaker identity for identifying faces.

Journal ArticleDOI
TL;DR: In this article, a multi-layer next generation PON prototype has been built and tested, to show the feasibility of extended hybrid DWDM/TDM-XGPON FTTH networks with resilient optically-integrated ring-trees architecture, supporting broadband multimedia services.

Journal ArticleDOI
TL;DR: In this paper, the authors analyze the effects of a margin squeeze ban on a vertically integrated monopolist that faces competition from an unintegrated downstream competitor, and show that for differentiated goods in the downstream market, a margin squeeze can be observed as the competitive outcome rather than as exclusionary conduct.
Abstract: This paper analyses the effects of banning pricing policies that lead to margin squeezes when the upstream good is imperfectly regulated. The analysis relies on a model with a vertically integrated upstream monopolist that faces competition from an unintegrated downstream competitor. It shows that for differentiated goods in the downstream market, a margin squeeze can be observed as the competitive outcome rather than as exclusionary conduct. If upstream market regulation is non-constraining, a margin squeeze ban induces the vertically integrated firm to increase its own downstream price (that is, a price umbrella), but also to review its upstream pricing behavior and reduce the upstream price charged to the retail competitor. This "decreasing rivals' costs effect" (DRC-effect) allows the integrated firm to maximise its profits given the constraint on the downstream price, and allows the downstream competitor to set a lower retail price. However, when constraining upstream regulation and a ban are implemented jointly, the DRC-effect vanishes and downstream prices may rise, leading to a decrease in consumer surplus. This analysis tends to back up the American way of handling margin squeezes in a regulated environment.

Proceedings ArticleDOI
01 Nov 2014
TL;DR: This paper proposes to extend a flat IPv6 mobility management architecture with a new functional block, namely LIMME (Location & Infrastructure Mobility Management Entity), composed of a Location Manager acting as location anchor point for cloud-based services, a Geographic Mobility Management function handling IPv6 mobility, and an Infrastructure Node selector, which selects a route based on geographical data and local infrastructure node conditions.
Abstract: In this paper, we provide the specification of a cloud-initiated Point-of-Interest (PoI) application, and illustrate its requirements for a convergence between IPv6 mobility management and Dedicated Short Range Communications (DSRC) geographic services. We propose to extend a flat IPv6 mobility management architecture with a new functional block, namely LIMME (Location & Infrastructure Mobility Management Entity), composed of three key functions: a Location Manager (LM) acting as location anchor point for cloud-based services, a Geographic Mobility Management (GMM) function acting as location proxy for the LM and handling IPv6 mobility, and an Infrastructure Node selector, which selects a route based on geographical data and local infrastructure node conditions. As a proof-of-concept, we implemented these extensions on the iTETRIS ITS simulation platform and illustrated their benefits in enhanced IPv6 mobility management and traffic offloading.

Proceedings Article
01 Jan 2014
TL;DR: In this article, the exposure index (EI) is defined to evaluate the average global exposure of the population in a specific geographical area, based on radio-planning predictions, realistic population statistics, user traffic data and Specific Absorption Rate (SAR) calculations.
Abstract: The EU project LEXNET is defining a new metric to evaluate the exposure induced by a wireless communication network as a whole. Both the exposure induced by base station antennas and the exposure induced by wireless devices are taken into account to evaluate the average global exposure of the population in a specific geographical area. The paper first explains the concept and gives the formulation of the Exposure Index (EI). The EI computation is then illustrated through simulation, showing how radio-planning predictions, realistic population statistics, user traffic data and Specific Absorption Rate (SAR) calculations can be combined to assess the index.
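
Schematically, the EI aggregates uplink (device) and downlink (base-station) contributions over the population, time and usage patterns. The form below is an illustrative reconstruction of that averaging idea only, not the exact LEXNET formulation; all symbols here are assumptions.

```latex
% Illustrative sketch (not the exact LEXNET formula): averaging over time
% periods t, population categories p, environments e and usages u.
\[
  \mathrm{EI} \;\approx\; \frac{1}{T}\sum_{t}\sum_{p}\sum_{e}\sum_{u}
    f_{t,p,e,u}\,\Big(
      \overline{\mathrm{SAR}}^{\mathrm{UL}}\;\overline{P}^{\mathrm{TX}}_{t,p,e,u}
      \;+\;
      \overline{\mathrm{SAR}}^{\mathrm{DL}}\;\overline{S}^{\mathrm{RX}}_{t,p,e,u}
    \Big),
\]
% where f is the fraction of the population in each configuration, P-bar the
% mean uplink transmit power, S-bar the mean downlink received power density,
% and the SAR terms are normalized dose coefficients.
```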

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the time variations of the total electron content in the South East Asian equatorial ionization anomaly over the period 2006-2011, using a latitudinal chain of GPS stations extending into the northern and southern hemispheres.

Proceedings ArticleDOI
04 May 2014
TL;DR: This work proposes an alternative solution that splits the temporal context into blocks, each learned with a separate deep model, and demonstrates that this approach significantly reduces the number of parameters compared to the classical deep learning procedure, and obtains better results on the TIMIT dataset.
Abstract: This paper follows the recent advances in speech recognition which recommend replacing the standard hybrid GMM/HMM approach with deep neural architectures. These models have been shown to drastically improve recognition performance, due to their ability to capture the underlying structure of data. However, they remain particularly complex since the entire temporal context of a given phoneme is learned with a single model, which must therefore have a very large number of trainable weights. This work proposes an alternative solution that splits the temporal context into blocks, each learned with a separate deep model. We demonstrate that this approach significantly reduces the number of parameters compared to the classical deep learning procedure, and obtains better results on the TIMIT dataset, among the best of the state of the art (with a 20.20% PER). We also show that our approach is able to assimilate data of different natures, ranging from wide- to narrow-bandwidth signals.
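
The block idea can be pictured with the following PyTorch sketch; the layer sizes, the 4/3/4-frame split of an 11-frame context, and the merge layer are assumptions for illustration, not the paper's exact topology.

```python
import torch
import torch.nn as nn

FRAME_DIM, N_PHONES = 40, 61   # illustrative feature and label sizes

class ContextBlock(nn.Module):
    """One small deep model for one block of the temporal context."""
    def __init__(self, n_frames):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_frames * FRAME_DIM, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU())

    def forward(self, x):               # x: (batch, n_frames * FRAME_DIM)
        return self.net(x)

class BlockwiseModel(nn.Module):
    """Left/centre/right context blocks, merged for phone classification."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList(ContextBlock(n) for n in (4, 3, 4))
        self.merge = nn.Linear(3 * 128, N_PHONES)

    def forward(self, left, centre, right):
        h = torch.cat([b(x) for b, x in zip(self.blocks, (left, centre, right))], dim=1)
        return self.merge(h)

model = BlockwiseModel()
logits = model(torch.randn(2, 4 * FRAME_DIM), torch.randn(2, 3 * FRAME_DIM),
               torch.randn(2, 4 * FRAME_DIM))   # (2, 61) phone logits
```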

Posted Content
TL;DR: It is proposed to update the definition of "Electronic Communications Services" by deleting the "conveyance of signals" criterion and limiting ECS to access services, which would clarify the classification of ICT services while ensuring the same level of consumer protection.
Abstract: A fully functioning ICT Single Market in the European Union requires a level playing field between all the actors in the Internet value chain. This is not currently the case, as telecommunication operators face more stringent rules than OTTs for the provision of their services. This imbalance does not provide the same level of protection for customers or the same guarantees for governments, and it generates competition distortion; hence it has to be overcome. To achieve this purpose, we propose to update the definition of "Electronic Communications Services" by deleting the "conveyance of signals" criterion and limiting ECS to access services. Such a change would clarify the classification of ICT services while ensuring the same level of consumer protection, since substitutable services would be subject to the same regulatory regime. This paper also presents a proposal for further reviewing the Telecom Package, transferring obligations from the sector-specific to the cross-sector framework, and underlines the need for efficient law enforcement and taxation. Considering the extent of the proposed changes, this paper is to be considered as a basis for further analysis.

Proceedings ArticleDOI
14 Sep 2014
TL;DR: The aim is to integrate speaker information and lexical information within a single cohesion value, based on a lexical cohesion system and an approach that directly integrates the speaker distribution when computing cohesion.
Abstract: In this paper, we introduce the notion of speech cohesion for topic segmentation of spoken content. The aim is to integrate speaker information and lexical information within a single cohesion value. Building on a lexical cohesion system, we propose an approach that directly integrates the speaker distribution when computing cohesion. A potential boundary is retained if the joint distribution of terms and speakers differs enough from one side of the boundary to the other. Beyond speaker distribution, we also propose to take speaker identification into account and to compare speaker identities with identities mentioned in the spoken content, in order to reinforce the cohesion of a topic segment. Experiments run on three corpora of various broadcast news formats, collected from 9 French TV channels, show a significant improvement in the overall topic segmentation process.
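
One way to picture the joint term-and-speaker cohesion test: represent each side of a candidate boundary by a concatenated term/speaker distribution and keep the boundary when cross-side similarity is low. The sketch below is an illustrative reading of the idea, not the paper's actual scoring; the closed vocab/speaker lists are toy assumptions.

```python
import numpy as np

def side_distribution(utterances, vocab, speakers):
    """Concatenated, L2-normalized term and speaker counts for one side.

    utterances: list of (speaker, [words]); vocab and speakers are closed
    lists of known terms/speakers (a toy assumption).
    """
    v = np.zeros(len(vocab) + len(speakers))
    for spk, words in utterances:
        v[len(vocab) + speakers.index(spk)] += 1.0
        for w in words:
            v[vocab.index(w)] += 1.0
    return v / max(np.linalg.norm(v), 1e-9)

def is_topic_boundary(left, right, vocab, speakers, threshold=0.5):
    """Keep the boundary when joint term+speaker cohesion across it is low."""
    cos = side_distribution(left, vocab, speakers) @ side_distribution(right, vocab, speakers)
    return cos < threshold
```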

Book ChapterDOI
01 Jan 2014
TL;DR: A criterion that evaluates a given discretization of such variables located in a non-target table is proposed, and a simple optimization algorithm is described to find the best equal-frequency discretization with respect to the proposed criterion.
Abstract: In Multi-Relational Data Mining (MRDM), data are represented in a relational form where the individuals of the target table are potentially related to several records in secondary tables through one-to-many relationships. Variable pre-processing (including discretization and feature selection) within this multiple-table setting differs from the attribute-value case. Besides the target variable information, one should take into account the relational structure of the database. In this paper, we focus on numerical variables located in a non-target table. We propose a criterion that evaluates a given discretization of such variables. The idea is to summarize, for each individual, the information contained in the secondary variable by a feature tuple (one feature per interval of the considered discretization). Each feature represents the number of values of the secondary variable falling in the corresponding interval. These count features are jointly partitioned by means of data grid models in order to obtain the best separation of the class values. We describe a simple optimization algorithm to find the best equal-frequency discretization with respect to the proposed criterion. Experiments on real and artificial data sets reveal that the discretization approach helps one to discover relevant secondary variables.
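
The count-feature construction is easy to picture on a toy one-to-many table; the sketch below uses equal-frequency binning via pandas, with invented column names and values for illustration.

```python
import pandas as pd

# Toy secondary table: each target individual has several numeric records.
secondary = pd.DataFrame({
    "individual": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "value":      [0.2, 1.5, 3.1, 0.7, 2.2, 0.1, 0.4, 2.9, 3.5],
})

# Equal-frequency discretization of the secondary variable into 3 intervals.
secondary["interval"] = pd.qcut(secondary["value"], q=3, labels=False)

# One count feature per interval, one row per individual of the target table.
counts = (secondary.pivot_table(index="individual", columns="interval",
                                aggfunc="size", fill_value=0)
          .add_prefix("count_bin_"))
print(counts)
```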

Journal ArticleDOI
TL;DR: A novel unsupervised visual reranking method is proposed, termed ranking via convolutional neural networks (RankCNN), which integrates deep learning with pseudo preference feedback.

Proceedings ArticleDOI
04 Dec 2014
TL;DR: This paper presents some of the major modifications introduced into the 3GPP 3D MIMO channel and antenna models, and presents two RSRP computation models that account for the effects of SSPs, evaluating their performance through system-level simulations.
Abstract: Recently, the 3GPP RAN WG1 (RAN1) started working on a Study Item that focuses on introducing 3D MIMO channel models for system-level performance evaluation of 3D MIMO and Full-Dimension (FD) MIMO features in LTE-A. In this Study Item, simulation scenarios and environments have been defined and agreed upon for the extension of the current 3GPP/ITU channel model. In this paper, we present some of the major modifications introduced into the 3GPP 3D MIMO channel and antenna models. We also present the problem of computing the Reference Signal Received Power (RSRP) while accounting for the Small Scale Parameters (SSPs), which relates closely to the problem of determining the serving cell. We present two RSRP computation models that account for the effects of SSPs and evaluate their performance through system-level simulations. We observe that the behaviour of the 3D MIMO channel model is significantly impacted by how the RSRP is computed. We provide some preliminary results on the 3D MIMO channel and investigate the different parameters impacting its behaviour.

Book ChapterDOI
08 Dec 2014
TL;DR: A novel modeling for summary creation using constraint satisfaction programming (CSP) is proposed, which allows users to easily modify the expected summary depending on their preferences or the video type.
Abstract: This paper focuses on automatic video summarization. We propose a novel modeling of summary creation using constraint satisfaction programming (CSP). The proposed modeling aims to give the summarization method more flexibility. It allows users to easily modify the expected summary depending on their preferences or the video type. Using this new modeling, constraints become easier to formulate. Moreover, the CSP solver explores the search space more efficiently, providing better solutions more quickly. Our model is evaluated and compared with an existing modeling on tennis videos.
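
To give a feel for the constraint-based formulation, here is a toy subset-selection model with a duration constraint and a non-adjacency constraint; a real CSP solver would use propagation and search, whereas this sketch simply enumerates. Segments, scores and constraints are invented for illustration.

```python
from itertools import combinations

# Toy shot list: (id, duration in s, interest score).
segments = [(0, 30, 5), (1, 20, 3), (2, 40, 8), (3, 10, 2), (4, 25, 6)]
MAX_DURATION = 70

def feasible(subset):
    """Constraints: total duration bound, no two adjacent segments."""
    total = sum(d for _, d, _ in subset)
    ids = sorted(i for i, _, _ in subset)
    no_adjacent = all(b - a > 1 for a, b in zip(ids, ids[1:]))
    return total <= MAX_DURATION and no_adjacent

# Objective: maximize total interest over all feasible subsets.
best = max((s for r in range(1, len(segments) + 1)
            for s in combinations(segments, r) if feasible(s)),
           key=lambda s: sum(v for _, _, v in s))
print([i for i, _, _ in best])
```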

Proceedings ArticleDOI
24 Nov 2014
TL;DR: The interferometric technique demonstrated enables a fully flexible node, implementing the extraction, drop and addition of individual sub-channels within an all-optical OFDM superchannel.
Abstract: We present the first experimental implementation of an all-optical ROADM scheme for routing individual channels within an all-optical OFDM superchannel. The interferometric technique demonstrated enables a fully flexible node, implementing the extraction, drop and addition of individual sub-channels.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that although market dominance is not illegal by itself, the European Commission's doctrine regards the exercise of market power, i.e. charging supra-competitive prices, as economically inefficient and as leading to inefficient market outcomes.
Abstract: 1. Although market dominance is not illegal in the European Union, the European Commission's doctrine regards the exercise of market power as economically inefficient. Its economic policy is meant to push markets towards perfect competition, but ignores that the investments required for dynamic efficiency are financed by the profits they create. Although under European law market power is not illegal by itself, the economic doctrine of the Commission considers that the exercise of market power, i.e. charging supra-competitive prices, leads to inefficient market outcomes. The economic policy of the European Union is a competition policy which aims to make markets tend towards a perfectly competitive frame, where profit margins are eliminated and prices tend towards marginal costs. The Commission monitors and controls market structures to ensure that competition drives growth by selecting the most efficient companies and sectors. The Commission regards competition as the major driver of competitiveness and growth, provided it is supported by a competition policy which makes markets efficient under the criteria of a static economic analysis. Its purpose is to raise competition intensity in the intermediate markets to allow producers of final goods to benefit from lower input prices and improve their efficiency. In addition, it aims at promoting the mobility of factors of production, transferring them from the less efficient to the most efficient and productive sectors. The Commission argues that under the guidance of competition authorities, competition leads to cost efficiency, raises the amount of resources available to leading sectors and at the same time promotes investment through the "escape from competition" effect. This doctrine has, however, important shortcomings. It ignores the fact that lower profits can hamper investment and that companies with negative expectations of profitability will not invest. The European Commission is unclear about whether competition should be seen as a process or a steady state. When politically advocating its competition policy, the European Commission depicts competition as an evolutionary dynamic that promotes efficiency, investment and innovation. However, when actually implementing its policy, the European Commission aims to make markets tend towards a steady state of maximum static competitive intensity (with no technological progress) by eliminating market power and ensuring the most perfectly competitive frame possible. The pursuit of the maximum level of static competitive intensity might then deter investment, which is the driver of dynamic efficiency, and eventually economic growth.

2. The European competition authorities' policy is to prevent the exercise of market power, whereas the purpose of the US competition authorities is to maintain undertakings' incentives to invest in order to gain market power. The European Commission intervenes ex ante through policies promoting market entry in order to prevent the formation of market power likely to be exercised, and relies on antitrust action to remove it ex post. In their practical approach, the competition authorities ban mergers that bring market power, arguing that intermediate and final consumers would face higher prices and lower innovation. They approve consolidations provided merged companies commit to transferring productive assets to direct competitors.
They consider that temporary rents from investment and innovation efforts distort competition because they grant dominant positions and thus have to be tackled by the enactment of competition law. As the practical approach of competition authorities is to reach the maximum level of static competitive intensity (where prices equal marginal production costs), their focus is on the upward price pressures that mergers would trigger in the short term. The European competition authorities do not spontaneously consider the positive effects on investment and efficiency that could stem from a merger. They are sceptical about the arguments put forward by companies in support of these effects. Therefore, when evaluating mergers, the authorities do not consider the value that corporate investment in quality and quantity (stemming from higher expected profitability) can bring to the consumer. The same reasoning is applied to the analysis of abuse of dominant position. The appraisal of market dominance and of the exercise of market power by the European authorities might hamper the incentives of private companies to invest and innovate. The US competition authorities apply a different antitrust policy with regard to maintaining a competitive market structure. Contrary to the European competition authorities, they do not consider the dominant firm liable for the competitive market structure or responsible for maintaining its competitors on the market. They give priority to returns on investment and incentives to invest over forcing companies to share their assets with their competitors to preserve static competition. They thus favour the growth of market players (and hence market power) over maintaining a perfectly competitive market structure. The US competition authorities and policymakers consider market power, in the form of mark-ups over competitive prices, both a condition for returns on prior investment and a condition for future investments. As a result, they are more likely to foster incentives to invest and innovate, as investors do not necessarily expect both their assets and their returns to be transferred to competitors. In Europe, the willingness of competition authorities to eliminate profit margins, to limit the capital intensity favoured by mergers, and to ban large companies from acquiring competitive advantages can deprive these companies of the prospects which motivate their investments, as expectations regarding profitability become negative. This has an overall deterrent effect on investment and innovation, and in turn undermines economic growth in the European Union.

3. The European Commission acknowledges investment as a driver of macroeconomic growth undermined by poor profitability, but ignores this point concerning the provision of intermediary goods by high-technology industries in the internal market. The European Commission acknowledges that the European Union's macroeconomic weaknesses, although worsened by the financial crisis, have structural causes. The European Union has slower productivity growth than the United States, especially in high-tech sectors, and a weaker industrial sector. According to the Commission, Europe has been losing competitiveness because of high labour costs and companies' difficulties in specialising in fast-growing sectors, exporting their goods and services, and accessing sources of funding.
To restore competitiveness, productivity and growth in the Union, the Commission recommends strengthening high-productivity industries and companies exposed to international competition. It posits that raising the share of the manufacturing sector in aggregate added value should raise productivity gains in the global economy. The policy of the European Union consists in fostering transfers of inputs from the non-tradable sector (mainly services) to the tradable sector (industry). Structural reforms are thus designed to reduce labour costs through tax shifts and to decrease the prices of intermediate inputs in market services (sheltered from international competition) through stronger competitive pressure. The Commission posits that exporting industrial companies would then be able to restore their profit margins and therefore the investment capacities needed to bring technological progress to the internal market. The Commission thereby recognises both the need for profit margins to finance current corporate investment and the need for sufficient expected returns to commit to new investments. However, it still enforces the ban on the exercise of market power as the essential component of its competition policy, relying on the "escape-competition effect" to foster investment when competition is intense. The Commission thereby ignores that investment in market services can be slowed down by negative expectations of revenues and profit margins, which will limit the capacity of companies to finance themselves. The Commission suggests that increased competition in the banking sector might raise the credit supply and overcome the shortage of internal resources, but by doing so the Commission seems to confuse financing issues with profitability issues. For the Commission, internal resources to finance investment will not come from competitive advantages (as a fair reward for investment) but from lower input costs due to increased competitive pressure.

4. Economic growth results from improved productivity due to investments incorporating technical progress into the production system. Investment decisions by market players are subject to expected profits and cannot be achieved when competition intensity exceeds its optimal threshold. A growth-supportive competition policy should adjust competitive intensities to maximise the contribution that investments in each industry provide to productivity and growth. The European competition authorities advocate monitoring markets to ensure that an evolutionary process leads companies to invest and innovate. But in the Commission's doctrine, technological progress is regarded as exogenous to the market. The European doctrine focuses on static efficiency, making prices converge towards marginal costs, thereby eliminating mark-ups over steady-state competitive prices. By doing so, it rejects the endogenous drivers of dynamic efficiency. Endogenous growth theory has shown that technological progress is not brought to the market, but is instead created by the market. It is the result of private investment decisions. It stems from the accumulation of knowledge.

Proceedings ArticleDOI
06 Jul 2014
TL;DR: In this article, the authors show how the global capacity of currently deployed 100 Gbps multi-layer transport networks can be further increased by introducing sub-wavelength switching on future flexible optical add-drop multiplexers (FOADM).
Abstract: In this paper, we show how the global capacity of currently deployed 100 Gbps multi-layer transport networks can be further increased by introducing the sub-wavelength switching concept on future flexible optical add-drop multiplexers (FOADM). This concept relies on the capability of the flexible node to adjust the optical channel characteristics to the service demand requirements. The technical solution uses coherent orthogonal frequency-division multiplexing as the modulation to manage sub-wavelength optical switching with a very fine granularity. Multi-band techniques create super-channels that gather sub-wavelengths together in order to increase the optical channel transmission capacity up to ultra-high data rates such as 1 Tbps. After demonstrating the performance of the solution, we evaluate, through typical use cases, the opportunity to introduce such a flexible networking concept in future multi-layer transport networks and detail its advantages in terms of aggregation. Beyond techno-economic comparisons, the study integrates operational constraints and outlines the basis of future optical network planning.

Proceedings ArticleDOI
14 Sep 2014
TL;DR: An index to evaluate HMM-based speech synthesis systems is proposed, which takes into account the relative variation of the KLDs on test sets of synthetic and natural speech and correlates inversely with the result of the MOS (mean opinion score) perceptual test.
Abstract: In this paper, we propose a new objective evaluation method for hidden Markov model (HMM)-based speech synthesis using the Kullback-Leibler divergence (KLD). The KLD is used to measure the difference between the probability density functions (PDFs) of the acoustic feature vectors extracted from natural training and synthetic speech data. For the evaluation, a Gaussian mixture model (GMM) is used to model the distribution of acoustic feature vectors, including the fundamental frequency (F0). Continuous F0, obtained with linear interpolation, is used in the evaluation. In essence, the KLD is the expectation of the logarithmic difference between the likelihoods calculated on training and synthetic speech. This likelihood difference is appropriate for characterizing the quality of an HMM-based speech synthesis system in generating synthetic speech using a maximum likelihood criterion. The objective evaluation is tested on 3 different HMM-based speech synthesis systems which use multi-space distribution (MSD) to model discontinuous F0. These systems are trained on a common speech corpus in French. We propose an index to evaluate HMM-based speech synthesis systems which takes into account the relative variation of the KLDs on test sets of synthetic and natural speech. This index correlates inversely with the result of the MOS (mean opinion score) perceptual test.
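
Since the KLD between two GMMs has no closed form, it is typically estimated by sampling. The standard Monte-Carlo estimator consistent with the description above is given below; the exact per-stream weighting used in the paper may differ.

```latex
% p: GMM fitted to acoustic features of natural training speech;
% q: GMM fitted to acoustic features of synthetic speech.
\[
  D_{\mathrm{KL}}(p \,\|\, q)
  = \mathbb{E}_{p}\!\left[\log \frac{p(\mathbf{x})}{q(\mathbf{x})}\right]
  \approx \frac{1}{N}\sum_{n=1}^{N} \log \frac{p(\mathbf{x}_n)}{q(\mathbf{x}_n)},
  \qquad \mathbf{x}_n \sim p .
\]
```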

Journal ArticleDOI
TL;DR: KRAMER, a recommender system that enriches the social computing principle with a notion of situation-awareness, is presented, together with its evaluation in the form of a special user test-game.
Abstract: In modern societies the process of communication is greatly influenced by information technology and computer systems. Social interactions in both real-life and cyber communities are frequently shaped by two main features of social computing tools: (1) sharing a great deal of information with whole groups of consumers, and (2) deriving collective intelligence through collaborative information evaluation, discussion, annotation, etc. The latter is further supported by reasoning mechanisms implemented in software to derive more pertinent and synthesized information for its consumers, e.g. recommendations. As a consequence, communities are empowered to reach better-informed conclusions and decisions than ever before. In our work, we consider the situations people find themselves in as pieces of information that frequently drive decision making in classical human relations. We argue that augmenting social intelligence can be achieved by both (1) facilitating the sharing of context among community members, and (2) encouraging their collaborative effort to learn the importance of certain situations. We present KRAMER, a recommender system that enriches the social computing principle with this notion of situation-awareness. In this paper, we discuss our system with emphasis on its social computing mechanisms. We also present its evaluation in the form of a special user test-game.