
Showing papers by "Orange S.A. published in 2015"


Journal ArticleDOI
TL;DR: In this paper, the authors address the inclusion of microencapsulated phase change materials (PCM), up to 29% by volume, in concretes and mortars.

139 citations


Proceedings ArticleDOI
23 Aug 2015
TL;DR: The dataset is composed of a large number of manually annotated text images extracted from Arabic TV broadcasts and is the first public dataset dedicated to the development and evaluation of video Arabic OCR techniques.
Abstract: This paper proposes a dataset, called ALIF, for Arabic embedded text recognition in TV broadcast. The dataset is publicly available for non-commercial use. It is composed of a large number of manually annotated text images that were extracted from Arabic TV broadcasts. It is the first public dataset dedicated to the development and the evaluation of video Arabic OCR techniques. Text images in the dataset are highly variable in terms of text characteristics (fonts, sizes, colors…) and acquisition conditions (background complexity, low resolution, non-uniform luminosity and contrast…). Moreover, an important part of the dataset is finely annotated, i.e. the text in an image is segmented into characters, paws (pieces of Arabic words) and words, and each segment is labeled. The dataset can hence be used for both segmentation-based and segmentation-free text recognition techniques. In order to illustrate how the ALIF dataset can be used, the results of an evaluation study that we conducted on different techniques for Arabic text recognition are also presented.

45 citations


Proceedings ArticleDOI
23 Aug 2015
TL;DR: This paper focuses on recognizing Arabic text embedded in videos, using Convolutional Neural Networks and Deep Auto-Encoders to learn robust features and a connectionist recurrent approach to predict the correct transcription of an input image from the associated sequence of features.
Abstract: This paper focuses on recognizing Arabic embedded text in videos. The proposed methods proceed without applying any prior pre-processing operations or character segmentation. Difficulties related to the video or text properties are addressed using a learned robust representation of the input text image. This is performed using Convolutional Neural Networks and Deep Auto-Encoders. Features are computed using a multi-scale sliding window scheme. A connectionist recurrent approach is then used; it is trained to predict correct transcriptions of the input image from the associated sequence of features. The proposed methods are extensively evaluated on a large video database recorded from several Arabic TV channels.

39 citations
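To make the kind of pipeline described above concrete, here is a minimal, hypothetical PyTorch sketch of a segmentation-free recognizer that couples convolutional features with a bidirectional recurrent layer trained with CTC. It illustrates the general technique only and is not the authors' implementation, which also relies on deep auto-encoders and a multi-scale sliding window.

# Illustrative sketch (not the authors' code): a CRNN-style recognizer combining
# convolutional feature extraction with a recurrent layer trained with CTC,
# as commonly used for segmentation-free text recognition.
import torch
import torch.nn as nn

class TextLineRecognizer(nn.Module):
    def __init__(self, n_classes, n_hidden=128):
        super().__init__()
        # Convolutional feature extractor applied over the whole text-line image;
        # each column of the resulting feature map plays the role of one window.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Bidirectional recurrent layer mapping the feature sequence to
        # per-frame character scores; CTC aligns them with the transcription.
        self.rnn = nn.LSTM(64 * 8, n_hidden, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * n_hidden, n_classes + 1)  # +1 for the CTC blank

    def forward(self, images):                  # images: (batch, 1, 32, width)
        f = self.features(images)               # (batch, 64, 8, width/4)
        f = f.permute(0, 3, 1, 2).flatten(2)    # (batch, width/4, 64*8)
        out, _ = self.rnn(f)
        return self.classifier(out)             # per-frame class scores

# Training would combine this module with torch.nn.CTCLoss on labelled text-line images.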


Journal ArticleDOI
TL;DR: In this article, the authors explore the presence of network externalities in the diffusion of MSN in France, the UK, the US and Germany, and compare estimates of two diffusion models: the Bass model and the Bemmaor model.

22 citations


Journal ArticleDOI
TL;DR: In this paper, advanced electron microscopy techniques are combined for the first time to measure the composition, strain, and optical luminescence, of InGaN/GaN multi-layered structures down to the nanometer scale.
Abstract: Advanced electron microscopy techniques are combined for the first time to measure the composition, strain, and optical luminescence, of InGaN/GaN multi-layered structures down to the nanometer scale. Compositional fluctuations observed in InGaN epilayers are suppressed in these multi-layered structures up to a thickness of 100 nm and for an indium composition of 16%. The multi-layered structures remain pseudomorphically accommodated on the GaN substrate and exhibit single-peak, homogeneous luminescence so long as the composition is homogeneous.

20 citations


Proceedings ArticleDOI
11 May 2015
TL;DR: The approach supplies extra network capacity in dense traffic areas using directive beams, the Virtual Small Cells (VSCs), which are created at the macrocell by activating multiple antenna elements and replace picocells, thereby reducing OPEX and CAPEX.
Abstract: In 4G, the introduction of small cells is the solution to cope with dense traffic areas. The deployment of new equipment implies a non-negligible cost in terms of backhauling, site acquisition, maintenance and network energy consumption. Advances in large antenna array research make it possible to use massive antenna elements at the macrocell. Our approach consists in supplying the extra network capacity at dense traffic areas using directive beams: the Virtual Small Cells (VSCs). VSCs are created at the macrocell by the activation of multiple antenna elements and replace the use of picocells, consequently reducing OPEX and CAPEX expenses. VSCs are reconfigurable according to traffic fluctuations, can increase throughput by 50% compared with a macrocell-only deployment, and reduce the overall network power consumption.

17 citations



Proceedings ArticleDOI
06 Sep 2015
TL;DR: A new way to integrate semantic relations into topic segmentation is proposed by defining a notion of semantic cohesion, together with a new protocol for gathering relevant data for computing semantic relations; a small set of diachronic data proves more relevant for the task than a large amount of general or asynchronous data.
Abstract: This paper proposes a new way to integrate semantic relations into a topic segmentation process by defining the notion of semantic cohesion. In the context of a sliding-window automatic topic segmentation algorithm, semantic relations are incorporated in the similarity measure between adjacent blocks. Additionally, in the context of TV Broadcast News topic segmentation, we propose a new protocol to gather relevant data for computing semantic relations, showing that a small set of diachronic data can be more relevant for the task than a large amount of general or asynchronous data. Experiments on a corpus of 86 French TV Broadcast News shows recorded during one week, with semantic relations estimated from text articles collected from the Google News homepage over the same period, show a significant improvement in topic segmentation performance.

12 citations
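As an illustration of how semantic relations can enter a sliding-window similarity measure, the following small Python sketch is written in the spirit of the approach above; the block similarity, window size and relatedness dictionary are illustrative placeholders, not the paper's actual measure.

# Illustrative sketch (not the authors' system): a TextTiling-style sliding-window
# segmenter in which the block-similarity measure is augmented with pairwise
# semantic relations (here an arbitrary word-to-word relatedness dictionary).
import numpy as np
from collections import Counter

def block_similarity(block_a, block_b, relatedness, alpha=0.5):
    """Cosine similarity on word counts plus an average semantic-relation score."""
    ca, cb = Counter(block_a), Counter(block_b)
    vocab = set(ca) | set(cb)
    va = np.array([ca[w] for w in vocab], dtype=float)
    vb = np.array([cb[w] for w in vocab], dtype=float)
    lexical = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))
    # Semantic cohesion: mean relatedness over cross-block word pairs.
    pairs = [(wa, wb) for wa in set(block_a) for wb in set(block_b)]
    semantic = np.mean([relatedness.get((wa, wb), 0.0) for wa, wb in pairs]) if pairs else 0.0
    return alpha * lexical + (1 - alpha) * semantic

def segment(sentences, relatedness, window=3):
    """Return boundary indices where cohesion between adjacent blocks reaches a local minimum."""
    scores = []
    for i in range(window, len(sentences) - window):
        left = [w for s in sentences[i - window:i] for w in s]
        right = [w for s in sentences[i:i + window] for w in s]
        scores.append(block_similarity(left, right, relatedness))
    return [i + window for i in range(1, len(scores) - 1)
            if scores[i] < scores[i - 1] and scores[i] < scores[i + 1]]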



Journal ArticleDOI
TL;DR: The average bit error rate is used to analyze exactly the performance of the optimum hybrid multi-watermarking decoders, and the experimental results demonstrate the robustness of the proposed hybrid scheme and the validity of the theoretical analysis.
Abstract: Hybrid multi-watermarking uses different embedding rules to embed one or multiple watermarks into the same region of the cover, imperceptibly and alternately. It is considered a potential way to implement copyright protection of the cover and the tracing of illegal distribution in multi-party content distribution. However, there are some open issues in creating a good hybrid multi-watermarking scheme, including the combination among multiple embedding rules and the combination among multiple watermarks. In this paper, we establish two novel hybrid multi-bit additive multi-watermarking models, and a novel hybrid multi-watermarking scheme based on these models, which embeds the multiple watermarks into DWT coefficients controlled by a secret key. Next, the hybrid multi-watermarking decoders, i.e., optimum and locally optimum, are derived based on the minimum Bayesian risk criterion and the generalized Gaussian distribution. To evaluate the schemes' performance, the average bit error rate is used to analyze exactly the performance of the optimum hybrid multi-watermarking decoders. Furthermore, the security of the proposed hybrid scheme is compared with that of the existing schemes. Finally, the robustness of the proposed hybrid scheme and the validity of the theoretical analysis are demonstrated by the experimental results.

9 citations
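For illustration, the toy Python sketch below shows generic additive watermark embedding in key-selected DWT coefficients; it covers only the basic embedding step, not the paper's hybrid multi-watermark models or optimum Bayesian decoders, and all parameters are placeholders.

# Illustrative sketch of additive embedding into DWT detail coefficients,
# keyed by a secret integer seed; purely a toy example.
import numpy as np
import pywt  # PyWavelets

def embed_additive(image, watermark_bits, key, strength=2.0):
    """Embed +/-1 bits additively into the horizontal detail sub-band.
    Requires enough coefficients: 64 per bit in this toy setting."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    rng = np.random.default_rng(key)
    flat = cH.flatten()
    # Secret-key-selected coefficient positions, one spread sequence per bit.
    idx = rng.choice(flat.size, size=(len(watermark_bits), 64), replace=False)
    for b, positions in zip(watermark_bits, idx):
        sequence = rng.standard_normal(64)
        flat[positions] += strength * (2 * b - 1) * sequence
    cH = flat.reshape(cH.shape)
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')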



Proceedings ArticleDOI
01 Sep 2015
TL;DR: In this paper, the authors compare the potential economic benefits of an AOTG-capable network with legacy solutions in a network planning scenario, quantify the spectrum and total equipment cost savings resulting from the use of AOTG, and assess the profitability of the proposed solution.
Abstract: Thanks to recent advances in switching technology enabled by novel ultra-fine granularity filters, it is now possible to perform all-optical traffic grooming (AOTG) of low-rate signals over flexi-grid networks. To evaluate the potential benefits of the introduction of this new capability, a precise techno-economic analysis, taking into account the cost of the enabling transceivers and ROADMs, needs to be carried out. In this paper, we compare the potential economic benefits of the proposed AOTG-capable network to legacy solutions in a network planning scenario. The paper quantifies the spectrum and total equipment cost savings resulting from the use of AOTG, and assesses the profitability of the proposed solution.

Patent
19 Feb 2015
TL;DR: In this paper, access to at least one digital content item is controlled as a function of at least one access criterion, where the access criterion is stored in the terminal as a function of an identifier.
Abstract: Control of access to at least one digital content is managed as a function of at least one access criterion. The digital content is transmitted to at least one terminal in the form of a data stream. The access criterion is stored in the terminal as a function of an identifier. The terminal receives the data stream in association with a control message indicating the identifier. It then retrieves the stored access criterion as a function of the identifier received in the control message. Finally, it verifies whether the stored access criterion is satisfied in order, where appropriate, to authorize access to the content.
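A minimal sketch of the flow described above, with hypothetical names and a placeholder criterion check, could look as follows in Python:

# Illustrative sketch of the access-control flow from the patent abstract:
# the terminal stores an access criterion under an identifier, then resolves
# the identifier carried in a control message before releasing the content.
from dataclasses import dataclass

@dataclass
class ControlMessage:
    criterion_id: str      # identifier referencing a previously stored criterion

class Terminal:
    def __init__(self):
        self._criteria = {}  # criterion_id -> callable returning True when satisfied

    def store_criterion(self, criterion_id, criterion):
        self._criteria[criterion_id] = criterion

    def receive(self, stream, message: ControlMessage):
        criterion = self._criteria.get(message.criterion_id)
        if criterion is not None and criterion():
            return stream                 # access authorized
        raise PermissionError("access criterion not satisfied")

# Example: a subscription flag stored ahead of time, checked when the stream arrives.
terminal = Terminal()
terminal.store_criterion("premium", lambda: True)
content = terminal.receive(b"...data stream...", ControlMessage("premium"))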

Journal ArticleDOI
TL;DR: In this paper, the authors studied congestion periods in a finite fluid buffer when the net input rate depends upon a recurrent Markov process; congestion occurs when the buffer content is equal to the buffer capacity.
Abstract: We study congestion periods in a finite fluid buffer when the net input rate depends upon a recurrent Markov process; congestion occurs when the buffer content is equal to the buffer capacity. Similarly to O’Reilly and Palmowski (2013), we consider the duration of congestion periods as well as the associated volume of lost information. While these quantities are characterized by their Laplace transforms in that paper, we presently derive their distributions in a typical stationary busy period of the buffer. Our goal is to compute the exact expression of the loss probability in the system, which is usually approximated by the probability that the occupancy of the infinite buffer is greater than the buffer capacity under consideration. Moreover, by using general results of the theory of Markovian arrival processes, we show that the duration of congestion and the volume of lost information have phase-type distributions.
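In symbols, the standard approximation mentioned in the abstract (which the paper replaces with an exact expression) reads, in LaTeX notation:

P_{\mathrm{loss}}(B) \;\approx\; \mathbb{P}\bigl(Q_{\infty} > B\bigr)

where B is the buffer capacity and Q_{\infty} is the stationary occupancy of the corresponding infinite buffer.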

Journal ArticleDOI
Luisa Rossi
TL;DR: In this article, the authors propose a new classification of digital services and principles that should drive the reorganisation of associated obligations, such as transparency and non-discrimination, security, privacy, data retention, emergency services, interoperability and portability.
Abstract: The opinions expressed in this article are those of the author and do not necessarily represent the positions of Orange. The internet offers an ever richer choice of digital services delivered via telecommunication networks that are overall of great benefit to consumers in terms of choice, but that require the updating of the current legal framework to ensure effective customer protection and to preserve the public interest, especially concerning such transverse issues as transparency, non-discrimination, security and privacy. The market is in transition, notably for voice and messaging services, with an increasing number of services such as those provided by “Over The Top” (OTT) internet players, existing alongside traditional services still provided by telecoms operators and other services that result from partnerships between telcos and OTT providers or might even be produced by telcos in an OTT-like fashion. The time when there was a clear distinction between electronic communication services (ECS), as produced by telcos, and information society services (ISS), only produced by OTTs, is over. The old rules are no longer adequate and yet still apply, while new issues are not addressed and require action. This is why it is now important for the legislative framework and regulatory practices to embrace this phase of development. The European Commission has acknowledged the need for reforms and now needs to adopt a comprehensive approach to this task in order to fulfil the promise of creating the right conditions to stimulate growth in the digital market in Europe. The starting point for the reforms should be the creation of a digital services category with the reclassification of traditional communication services, followed by the reorganisation of the associated obligations such as transparency and non-discrimination, security, privacy, data retention, emergency services, interoperability and portability. Hence, digital services would be subject to a common set of rules enshrined in a new horizontal European legislation, whichever the provider or the technology used. Such an approach should be preferred to sector-specific rules. This new horizontal text should be combined with the review of the current electronic communications framework to limit it to Electronic Communication Networks (ECN) and Internet Access Services (IAS), excluding all other communication services, such as traditional telephony, that could then be covered by the new horizontal instrument, as would VoIP telephony. An assessment of the impact of these changes on the Framework Directives and other related regulations shows that they would lead to welcome clarification and updates. This paper recommends a new classification of digital services and proposes principles that should drive the reorganisation of associated obligations. It provides a future structure for clear and effective provisions supporting consumer protection, the defence of public interests and fair competition.

Journal ArticleDOI
TL;DR: In the European Union, the objective of competition policy is to unconditionally increase or at least maintain static competition intensity, irrespective of the market situation and the characteristics of industries, notably in terms of technological evolution.
Abstract: Competition policy in the European Union is built on the principle that the exercise of market power is a source of inefficiency and as such should be prevented or eliminated. According to this doctrine, effective competition exempt from market power is the source of economic growth. Observation shows that the objective of the European competition policy is to unconditionally increase or at least maintain static competition intensity, irrespective of the market situation and the characteristics of industries, notably in terms of technological evolution. The European Commission acknowledges that the low productivity in the Union is mainly due to insufficient investment and innovation. However, to restore investment, it strives to increase competition to eliminate mark-ups over competitive prices. This interpretation of competition policy is specific to the European Union. Following the influence of the Chicago School, the US competition authorities consider that market power is both a necessary incentive to invest and a fair return on investment. It is well-established in economic theory that investment in innovation is endogenous to the market structure and that it might decrease when static competition exceeds an optimal level, a fact that European competition policy fails to take into account in its unconditional quest for maximum static efficiency. A review of this policy, taking into account the outcome of the US approach and the lessons of economic growth theories, appears necessary in order to tackle the structural weakness of productivity growth in the Union.

Proceedings Article
18 Mar 2015
TL;DR: A strategy for speaker identification is presented, based on enriching speaker diarization with features related to the "understanding" of the video scenes: text overlay transcription and analysis, automatic situation identification, the number of people visible, the TV-set layout and even the camera used, when available.
Abstract: This paper describes a multi-modal person recognition system for video broadcast developed for participating in the REPERE challenge, organized jointly by the DGA and the ANR (the French National Research Agency). The main track of this challenge targets the identification of all persons occurring in a video. The main scientific issue addressed by this challenge is the combination of audio and video information extraction processes to improve extraction performance in both modalities. In this paper, we present a strategy for speaker identification based on enriching the speaker diarization with features related to the "understanding" of the video scenes: text overlay transcription and analysis, automatic situation identification (TV set, report), the number of people visible, the TV-set layout and even the camera used, when available. Experiments on the REPERE corpus show the interest of the proposed approach.

Proceedings ArticleDOI
09 Nov 2015
TL;DR: Up to 60% of requests for popular content could be delivered via D2D, provided a reliable mechanism to predict a user's content consumption is available.
Abstract: Device-to-Device (D2D) content delivery is an emerging approach, where end-user devices exchange content with other end-user devices in communication range, instead of retrieving content from an operator's infrastructure. This way, the operator network can be offloaded from congestion caused by the transmission of popular content, and the content consumer's quality of experience may increase. However, D2D content delivery is only effective in situations where a device in proximity has the requested content available, which is more likely to happen with popular content in crowded areas. The availability of content in communication range of a consumer constitutes an upper bound of the success of a D2D content delivery mechanism, which is referred to as the potential of D2D delivery. This paper provides a quantitative answer to the question of this potential, and identifies the most important properties a D2D mechanism must provide. An evaluation model is proposed and developed, which can be applied to real-world mobile user traces to determine the quota of content requests that could be served via D2D content delivery. The model is applied on a dataset of a major European Internet service provider and the evaluation results are discussed. The paper concludes that there is potential to deliver up to 60% of requests for popular content via D2D, if a reliable mechanism to predict a user's content consumption is available.
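As a sketch of the kind of trace-driven evaluation described above, the hypothetical Python function below estimates the D2D potential as the share of requests for which a nearby device already holds the content; the data layout and communication range are assumptions, not the paper's model.

# Illustrative sketch: upper bound ("potential") of D2D delivery from mobility
# traces, content requests and per-device cache contents.
import math

def in_range(pos_a, pos_b, radius_m=100.0):
    return math.dist(pos_a, pos_b) <= radius_m

def d2d_potential(requests, positions, caches, radius_m=100.0):
    """
    requests : list of (user, time, content_id)
    positions: dict (user, time) -> (x, y) coordinates from the mobility trace
    caches   : dict user -> set of content_ids already held by that device
    """
    served = 0
    for user, t, content in requests:
        here = positions.get((user, t))
        if here is None:
            continue
        neighbours = [u for (u, tu), p in positions.items()
                      if tu == t and u != user and in_range(here, p, radius_m)]
        # A request counts as servable if any neighbour already has the content.
        if any(content in caches.get(u, set()) for u in neighbours):
            served += 1
    return served / len(requests) if requests else 0.0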



Journal ArticleDOI
Julienne Liang
TL;DR: In this article, the authors estimate the interaction between fixed and mobile usage both for voice and data services using consumer level data from April 2013 to March 2014 in a European country and find that a significant proportion of fixed voice consumption could be substituted by mobile voice, and vice versa.
Abstract: We estimate the interaction between fixed and mobile usage for both voice and data services using consumer-level data from April 2013 to March 2014 in a European country. We find that a significant proportion of fixed voice consumption could be substituted by mobile voice, and vice versa. However, a substantial proportion of fixed data consumption, and also of mobile data consumption, is generated by fixed-mobile interaction in both directions. FTTH deployment has no significant impact on fixed-mobile voice substitution. The fixed data consumption generated by mobile-to-fixed interaction appears to increase with FTTH deployment.

Proceedings Article
11 Oct 2015
TL;DR: An overview of optical wireless communication technologies, the ecosystem and standards, user requirements, business cases and some use cases is presented.
Abstract: Optical Wireless Communications (OWC) refer to communication based on the unguided propagation of optic waves. This technique was the only wireless communication solution for millennia, and the past 30 years have seen significant improvement in two main areas: outdoor applications, i.e., FSO (Free Space Optic), communications between satellites or ground/air transmission; and indoor applications like the remote controller and the Light Fidelity (LiFi) system. Orange Labs has investigated, through open innovation, the potential for PmP (Point to multiPoint) indoor application of this technology. The Light Fidelity solution may be a wireless alternative to radio systems and could gain attractiveness in case of saturation of the radio spectrum. The paper presents an overview of optical wireless communication technologies, the ecosystem and standards. Before the conclusion, some use cases are presented. Keywords: Light Fidelity (LiFi); Visible Light Communication (VLC); Infrared Communication (IRC); Optical Wireless Communication (OWC); user requirements; business cases.



Proceedings ArticleDOI
01 Aug 2015
TL;DR: A methodology is introduced for studying the performance of enhanced access selection mechanisms that take into account the willingness to reduce electromagnetic exposure, complementing other types of analysis that lack the flexibility required to understand these novel resource management solutions.
Abstract: In spite of the growing presence of wireless communications, concerns have recently been raised about the potential risks of exposure to the electromagnetic fields they induce. In this paper we introduce a methodology for studying the performance of enhanced access selection mechanisms that include the willingness to reduce such exposure. The proposed framework has been conceived with the main goal of complementing other types of analysis that, despite providing a much more accurate characterization of the physical parameters, do not offer the degree of flexibility required to understand the performance of these novel resource management solutions.

Journal ArticleDOI
TL;DR: This paper characterizes the existence of Nash equilibria in the sharing of a radio access network infrastructure by two mobile operators, and measures their quality with respect to the maximization of the overall profit.
Abstract: In this paper, we study the sharing of a radio access network infrastructure by two mobile operators. Knowing the possible locations of the base stations, each operator chooses to invest or not on a base station, and its aim is to maximize its profit. We characterize the existence of Nash equilibria in such a game and we measure their quality with respect to the maximization of the overall profit (with the price of anarchy/stability). We then show how to obtain a solution in which each operator earns at least as much as it would have earned in any Nash equilibrium. Finally we conduct experiments on randomly generated instances and on real data.
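To illustrate the game-theoretic setup above, the following Python sketch enumerates pure-strategy Nash equilibria of a two-operator investment game over candidate base-station sites and computes the price of anarchy and stability; the payoff function is a toy placeholder, not the paper's profit model.

# Illustrative sketch: brute-force Nash equilibria of a two-operator deployment game.
from itertools import combinations, product

SITES = [0, 1, 2]                        # candidate base-station locations

def profit(own, other):
    """Toy payoff: each deployed site yields revenue, shared when the rival is
    co-located, minus a fixed deployment cost. Purely for illustration."""
    revenue = sum(10.0 / (2 if s in other else 1) for s in own)
    return revenue - 6.0 * len(own)

def welfare(a, b):
    return profit(a, b) + profit(b, a)

strategies = [frozenset(c) for r in range(len(SITES) + 1)
              for c in combinations(SITES, r)]

def is_nash(a, b):
    # Neither operator can improve its profit by unilaterally changing its deployment.
    return (all(profit(a, b) >= profit(x, b) for x in strategies) and
            all(profit(b, a) >= profit(y, a) for y in strategies))

equilibria = [(a, b) for a, b in product(strategies, repeat=2) if is_nash(a, b)]
optimum = max(welfare(a, b) for a, b in product(strategies, repeat=2))
if equilibria:
    price_of_anarchy = optimum / min(welfare(a, b) for a, b in equilibria)
    price_of_stability = optimum / max(welfare(a, b) for a, b in equilibria)
    print(len(equilibria), price_of_anarchy, price_of_stability)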

Proceedings ArticleDOI
06 Sep 2015
TL;DR: It is shown that linguistic and speaker-distribution based features can lead to an efficient characterization of these genres when the boundaries of the chapters are known, and that a speaker-distribution based segmentation is suitable for segmenting contents into these different genres.
Abstract: Modern TV or radio news talk-shows include a variety of sequences which comply with specific journalistic patterns, including debates, interviews and reports. The paper deals with automatic chapter generation for TV news talk-shows, according to these different journalistic genres. It is shown that linguistic and speaker-distribution based features can lead to an efficient characterization of these genres when the boundaries of the chapters are known, and that a speaker-distribution based segmentation is suitable for segmenting contents into these different genres. Evaluations on a collection of 42 episodes of a news talk-show provided by the French evaluation campaign REPERE show promising performance.

Journal ArticleDOI
TL;DR: An algorithm is proposed to summarize sports videos based on viewpoints in TV broadcasts for sports genre classification to remove redundancy and make full use of the high overlap of selected key-frames subset.
Abstract: In this paper, an algorithm is proposed to summarize sports videos based on viewpoints in TV broadcasts for sports genre classification. The redundancy of multiple views is one of the principal limitations in sports genre classification. In order to remove this redundancy, the algorithm chooses the most representative subset of shots from each game. After videos are broken into shots, a single keyframe is used to represent each shot and a uniform LBP feature is extracted to represent each keyframe. Agglomerative hierarchical clustering is then performed to cluster these keyframes. In this step, an energy-based function for clusters is introduced to match the statistical distribution of the various views, and a refined distance metric is proposed as the similarity measure between two shots. We modify the energy function to reflect the fact that temporally neighboring shots with similar duration are more likely to belong to the same view. To make full use of the high overlap of the selected keyframe subset, sparse coding and geometry visual phrases are introduced in the sports genre categorization part. Our method is evaluated on videos recorded from Orangesports, ESPN and Eurosport TV broadcasts. The average accuracy over 10 sports reaches 87.5%. The proposed algorithm is already applied in the Orange TV video content delinearization service platform.
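A minimal Python sketch of the shot-selection idea above (uniform-LBP keyframe descriptors followed by agglomerative clustering) is given below; the paper's energy-based cluster function, refined distance metric and sparse-coding stage are omitted, and parameters are illustrative.

# Illustrative sketch: pick one representative shot per cluster of
# uniform-LBP keyframe descriptors.
import numpy as np
from skimage.feature import local_binary_pattern
from scipy.cluster.hierarchy import linkage, fcluster

def lbp_histogram(gray_frame, P=8, R=1.0):
    """Uniform LBP histogram of a grayscale keyframe (P+2 bins)."""
    lbp = local_binary_pattern(gray_frame, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def representative_shots(keyframes, n_clusters=10):
    """keyframes: list of 2-D grayscale arrays, one per shot."""
    features = np.vstack([lbp_histogram(f) for f in keyframes])
    labels = fcluster(linkage(features, method='average'),
                      n_clusters, criterion='maxclust')
    representatives = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        centroid = features[members].mean(axis=0)
        # keep the shot whose descriptor is closest to the cluster centroid
        best = members[np.argmin(np.linalg.norm(features[members] - centroid, axis=1))]
        representatives.append(int(best))
    return sorted(representatives)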


Book ChapterDOI
01 Jan 2015
TL;DR: In this paper, the characteristics of first adopters of radically new products in the field of information technology combining telecommunications and television audiovisual functionality are examined, identifying whether early owners of these new products share common characteristics with owners of older telecommunications or TV-based equipment.
Abstract: This paper examines the characteristics of first adopters of radically new products in the field of information technology combining telecommunications and television audiovisual functionality. More precisely, we are interested in identifying whether early owners of these new products share common characteristics with owners of older telecommunications or TV-based equipment. We have built a typology based on a phone survey of 617 households representative of the French population. The results lead to three groups: under-equipped, screen-equipped and keyboard-equipped households. This typology is crossed with another one relating to the cultural behavior of the households. Finally, some insights about the way to launch new products in that field are given.