
Showing papers in "Annales Des Télécommunications in 2013"


Journal ArticleDOI
TL;DR: A comprehensive review on visual fatigue and discomfort induced by the visualization of 3D stereoscopic contents, in the light of physiological and psychological processes enabling depth perception is proposed.
Abstract: The quality of experience (QoE) of 3D contents is usually considered to be the combination of the perceived visual quality, the perceived depth quality, and lastly the visual fatigue and comfort. When either fatigue or discomfort are induced, studies tend to show that observers prefer to experience a 2D version of the contents. For this reason, providing a comfortable experience is a prerequisite for observers to actually consider the depth effect as a visualization improvement. In this paper, we propose a comprehensive review on visual fatigue and discomfort induced by the visualization of 3D stereoscopic contents, in the light of physiological and psychological processes enabling depth perception. First, we review the multitude of manifestations of visual fatigue and discomfort (near triad disorders, symptoms for discomfort), as well as means for detection and evaluation. We then discuss how, in 3D displays, ocular and cognitive conflicts with real world experience may cause fatigue and discomfort; these include the accommodation–vergence conflict, the mismatch between presented stimuli and the observers' depth of focus, and the cognitive integration of conflicting depth cues. We also discuss some limits for stereopsis that constrain our ability to perceive depth, and in particular the perception of planar and in-depth motion, the limited fusion range, and various stereopsis disorders. Finally, this paper discusses how the different aspects of fatigue and discomfort apply to 3D technologies and contents. We notably highlight the need for respecting a comfort zone and avoiding camera and rendering artifacts. We also discuss the influence of visual attention, exposure duration, and training. Conclusions provide guidance for best practices and future research.

104 citations


Journal ArticleDOI
TL;DR: The Aloha medium access (MAC) scheme in 1D, linear networks, which might be an appropriate assumption for vehicular ad hoc networks, is studied and it is shown that in contrast to planar networks the density of packet progress per unit of length does not increase with the network node density.
Abstract: The aim of this paper is to study the Aloha medium access (MAC) scheme in 1D, linear networks, which might be an appropriate assumption for vehicular ad hoc networks. We study performance metrics based on the signal over interference plus noise ratio assuming power-law mean path-loss and independent point-to-point fading. We derive closed formulas for the capture probability. We consider the presence or the absence of noise and we study performance with outage or with adaptive coding. We carry out the joint optimization of the density of packet progress (in bit-meters) over both the transmission probability and the transmission range. We also compare the performance of slotted and non-slotted Aloha. We show that, in contrast to planar networks, the density of packet progress per unit of length does not increase with the network node density.
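The capture probability studied above can also be estimated numerically. The sketch below is a minimal Monte Carlo illustration, assuming a 1D Poisson node layout, slotted Aloha with transmit probability p, Rayleigh (exponential power) fading, and path loss r**-beta; all parameter values and the function name are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def capture_probability(lam=0.1, p=0.05, d=5.0, beta=4.0, T=1.0,
                        L=2000.0, noise=0.0, trials=5000):
    """Monte Carlo estimate of the capture probability P(SINR > T) for a
    tagged link of length d in a 1D Poisson network of node density lam,
    under slotted Aloha with transmit probability p, Rayleigh (exponential
    power) fading and power-law path loss r**-beta.  All parameter values
    are illustrative, not taken from the paper."""
    success = 0
    for _ in range(trials):
        n = rng.poisson(lam * L)                  # potential interferers
        x = rng.uniform(-L / 2, L / 2, n)         # positions on the line
        r = np.abs(x[rng.random(n) < p])          # Aloha: transmit w.p. p
        r = r[r > 1e-9]                           # drop co-located points
        fading = rng.exponential(1.0, r.size)
        interference = np.sum(fading * r ** -beta)
        signal = rng.exponential(1.0) * d ** -beta
        if signal > T * (interference + noise):
            success += 1
    return success / trials
```

With p = 0 there are no interferers, so in the noiseless case every transmission is captured, which provides a quick sanity check of the simulation.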

59 citations


Journal ArticleDOI
TL;DR: The results of the integrated model analysis indicate that system satisfaction is a core determinant of intention to use LTE services, and the model found that other factors, including perceived usefulness and system and service quality, significantly affect intention to use these services.
Abstract: With an integrated framework, this paper aims to analyze user perception and acceptance toward long-term evolution (LTE) services, focusing on factors that may influence the intention to use. We conducted a web-based survey of 1,192 users to test our research model. We employed structural equation modeling (SEM) as the analysis method. The results of the integrated model analysis indicate that system satisfaction is a core determinant of intention to use LTE services. The model also found that other factors, including perceived usefulness and system and service quality, significantly affect intention to use these services. In addition, both perceived adaptivity and processing speed significantly influence perceived usefulness and service quality, respectively. These factors also play key roles in determining users’ attitudes. This paper is of value to researchers and engineers designing and improving LTE services for use via mobile phones.

48 citations


Journal ArticleDOI
TL;DR: A multirate loss model that supports elastic and adaptive traffic, under the assumption that calls arrive at a single link according to a batched Poisson process, which is a more “bursty” process than the Poisson process, and found to be quite satisfactory.
Abstract: The ever increasing demand of elastic and adaptive services, where in-service calls can tolerate bandwidth compression/expansion, together with the bursty nature of traffic, necessitates a proper teletraffic loss model which can contribute to the call-level performance evaluation of modern communication networks. In this paper, we propose a multirate loss model that supports elastic and adaptive traffic, under the assumption that calls arrive at a single link according to a batched Poisson process (a more “bursty” process than the Poisson process, where calls arrive in batches). We assume a general batch size distribution and the partial batch blocking discipline, whereby one or more calls of a new batch are blocked and lost, depending on the available bandwidth of the link. The proposed model does not have a product form solution, and therefore we propose approximate but recursive formulas for the efficient calculation of time and call congestion probabilities, link utilization, average number of calls in the system, and average bandwidth allocated to calls. The consistency and the accuracy of the model are verified through simulation and found to be quite satisfactory.
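The partial batch blocking discipline can be illustrated with a toy simulation. The sketch below deliberately omits the elastic/adaptive bandwidth compression of the actual model and assumes a uniform batch-size distribution and fixed per-call bandwidth; all names and parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_partial_blocking(C=20, b=2, lam=1.0, mu=1.0,
                              max_batch=4, n_batches=20000):
    """Toy simulation of a single link with batch Poisson arrivals and the
    partial batch blocking discipline: from each arriving batch, calls are
    admitted one by one until the link's free bandwidth is exhausted; the
    rest of the batch is blocked and lost.  Elastic bandwidth compression
    is omitted for simplicity.  Returns the call congestion probability."""
    t = 0.0
    in_service = []                # departure times of admitted calls
    blocked = offered = 0
    for _ in range(n_batches):
        t += rng.exponential(1.0 / lam)           # batch inter-arrival time
        in_service = [dep for dep in in_service if dep > t]
        size = rng.integers(1, max_batch + 1)     # uniform batch size
        offered += size
        for _ in range(size):
            if (len(in_service) + 1) * b <= C:    # enough free bandwidth?
                in_service.append(t + rng.exponential(1.0 / mu))
            else:
                blocked += 1                      # partial blocking: lose call
    return blocked / offered
```

Making the link capacity effectively infinite drives the blocking to zero, which is a simple consistency check on the admission rule.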

47 citations


Journal ArticleDOI
TL;DR: The presented paper demonstrates how metamaterials with their unique properties and structures derived from metamaterials can offer solutions to overcome technical limitations of passive and chipless wireless sensor and RFID concepts.
Abstract: The presented paper demonstrates how metamaterials with their unique properties and structures derived from metamaterials can offer solutions to overcome technical limitations of passive and chipless wireless sensor and RFID concepts. Basically, the metamaterial approach allows for miniaturization, higher sensitivity, and an extreme geometric flexibility. Miniaturization is certainly important for both sensing and identification, while higher sensitivity is primarily applicable to sensors. Geometric flexibility is first of all important for sensing, since it allows for novel sensor concepts, but RFID concepts can also benefit from this advantage, at least with regard to build-up technology. The presented examples of metamaterial-inspired passive chipless RFID and wireless sensing can be assigned to the following three categories: metamaterial resonator approaches, composite right/left-handed lines, and frequency-selective surfaces. In this paper, these different concepts are evaluated and discussed with regard to the metamaterial properties. Furthermore, criteria and figures of merit are given, which allow for a fair comparison of passive, chipless concepts and beyond. Finally, these criteria are applied to the presented sensor and identification concepts.

30 citations


Journal ArticleDOI
TL;DR: A multi-criteria decision algorithm for efficient content delivery applicable for content networks in general and evaluated by simulation using Internet scale network model to confirm the effectiveness gain of content network architectures that introduce network awareness.
Abstract: Today's Internet is prominently used for content distribution. Various platforms such as content delivery networks (CDNs) have become an integral part of the digital content ecosystem. Most recently, the information-centric networking (ICN) paradigm proposes the adoption of native content naming for secure and efficient content delivery. This further enhances the flexibility of content access, where a content request can be served by any source within the Internet. In this paper, we propose and evaluate a multi-criteria decision algorithm for efficient content delivery applicable to content networks in general (among others, CDN and ICN). Our algorithm computes the best available source and path for serving content requests, taking into account information about content transfer requirements, the location of the consumer, the location of available content servers, content server load, and content delivery paths between content servers and consumer. The proposed algorithm exploits two closely related processes. The first level discovers multiple content delivery paths and gathers their respective transfer characteristics. This discovery process is based on long-term network measurements and performed offline. The second process is invoked for each content request to find the best combined content server and delivery path. The cooperation between both levels allows our algorithm to increase the number of satisfied content requests thanks to efficient utilisation of network and server resources. The proposed decision algorithm was evaluated by simulation using an Internet-scale network model. The results confirm the effectiveness gain of content network architectures that introduce network awareness. Moreover, the simulation process allows for a comparison between different routing algorithms and, especially, between single-path and multipath routing algorithms.
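The per-request decision level described above can be sketched as a simple scoring function over candidate server/path pairs. The field names, normalisation, and weighting scheme below are assumptions for illustration, not the authors' actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    server: str
    path: str
    delay_ms: float      # long-term measured path delay (offline discovery)
    bw_mbps: float       # available path bandwidth
    load: float          # server load, normalised to [0, 1]

def best_candidate(candidates, need_bw_mbps, w_delay=0.5, w_load=0.5):
    """Hedged sketch of the per-request decision: among the delivery paths
    discovered offline, keep those that satisfy the transfer requirement
    and pick the server/path pair minimising a weighted combination of
    normalised path delay and server load.  Scoring and weights are
    illustrative, not taken from the paper."""
    feasible = [c for c in candidates if c.bw_mbps >= need_bw_mbps]
    if not feasible:
        return None                       # no source can satisfy the request
    max_delay = max(c.delay_ms for c in feasible)
    return min(feasible,
               key=lambda c: w_delay * c.delay_ms / max_delay
                             + w_load * c.load)
```

For example, a lightly loaded but more distant server can win over a closer, heavily loaded one, which is exactly the kind of trade-off the combined metric is meant to capture.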

30 citations


Journal ArticleDOI
TL;DR: This work presents an Internet-scale mediation approach for content access and delivery that supports content and network mediation, and presents in detail the coupled mode of operation which is used for popular content and follows a domain-level hop-by-hop content resolution approach to optimally identify the best content copy.
Abstract: Given that the vast majority of Internet interactions relate to content access and delivery, recent research has pointed to a potential paradigm shift from the current host-centric Internet model to an information-centric one. In information-centric networks, named content is accessed directly, with the best content copy delivered to the requesting user given content caching within the network. Here, we present an Internet-scale mediation approach for content access and delivery that supports content and network mediation. Content characteristics, server load, and network distance are taken into account in order to locate the best content copy and optimize network utilization while maximizing the user quality of experience. The content mediation infrastructure is provided by Internet service providers in a cooperative fashion, with both decoupled/two-phase and coupled/one-phase modes of operation. We present in detail the coupled mode of operation which is used for popular content and follows a domain-level hop-by-hop content resolution approach to optimally identify the best content copy. We also discuss key aspects of our content mediation approach, including incremental deployment issues and scalability. While presenting our approach, we also take the opportunity to explain key information-centric networking concepts.

29 citations


Journal ArticleDOI
TL;DR: Some design rules to create a chipless RFID tag that encodes the information in the frequency domain and the frequency optimization step for each resonant peak will be discussed.
Abstract: In this paper, we present some design rules to create a chipless RFID tag that encodes the information in the frequency domain. Some criteria are introduced to make the best choice concerning the elementary scatterers that act like signal processing antennas. The performance of several scatterers is compared before a study of the radiating properties of a versatile C-like scatterer. An electrical model as well as a transfer function model is presented to best understand the frequency response of both a single-layer and a grounded scatterer. An example of the design and optimization of a chipless RFID tag based on the use of multiple scatterers is provided, and the frequency optimization step for each resonant peak is discussed.

28 citations


Journal ArticleDOI
TL;DR: The wireless measurement of various physical quantities is presented from the analysis of the radar cross section variability of passive electromagnetic sensors using a millimeter frequency-modulated continuous-wave radar for both remote sensing and wireless identification of sensors.
Abstract: In this paper, we present the wireless measurement of various physical quantities from the analysis of the radar cross section variability of passive electromagnetic sensors. The technique uses a millimeter frequency-modulated continuous-wave radar for both remote sensing and wireless identification of sensors. Long reading ranges (up to some decameters) are reached at the expense of poor measurement resolution (typically 10 %). A review of recent experimental results is reported for illustration purposes.

28 citations


Journal ArticleDOI
TL;DR: It is seen that, coupled with revolutionary design of low-cost tag antennas, fabrication/reconfiguration by printing techniques, moving to higher frequencies to shrink tag sizes and reduce manufacturing cost, as well as innovation in ID generating circuits to increase coding capacities, will be important research topics towards item-level tracking applications of chipless RFID tags.
Abstract: This paper reviews recent advances in fully printed chipless radio frequency identification (RFID) technology with special concern on the discussion of coding theories, ID generating circuits, and tag antennas. Two types of chipless tags, one based on time-domain reflections and the other based on frequency domain signatures, are introduced. To enable a fully printed encoding circuit, a linear tapering technique is adopted in the first type of tags to cope with parasitic resistances of printed conductors. Both simulation and measurement efforts are made to verify the feasibility of the eight-bit fully printed paper-based tag. In the second type of tags, a group of LC tanks is exploited for encoding data in the frequency domain with their resonances. Field measurements of the proof-of-concept tag, produced by a toner-transfer process and flexible printed circuit boards, are provided to validate the practicability of the reconfigurable ten-bit chipless RFID tag. Furthermore, a novel RFID tag antenna design adopting the linear tapering technique is introduced. It shows a 40 % saving of conductive ink material while maintaining the same performance as conventional half-wave dipole antennas and meander-line antennas. Finally, the paper discusses the future trends of chipless RFID tags in terms of fabrication cost, coding capacity, size, and reconfigurability. We expect that revolutionary design of low-cost tag antennas, fabrication and reconfiguration by printing techniques, moving to higher frequencies to shrink tag size and reduce manufacturing cost, and innovation in ID-generating circuits to increase coding capacity will be important research topics on the way to item-level tracking applications of chipless RFID tags.

25 citations


Journal ArticleDOI
TL;DR: A delay- and energy-aware cooperative medium access control (DEC-MAC) protocol, which trades off between the packet delivery delay and a node’s energy consumption while selecting a cooperative relay node, which improves the end-to-end packet delivery latency and the network lifetime significantly compared to the state-of-the-art protocols.
Abstract: This paper deals with two critical issues in wireless sensor networks: reducing the end-to-end packet delivery delay and increasing the network lifetime through the use of cooperative communications. Here, we propose a delay- and energy-aware cooperative medium access control (DEC-MAC) protocol, which trades off between the packet delivery delay and a node's energy consumption while selecting a cooperative relay node. DEC-MAC attempts to balance the energy consumption of the sensor nodes by taking into account a node's residual energy as part of the relay selection metric, thus increasing the network's lifetime. The relay selection algorithm exploits the process of elimination and the complementary cumulative distribution function for determining the optimal relay within the shortest time period. Our numerical analysis demonstrates that the DEC-MAC protocol is able to determine the optimal relay in no more than three mini slots. Our simulation results show that the DEC-MAC protocol improves the end-to-end packet delivery latency and the network lifetime significantly compared to state-of-the-art protocols.
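A relay-selection metric combining link quality with residual energy, in the spirit of DEC-MAC, might look like the following sketch. The actual metric, weights, and elimination procedure in the paper differ; every name and parameter here is hypothetical.

```python
def relay_metric(link_quality, residual_energy, initial_energy,
                 w_link=0.6, w_energy=0.4):
    """Illustrative relay-selection metric: combine normalised link quality
    towards the destination (in [0, 1]) with the candidate's residual-energy
    ratio, so that relaying load spreads across nodes instead of draining
    the best-placed relay.  Weights and form are assumptions, not the
    metric from the paper."""
    return w_link * link_quality + w_energy * (residual_energy / initial_energy)

def pick_relay(candidates):
    """candidates: iterable of (node_id, link_quality, residual, initial).
    Returns the id of the candidate maximising the combined metric."""
    return max(candidates, key=lambda c: relay_metric(c[1], c[2], c[3]))[0]
```

With equal link quality the fresher node wins, illustrating how the energy term balances consumption across the network.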

Journal ArticleDOI
TL;DR: A distributed antenna system with radio frequency (RF) transport over an optical fibre (or optical wireless in benign environments) distribution network is identified as best suited to wireless access in cluttered urban environments expected in a Digital City from an energy consumption perspective.
Abstract: Pervasive broadband access will transform cities to the net social, environmental and economic benefit of the e-City dweller, as did the introduction of utility and transport network infrastructures. Yet without action, the quantity of greenhouse gas emissions attributable to the increasing energy consumption of access networks will become a serious threat to the environment. This paper introduces the vision of a ‘sustainable Digital City’ and then considers strategies to overcome economic and technical hurdles faced by engineers responsible for developing the information and communications technology (ICT) network infrastructure of a Digital City. In particular, ICT energy consumption, already an issue from an operating cost perspective, is responsible for 3 % of global energy consumption and is growing unsustainably. A grand challenge is to conceive of networks, systems and devices that together can cap wireless network energy consumption whilst accommodating growth in the number of subscribers and the bandwidth of services. This paper provides some first research directions to tackle this grand challenge. A distributed antenna system with radio frequency (RF) transport over an optical fibre (or optical wireless in benign environments) distribution network is identified as best suited, from an energy consumption perspective, to wireless access in the cluttered urban environments expected in a Digital City. This is a similar architecture to Radio-over-Fibre which, for decades, has been synonymous with RF transport over analogue intensity-modulated direct detection optical links. However, it is suggested herein that digital coherent optical transport of RF holds greater promise than the orthodox approach. The composition of the wireless and optical channels is then linear, which eases the digital signal processing tasks and permits robust wireless protocols to be used end-to-end natively, offering gains in terms of capacity and energy efficiency.
The arguments are supported by simulation studies of distributed antenna systems and digital coherent Radio-over-Fibre links.

Journal ArticleDOI
TL;DR: This paper proposes to adapt CBF to such a challenging environment, first by employing two different mechanisms as a function of the topology and second by considering the dissemination capabilities of the relays, allowing for example road-side units or tall vehicles to preferably act as relays when necessary.
Abstract: Contention-based forwarding (CBF) is a broadcasting technique used to disseminate emergency messages for traffic safety applications in intelligent transportation systems. Its design hypotheses have however been based on three major assumptions: uniform vehicular topology, nonfading channels and homogeneous communication capabilities. Realistic vehicular urban topologies do not comply with any of them, making CBF select relays which may not exist, may not be reached or may not be optimal due to heterogeneous transmit capabilities. In this paper, we propose to adapt CBF to such a challenging environment, first by employing two different mechanisms as a function of the topology and second by considering the dissemination capabilities of the relays, allowing, for example, road-side units or tall vehicles to preferentially act as relays when necessary. Our protocol, called Bi-Zone Broadcast, is evaluated in a realistic urban environment and is shown to provide around a 46 % improvement in dissemination delay and a 40 % reduction in overhead compared to plain CBF or flooding. We finally shed light on other aspects of CBF that remain unsolved and should be addressed in future work to further improve the reliability of dissemination protocols for traffic safety applications.

Journal ArticleDOI
TL;DR: It is found that the thermo-optic phase tuning departs from the expected quadratic dependence and is well characterised by a quartic dependence on heater current or voltage.
Abstract: The ability to steer optical beams, crucial to the operation of high-speed optical wireless links, may be achieved using optical phased array antennas, which have significant potential in this application. The beam formed by the phased array antennas is steered by tuning the relative phase difference between adjacent antenna elements, which may be achieved nonmechanically. In this paper, the beam-steering characteristics and behaviour of two-dimensional optical phased arrays with structures composed of 2 × 2, 4 × 4, and 16 × 16 antenna elements are verified. A wavelength beam steering of −0.16°/nm is measured along the θ direction, with a required steering range (between main lobes) of 1.97° within a −3 dB envelope of 5° extent in the θ direction and 7° extent in the Φ direction. To achieve two-dimensional beam steering, thermo-optic beam steering can be used in the Φ direction. It is found that the thermo-optic phase tuning departs from the expected quadratic dependence and is well characterised by a quartic dependence on heater current or voltage.

Journal ArticleDOI
TL;DR: The two main advantages compared to classical radiofrequency identification tags are the absence of metal and the encoding of the information in the volume of the structure, thus limiting the risk of damage during handling and preventing reverse engineering, for example.
Abstract: In the present paper, we propose a new structure of chipless and low-cost tag for data encoding in the terahertz frequency range. The device is based on a multilayer structure in which the thicknesses of the different layers are of the order of the wavelength, i.e., in the submillimeter range. In this device, the information is encoded in the volume of the tag thanks to the adjustable refractive index and low-cost materials, leading to a high level of security. The two main advantages compared to classical radiofrequency identification tags are the absence of metal and the encoding of the information in the volume of the structure, thus limiting the risk of damage during handling and preventing reverse engineering, for example.

Journal ArticleDOI
TL;DR: The proposed scheme’s construction avoids bilinear pairing operations but still provides signatures in the ID-based setting and substantially reduces running time, which makes it more applicable than previous schemes in terms of computational efficiency for practical applications.
Abstract: Most of the previously proposed identity-based multiproxy multisignature (IBMPMS) schemes used pairings in their construction. But pairing is regarded as an expensive cryptographic primitive in terms of complexity: the relative computation cost of a pairing is more than ten times that of a scalar multiplication over an elliptic curve group. So, to reduce running time, we first define a model of a secure MPMS scheme, then propose an IBMPMS scheme without using pairings. We also prove the security of our scheme against chosen-message attack in the random oracle model. Our scheme’s construction avoids bilinear pairing operations but still provides signatures in the ID-based setting and substantially reduces running time. Therefore, the proposed scheme is more applicable than previous schemes in terms of computational efficiency for practical applications.

Journal ArticleDOI
TL;DR: This paper investigates the elements that impact the best bit-rate ratio between depth and color: total bit-rate budget, input data features, encoding strategy, and assessed view.
Abstract: Multi-view video plus depth (MVD) data offer a reliable representation of three-dimensional (3D) scenes for 3D video applications. This represents a huge amount of data, and its compression is currently an important research challenge. Since MVD consists of texture and depth video sequences, the question of the relationship between these two types of data regarding bit-rate allocation often arises. This paper questions the required ratio between texture and depth when encoding MVD data. In particular, the paper investigates the elements that impact the best bit-rate ratio between depth and color: total bit-rate budget, input data features, encoding strategy, and assessed view.

Journal ArticleDOI
TL;DR: This paper adopts a rate-distortion framework based on a simplified model of depth and texture images, which preserves their main features, and avoids rendering at encoding time for distortion estimation so that the encoding complexity stays low.
Abstract: Depth image-based rendering techniques for multiview applications have recently been introduced for efficient view generation at arbitrary camera positions. The rate control in an encoder thus has to consider both texture and depth data. However, due to the different structures of depth and texture data and their different roles in the rendered views, the allocation of the available bit budget between them requires a careful analysis. Information loss due to texture coding affects the value of pixels in synthesized views, while errors in depth information lead to a shift in objects or to unexpected patterns at their boundaries. In this paper, we address the problem of efficient bit allocation between texture and depth data of multiview sequences. We adopt a rate-distortion framework based on a simplified model of depth and texture images, which preserves the main features of depth and texture images. Unlike most recent solutions, our method avoids rendering at encoding time for distortion estimation so that the encoding complexity stays low.

Journal ArticleDOI
TL;DR: A fluid model approach is used to provide simpler outage probability expressions depending only on the distance between the considered user and its serving base station, allowing for fast and simple performance evaluation for the two multicellular wireless systems.
Abstract: In this paper, we study the performance, in terms of outage probability, of two downlink multicellular systems: a multiple-input single-output (MISO) system using the Alamouti code, and a multiple-input multiple-output (MIMO) system using the Alamouti code at the transmitter side and maximum ratio combining (MRC) at the receiver. The channel model includes path-loss, shadowing, and fast fading, and the system is considered interference-limited. Two cases are distinguished: constant shadowing and log-normally distributed shadowing. In the first case, closed form expressions of the outage probability are proposed. For a log-normally distributed shadowing, we derive easily computable expressions of the outage probability. The proposed expressions allow for fast and simple performance evaluation for the two multicellular wireless systems: MISO Alamouti and MIMO Alamouti with MRC receiver. We use a fluid model approach to provide simpler outage probability expressions depending only on the distance between the considered user and its serving base station.
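As a rough numerical counterpart to such closed-form expressions, the outage probability of an interference-limited downlink can be estimated by Monte Carlo. The sketch below uses single-antenna links with Rayleigh fading and log-normal shadowing, i.e., a simplified stand-in for the paper's MISO/MIMO Alamouti expressions; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def outage_probability(r_serv, interferer_dists, eta=3.5,
                       shadow_db=6.0, gamma_th=1.0, trials=5000):
    """Monte Carlo estimate of the outage probability P(SIR < gamma_th)
    for an interference-limited downlink with path-loss exponent eta,
    log-normal shadowing (std shadow_db in dB), and Rayleigh fast fading
    on every link.  Single-antenna links only: a simplified stand-in for
    the MISO/MIMO Alamouti expressions in the paper."""
    d = np.asarray(interferer_dists, dtype=float)
    out = 0
    for _ in range(trials):
        # independent shadowing on serving and interfering links (linear scale)
        sh = 10 ** (rng.normal(0.0, shadow_db, d.size + 1) / 10)
        s = rng.exponential() * sh[0] * r_serv ** -eta            # signal
        i = np.sum(rng.exponential(size=d.size) * sh[1:] * d ** -eta)
        out += s < gamma_th * i
    return out / trials
```

As expected, a user close to its serving base station sees a far lower outage probability than a cell-edge user at comparable distance to the interfering base stations.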

Journal ArticleDOI
TL;DR: A model for the generation and control of scanpaths that accounts for the "noisy" variation of the random visual exploration exhibited by different observers when viewing the same scene, or even by the same subject along different trials is presented.
Abstract: Foveation-based processing and communication systems can exploit a more efficient representation of images and videos by removing or reducing visual information redundancy, provided that the sequence of foveation points, the visual scanpath, can be determined. However, one point that is neglected by the great majority of foveation models is the “noisy” variation of the random visual exploration exhibited by different observers when viewing the same scene, or even by the same subject across different trials. Here, a model for the generation and control of scanpaths that accounts for this issue is presented. In the model, the sequence of fixations and gaze shifts is controlled by a saliency-based, information foraging mechanism implemented through a dynamical system switching between two states, “feed” and “fly.” Results of the simulations are compared with experimental data derived from publicly available datasets.
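A two-state ("feed"/"fly") scanpath generator can be sketched as follows. The acceptance rule, step distributions, and switching probability below are illustrative simplifications of a saliency-based foraging mechanism, not the authors' dynamical system.

```python
import numpy as np

rng = np.random.default_rng(3)

def generate_scanpath(saliency, n_fix=20, p_fly=0.2,
                      feed_step=3.0, fly_step=25.0):
    """Toy two-state scanpath generator: in the "feed" state the gaze
    makes small local steps; in the "fly" state it relocates with a long
    jump.  Candidate landing points are accepted in proportion to local
    saliency (values assumed in [0, 1]).  All parameters are illustrative
    simplifications, not the model from the paper."""
    h, w = saliency.shape
    y, x = h / 2, w / 2                       # start at the image centre
    path = [(y, x)]
    for _ in range(n_fix - 1):
        step = fly_step if rng.random() < p_fly else feed_step
        for _attempt in range(50):            # rejection sampling on saliency
            ang = rng.uniform(0.0, 2 * np.pi)
            r = rng.exponential(step)
            ny = min(max(y + r * np.sin(ang), 0), h - 1)
            nx = min(max(x + r * np.cos(ang), 0), w - 1)
            if rng.random() < saliency[int(ny), int(nx)]:
                y, x = ny, nx
                break
        path.append((y, x))
    return path
```

Because both the state switching and the landing points are random, repeated runs produce different scanpaths over the same saliency map, mimicking the inter-observer and inter-trial variability the model targets.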

Journal ArticleDOI
TL;DR: The present paper is an introduction or foreword to the Special Edition of Annals of Telecommunications on "Radio-over-fibre for green wireless access networks".
Abstract: The present paper is an introduction or foreword to the Special Edition of Annals of Telecommunications on "Radio-over-fibre for green wireless access networks".

Journal ArticleDOI
TL;DR: This paper presents a chipless radio frequency identification (RFID) tag using group delay encoding, composed of commensurate cascaded transmission-line sections coupled at alternate ends (also known as C-sections).
Abstract: Chipless radio frequency identification (RFID) is an emerging research area. Recent developments prove its ability to compete with low-cost identification systems such as barcodes in the coming years. Chipless RFID encodes data using different kinds of spectral signature produced by planar images, as in the case of barcodes; the difference here is that those images are made of conductive materials. Among the different ways of encoding information, a powerful one is the time-domain approach. This paper presents a tag using group delay encoding. The proposed chipless tag is composed of commensurate cascaded transmission-line sections coupled at alternate ends (also known as C-sections). It consists of a single group of C-sections; however, in order to increase the coding capacity, the proposed tag can also operate at multiple frequencies. In addition, the tag is compatible with commercial ultra-wideband radar. The proposed tag is validated experimentally and exhibits a good reading range of 1.2 m.

Journal ArticleDOI
TL;DR: This paper uses artificial immune system (based on the clonal selection theory) to obtain the optimal solutions without any reformulations or mathematical costs and shows that the proposed algorithm outperforms the genetic algorithm used in the previous works.
Abstract: In cognitive radio technology, spectrum sensing enables users to sense the environment and find spectrum holes. Cooperative sensing is a good approach for reliable detection of primary users in shadowed environments. In this study, spatial-spectral joint detection, with constraints that keep the interference at the primary user below a suitable level, is considered as the optimization problem for collaborative sensing. Because of the non-convex nature of this problem, convex optimization can obtain only near-optimal solutions. In this paper, we use an artificial immune system (based on clonal selection theory) to obtain optimal solutions without any reformulations or additional mathematical cost. Numerical results show that our proposed algorithm outperforms the genetic algorithm used in previous work.
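A clonal-selection optimiser in the spirit of this approach can be sketched as a CLONALG-style loop on a generic objective. The cloning and hypermutation schedule below, and all parameters, are assumptions for illustration; the paper's variant and its sensing objective are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def clonal_selection(fitness, dim, lo, hi, pop=20, gens=60,
                     clones_best=5, mutate=0.3):
    """Minimal clonal-selection optimiser (CLONALG-style sketch): clone
    the best antibodies in proportion to their rank, apply hypermutation
    whose amplitude grows with rank (worse antibodies mutate more), and
    refill the population with random newcomers.  Maximises `fitness`
    over the box [lo, hi]^dim.  Parameters are illustrative."""
    P = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        f = np.array([fitness(x) for x in P])
        P = P[np.argsort(-f)]                  # sort by fitness, best first
        new = [P[0]]                           # elitism: keep the best as-is
        for rank in range(clones_best):
            for _ in range(clones_best - rank):   # better rank => more clones
                cand = P[rank] + rng.normal(0.0, mutate * (rank + 1), dim)
                new.append(np.clip(cand, lo, hi))
        while len(new) < pop:                  # diversity via random newcomers
            new.append(rng.uniform(lo, hi, dim))
        P = np.array(new[:pop])
    f = np.array([fitness(x) for x in P])
    return P[np.argmax(f)]
```

On a simple concave test objective such as -||x - 1||^2, the loop converges to the neighbourhood of the optimum within a few dozen generations, without any gradient or convex reformulation.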

Journal ArticleDOI
TL;DR: Dataflow representations from the upcoming MPEG Reconfigurable Media Coding (RMC) standard are used to supply the decoding information to adaptive decoders and two optimizations based on dataflow representations and dynamic compilation are proposed that enhance flexibility and performance of multimedia applications.
Abstract: This paper proposes two optimization methods, based on dataflow representations and dynamic compilation, that enhance the flexibility and performance of multimedia applications. These methods are intended for an adaptive decoding context, i.e., one in which decoders can adapt their decoding process according to the bitstream. This adaptation is made possible by coupling the decoding information needed to process a stream with the coded stream itself. In this paper, we use dataflow representations from the upcoming MPEG Reconfigurable Media Coding (RMC) standard to supply the decoding information to adaptive decoders. The benefits claimed for MPEG RMC are the reuse of coding tools between different decoder specifications and execution scalability on different processing units from a single specification, which can target hardware and/or software platforms. These benefits are not yet achievable in practice, as these specifications are not used at the receiver side in MPEG RMC. We validate these benefits and propose two optimizations for the generation and execution of dataflow models: the first exploits the reuse of coding tools to reduce the time needed to obtain, i.e., configure, executable decoders; the second provides an efficient, dynamic, and scalable execution matched to the features of the execution platform. We show the practical impact of these two optimizations on two decoder representations compliant with the MPEG-4 Part 2 Simple Profile and MPEG-4 Advanced Video Coding standards. The results show that the configuration time can be reduced by a factor of 3 and decoder performance increased by 50 %.

Journal ArticleDOI
TL;DR: A novel invariant curved surface representation is constructed from the superposition of the two geodesic potentials generated from a given couple of surface points; by sampling this continuous representation, invariant points are extracted, and an efficient approximation in the sense of the shape distance is obtained.
Abstract: In this paper, we introduce a novel curved surface representation that is invariant under the 3D motion group. It is constructed from the superposition of the two geodesic potentials generated from a given couple of surface points. By sampling this continuous representation, invariant points are extracted from a large neighborhood around these reference points. Different numerical methods are implemented in order to find an efficient approximation in the sense of the shape distance. The influence of small distortions applied to the positions of the reference points is analyzed. We apply the proposed representation to real 3D images. The experiments are performed on the Bosphorus 3D facial database.

Journal ArticleDOI
TL;DR: A model that correlates QoS parameters and QoE factors with impact on the variation of the user’s perception of the quality is described, applicable in many contexts, but essentially as a tool for service providers to estimate the rank customers may give to a content.
Abstract: The continuous improvement in the delivery of advanced video services, along with evolving technical conditions at the client side, has contributed to the appearance of new methods for evaluating quality of service (QoS) and quality of experience (QoE) from the user’s point of view. This article describes the development of a model that correlates the QoS parameters and QoE factors that affect the user’s perception of quality. A quality assessment test was performed with 40 participants who rated more than 140 videos. Detailed analysis of the data collected from the test showed that all the considered factors had a significant impact on the perceived quality. Those factors were aggregated into a single model using linear regression techniques to combine their behavior and assign adequate weights to each factor. The results of the model’s validity tests were encouraging, achieving 99 % accuracy. This model can be considered a new no-reference metric for inferring perceived quality, applicable in many contexts, but essentially as a tool for service providers to estimate the rating customers may give to a content.
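The aggregation step described above can be sketched as an ordinary least-squares fit that maps factor values to a mean opinion score. The factor columns and weights below are made up for illustration; the paper's actual factors and fitted weights differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# 6 hypothetical rated videos, 3 hypothetical quality factors per video.
X = rng.random((6, 3))
A = np.hstack([X, np.ones((6, 1))])        # append an intercept column

# Generate noiseless toy ratings from known weights, then recover them.
true_w = np.array([3.0, -2.5, -2.0, 1.5])  # factor weights + intercept
mos = A @ true_w                           # stand-in for subjective scores

weights, *_ = np.linalg.lstsq(A, mos, rcond=None)
predicted = A @ weights
```

With real subjective data the fit is not exact, and the residuals (and held-out accuracy, as in the paper's validity tests) indicate how well the linear model captures the perceived quality.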

Journal ArticleDOI
TL;DR: Although dealing with a particular case of chaotic system, the paper contains the necessary elements so that the overall procedure can be applied to other chaotic maps (e.g., tent map).
Abstract: The paper presents a new approach to generating enciphering sequences useful in information protection, illustrated on images. The procedure is supported both theoretically and experimentally, combining elements derived from the running-key cipher, information theory, and statistics. The enciphering key generator is based on the logistic map, and its theoretical properties are confirmed by statistical tests. The new enciphering sequences comply with the fair-coin model, and the randomly chosen initial conditions of the logistic map (which define the enciphering sequence) can be part of the secret key. Although dealing with a particular chaotic system, the paper contains the necessary elements so that the overall procedure can be applied to other chaotic maps (e.g., the tent map).
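A minimal sketch in the spirit of this construction: iterate the logistic map x → r·x·(1−x) in the chaotic regime (r = 4), threshold the orbit to bits, and XOR the resulting keystream with the plaintext bytes (e.g., image pixels). The warm-up length and thresholding rule here are illustrative choices, not the paper's exact construction:

```python
def keystream_bytes(x0, n_bytes, r=4.0, warmup=1000):
    """Logistic-map keystream: threshold the chaotic orbit to bits."""
    x = x0
    for _ in range(warmup):                  # discard the transient
        x = r * x * (1 - x)
    out = bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            x = r * x * (1 - x)
            byte = (byte << 1) | (1 if x >= 0.5 else 0)
        out.append(byte)
    return bytes(out)

def xor_cipher(data, x0):
    """Running-key XOR cipher; the initial condition x0 acts as the key."""
    ks = keystream_bytes(x0, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

plain = bytes(range(16))                     # stand-in for image pixel bytes
cipher = xor_cipher(plain, 0.3141592653)
```

Because XOR with the same keystream is self-inverse, applying `xor_cipher` with the same initial condition decrypts; a slightly different x0 yields a completely different keystream, which is the sensitivity property the secret key relies on.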

Journal ArticleDOI
TL;DR: A super-resolution method for depth maps that exploits the side information from a standard color camera and uses a segmented version of the high-resolution color image acquired by the color camera in order to identify the main objects in the scene and a novel surface prediction scheme to interpolate the depth samples provided by the ToF camera.
Abstract: The extraction of depth information from dynamic scenes is an intriguing topic because of its prospective role in many applications, including free-viewpoint and 3D video systems. Time-of-flight (ToF) range cameras allow the acquisition of depth maps at video rate, but they have limited resolution, especially compared with standard color cameras. This paper presents a super-resolution method for depth maps that exploits side information from a standard color camera: the proposed method uses a segmented version of the high-resolution color image acquired by the color camera to identify the main objects in the scene, and a novel surface prediction scheme to interpolate the depth samples provided by the ToF camera. Effective solutions are provided for critical issues such as the joint calibration of the two devices and the unreliability of the acquired data. Experimental results on both synthetic and real-world scenes show that the proposed method obtains more accurate interpolation than standard interpolation approaches and state-of-the-art joint depth and color interpolation schemes.
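A toy sketch of the segmentation-guided idea, heavily simplified with respect to the paper's surface prediction scheme: each high-resolution pixel takes the average of the low-resolution ToF depth samples that fall inside the same color segment, so depth edges follow segment boundaries instead of being blurred across them:

```python
import numpy as np

def upsample_depth(depth_lr, seg_hr, scale):
    """Assign each HR pixel the mean of the LR depth samples in its segment."""
    h, w = seg_hr.shape
    depth_hr = np.zeros((h, w))
    # Positions of the low-resolution samples on the high-resolution grid.
    ys, xs = np.meshgrid(np.arange(depth_lr.shape[0]) * scale,
                         np.arange(depth_lr.shape[1]) * scale, indexing="ij")
    sample_seg = seg_hr[ys, xs]
    for label in np.unique(seg_hr):
        samples = depth_lr[sample_seg == label]
        if samples.size:
            depth_hr[seg_hr == label] = samples.mean()
    return depth_hr

# Two-segment toy scene: left half and right half at different depths.
seg = np.zeros((4, 4), dtype=int)
seg[:, 2:] = 1
depth_lr = np.array([[1.0, 2.0],
                     [1.0, 2.0]])            # 2x2 ToF depth map
depth_hr = upsample_depth(depth_lr, seg, scale=2)
```

The paper replaces the per-segment mean with a predicted surface fitted to the samples of each segment, which handles slanted and curved surfaces rather than only fronto-parallel ones.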

Journal ArticleDOI
TL;DR: It was established that for a fixed size of the mark, a hybrid watermark insertion performed into a new disparity map representation is the only solution jointly featuring imperceptibility, robustness against the three classes of attacks, and nonprohibitive computational cost.
Abstract: Despite the sound theoretical, methodological, and experimental background inherited from 2D video, stereoscopic video watermarking has imposed itself as an open research topic. Paving the way towards practical deployment of such copyright protection mechanisms, the present paper is structured as a comparative study of the main classes of 2D watermarking methods (spread spectrum, side information, hybrid) and of their related optimal stereoscopic insertion domains (view or disparity based). The performances are evaluated in terms of transparency, robustness, and computational cost. First, the transparency of the watermarked content is assessed by both subjective protocols (according to ITU-R BT 500-12 and BT 1438 recommendations) and objective quality measures (five metrics based on pixel differences and on correlation). Secondly, the robustness is objectively expressed by means of the watermark detection bit error rate against several classes of attacks, such as linear and nonlinear filtering, compression, and geometric transformations. Thirdly, the computational cost is estimated for each processing step involved in the watermarking chain. All the quantitative results are obtained by processing two corpora of stereoscopic visual content: (1) the 3DLive corpus, totaling about 2 h of 3D TV content captured by French professionals, and (2) the MPEG 3D video reference corpus, composed of 17 min provided by both academic and industrial communities. It was thus established that, for a fixed size of the mark, a hybrid watermark insertion performed in a new disparity map representation is the only solution jointly featuring imperceptibility (according to the subjective tests), robustness against the three classes of attacks, and a nonprohibitive computational cost.

Journal ArticleDOI
TL;DR: To improve the power amplifier (PA) energy efficiency, a polarization–amplitude–phase modulation (PAPM) scheme in wireless communication is proposed, and the simulation results show that PAPM can improve the PA energy efficiency significantly.
Abstract: To improve power amplifier (PA) energy efficiency, a polarization–amplitude–phase modulation (PAPM) scheme for wireless communication is proposed. The proposed scheme uses the signal’s polarization state (PS), amplitude, and phase as the information-bearing parameters. Thus, the data rate can be further enhanced with respect to traditional amplitude–phase modulation. Also, since the transmitted signal’s PS, which is controlled entirely by orthogonally dual-polarized antennas, is unaffected by the PA, PAPM allows the PA to work in its nonlinear region and thus achieve high PA efficiency. To further optimize the PA energy efficiency of PAPM, a constrained optimization problem over the output back-off value and the ratio between the data carried by the PS and by the amplitude–phase is formulated, and the distribution of the optimum solutions is presented. The simulation results show that PAPM can improve the PA energy efficiency significantly.
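An illustrative PAPM symbol mapper, to make the bit split concrete: here two bits select a polarization angle on the dual-polarized antenna basis and two bits select a QPSK amplitude–phase symbol. The constellation sizes and the 2+2 bit split are arbitrary choices for illustration, not the paper's design:

```python
import numpy as np

POL_ANGLES = np.deg2rad([0, 45, 90, 135])   # 2 bits -> polarization angle
QPSK = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # 2 bits -> phase

def papm_map(bits4):
    """Map 4 bits to the (H, V) feed pair of a dual-polarized antenna."""
    p = bits4[0] * 2 + bits4[1]             # polarization index
    q = bits4[2] * 2 + bits4[3]             # amplitude-phase index
    s = QPSK[q]
    theta = POL_ANGLES[p]
    # Linear polarization at angle theta carrying the complex symbol s.
    return np.array([np.cos(theta) * s, np.sin(theta) * s])

sym = papm_map([1, 0, 0, 1])                # theta = 90 deg: all power on V
```

Note that the polarization index only steers power between the two antenna feeds; the PA nonlinearity distorts amplitude and phase but leaves the power ratio between feeds, and hence the PS bits, intact, which is the property the scheme exploits.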