
Showing papers in "Annales Des Télécommunications in 2020"


Journal ArticleDOI
TL;DR: Two novel lightweight networks are proposed that obtain higher recognition precision while using fewer trainable parameters, making them useful when deploying deep convolutional neural networks (CNNs) on mobile embedded devices.
Abstract: Deeper neural networks have achieved great results in the field of computer vision and have been successfully applied to tasks such as traffic sign recognition. However, as traffic sign recognition systems are often deployed in resource-constrained environments, it is critical for the network design to be slim and accurate in these instances. Accordingly, in this paper, we propose two novel lightweight networks that obtain higher recognition precision while using fewer trainable parameters. Knowledge distillation transfers the knowledge in a trained model, called the teacher network, to a smaller model, called the student network. Moreover, to improve the accuracy of traffic sign recognition, we also implement a new module in our teacher network that combines two streams of feature channels with dense connectivity. To enable easy deployment on mobile devices, our student network is a simple end-to-end architecture containing five convolutional layers and a fully connected layer. Furthermore, by using batch normalization (BN) scaling factors driven towards zero to identify insignificant channels, we prune redundant channels from the student network, yielding a compact model with accuracy comparable to that of more complex models. Our teacher network exhibited an accuracy rate of 93.16% when trained and tested on the CIFAR-10 general dataset. Using the knowledge of our teacher network, we train the student network on the GTSRB and BTSC traffic sign datasets. Our student model uses only 0.8 million parameters while still achieving accuracies of 99.61% and 99.13% on the two datasets, respectively. All experimental results show that our lightweight networks can be useful when deploying deep convolutional neural networks (CNNs) on mobile embedded devices.
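
The BN-based pruning step lends itself to a short illustration. Below is a minimal PyTorch sketch (hypothetical layer sizes and keep ratio, not the authors' code) that ranks output channels by the magnitude of their BN scaling factors and builds a keep-mask:

```python
import torch
import torch.nn as nn

def bn_channel_mask(bn: nn.BatchNorm2d, keep_ratio: float = 0.8) -> torch.Tensor:
    """Keep the channels whose BN scaling factor |gamma| is largest.

    Channels whose gamma has been driven towards zero (e.g. by sparsity
    regularization during training) are marked as prunable.
    """
    gamma = bn.weight.detach().abs()                 # BN scaling factors
    n_keep = max(1, int(keep_ratio * gamma.numel()))
    threshold = gamma.sort(descending=True).values[n_keep - 1]
    return gamma >= threshold                        # True = keep this channel

# Usage sketch: mask the output channels of a conv + BN pair.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(32)
mask = bn_channel_mask(bn, keep_ratio=0.75)
print(f"keeping {int(mask.sum())} of {mask.numel()} channels")
```

In a real pipeline, the mask would then be used to slice the convolution weights and rebuild the compact student network.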

158 citations


Journal ArticleDOI
TL;DR: The goal of this paper is to present a simple heuristic for detecting the selfish mining attack (and its variants) in blockchain networks that use the proof-of-work (PoW) consensus algorithm.
Abstract: The blockchain technology emerged in 2008 as a distributed peer-to-peer network structure, capable of ensuring security for transactions made using the Bitcoin digital currency, without the need for third-party intermediaries to validate them. Although its beginning was linked to cryptocurrencies, its use has diversified over recent years. There are various projects using blockchain technology to perform document validation, electronic voting, tokenization of non-perishable goods, and many others. With its increasing use, concern arises about possible attacks that could threaten the integrity of the chain's consensus. One of the well-known attacks on the blockchain consensus mechanism is the selfish mining attack, in which malicious nodes can deflect their behavior from the standard pattern by not immediately disclosing their newly mined blocks. This malicious behavior can result in a disproportionate share of rewards for those nodes, especially if they have significant processing power. The goal of this paper is to present a simple heuristic for detecting the selfish mining attack (and its variants) in blockchain networks that use the proof-of-work (PoW) consensus algorithm. The proposal is to signal when the blockchain fork height deviates from the standard, indicating when the network is under the influence of such attacks.
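
A heuristic of this kind can be sketched in a few lines. The following is a hedged illustration (the baseline statistics, window size, and deviation threshold are all hypothetical) that raises an alarm when the rolling mean of observed fork heights drifts away from the honest-mining baseline:

```python
from collections import deque

def selfish_mining_alarm(fork_heights, baseline_mean, baseline_std, k=2.0, window=20):
    """Signal when observed fork heights deviate from the historical baseline.

    fork_heights  -- stream of observed fork heights (0 = no fork)
    baseline_mean -- mean fork height under honest mining (hypothetical)
    baseline_std  -- its standard deviation (hypothetical)
    """
    recent = deque(maxlen=window)
    for h in fork_heights:
        recent.append(h)
        mean = sum(recent) / len(recent)
        yield mean > baseline_mean + k * baseline_std  # True = possible attack

# Example: honest traffic followed by a burst of deep forks.
stream = [0, 0, 1, 0, 0, 0, 1, 0] * 10 + [2, 3, 2, 3, 2, 3] * 10
alarms = list(selfish_mining_alarm(stream, baseline_mean=0.25, baseline_std=0.45))
print("first alarm at observation", alarms.index(True))
```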

49 citations


Journal ArticleDOI
TL;DR: A comprehensive review of Internet traffic classification and obfuscation techniques, the various data representation methods, and the different objectives of Internet traffic classification, largely considering the ML-based solutions.
Abstract: Traffic classification acquired the interest of the Internet community early on. Different approaches have been proposed to classify Internet traffic to manage both security and Quality of Service (QoS). However, traditional classification approaches consisting of modifying the Transmission Control Protocol/Internet Protocol (TCP/IP) scheme have not been adopted due to their complex management. In addition, port-based methods and deep packet inspection have limitations in dealing with new traffic characteristics (e.g., dynamic port allocation, tunneling, encryption). Conversely, machine learning (ML) solutions effectively classify traffic down to the device type and specific user action. Another research direction aims to anonymize Internet traffic and thwart classification to maintain user privacy. Existing traffic surveys focus on classification and do not consider anonymization. Here, we review the Internet traffic classification and obfuscation techniques, largely considering the ML-based solutions. In addition, this paper presents a comprehensive review of various data representation methods and the different objectives of Internet traffic classification. Finally, we present the key findings, limitations, and recommendations for future research.

46 citations


Journal ArticleDOI
TL;DR: This paper investigates LoRa, a low-power technology offering large coverage, but low transmission rates, and studies the propagation of LoRa signals in forest, urban, and suburban vehicular environments.
Abstract: Sensing is an activity of paramount importance for smart cities. The coverage of large areas based on reduced infrastructure and low energy consumption is desirable. In this context, Low Power Wide Area Network (LPWAN) technology plays an important role. In this paper, we investigate LoRa, a low-power technology offering large coverage but low transmission rates. Radio range and data rate are tunable by using different spreading factors and coding rates, which are configuration parameters of the LoRa physical layer. LoRa can cover large areas, but variations in the environment affect link quality. This work studies the propagation of LoRa signals in forest, urban, and suburban vehicular environments. In addition to these environments' variable propagation conditions, we evaluate scenarios with node mobility. To characterize the communication link, we mainly use the Received Signal Strength Indicator (RSSI), Signal-to-Noise Ratio (SNR), and Packet Delivery Ratio (PDR). As for node mobility, speeds are chosen according to prospective applications. Our results show that the link reaches up to 250 m in the forest scenario, while in the vehicular scenario it reaches up to 2 km. In contrast, in the urban scenario, with high-density buildings and human activity, the maximum link range is about 200 m.

38 citations


Journal ArticleDOI
TL;DR: This paper surveys the main consensus mechanisms in blockchain solutions, highlights the properties of each one, differentiates deterministic and probabilistic consensus mechanisms, and highlights coordination solutions that facilitate data distribution on the blockchain without the need for a sophisticated consensus mechanism.
Abstract: Blockchain is a disruptive technology that relies on the distributed nature of the peer-to-peer network while performing an agreement, or consensus, mechanism to achieve an immutable, global, and consistent registry of all transactions. Thus, a key challenge in developing blockchain solutions is to design the consensus mechanism properly. As a consequence of being a distributed application, any consensus mechanism can offer only two of the three properties of consistency, availability, and partition tolerance. In this paper, we survey the main consensus mechanisms in blockchain solutions, and we highlight the properties of each one. Moreover, we differentiate deterministic and probabilistic consensus mechanisms, and we highlight coordination solutions that facilitate data distribution on the blockchain without the need for a sophisticated consensus mechanism.

37 citations


Journal ArticleDOI
TL;DR: This paper proposes a protocol through which all treatment teams involved in emergency care can securely decrypt relevant data from the patient's EMR and add new information about the patient's status, and it presents a formal security analysis and some initial experimental results.
Abstract: In emergency care, fast and efficient treatment is vital. The availability of Electronic Medical Records (EMR) allows healthcare professionals to access a patient's data promptly, which facilitates the decision-making process and saves time by not repeating medical procedures. Unfortunately, the complete EMR of a patient is often not available to all treatment teams during an emergency. Cloud services emerge as a promising solution to this problem by allowing ubiquitous access to information. However, EMR storage and sharing through clouds raise several concerns about security and privacy. To this end, we propose a protocol through which all treatment teams involved in the emergency care can securely decrypt relevant data from the patient's EMR and add new information about the patient's status. Furthermore, our protocol ensures that treatment teams can only access the patient's EMR for the period during which the patient is under their care. Finally, we present a formal security analysis of our protocol and some initial experimental results.

22 citations


Journal ArticleDOI
TL;DR: This study derives selection guidelines for choosing the optimal value of N and the maximum number of DRX cycles, mathematically investigating the switching technique in the DRX mechanism using the M[X]/G/1 vacation queue system with N-policy.
Abstract: Power saving and Quality of Service (QoS) are the two significant aspects of Long Term Evolution-Advanced (LTE-A) networks. DRX ("Discontinuous Reception") is a mechanism commonly exercised to enhance the power-saving competency of a User Equipment (UE) in LTE-A networks. In this paper, based on the kind of traffic running at the UE, a new mechanism is proposed to switch the DRX operation from the power active state to the power saving state and vice versa. We mathematically investigate this switching technique in the DRX mechanism using the M[X]/G/1 vacation queue system with N-policy. Various performance and energy metrics are obtained and examined numerically. Further, the optimal value of N, as well as the maximum number of DRX cycles, is computed to obtain the minimal amount of power consumption. The study concludes with selection guidelines for choosing the optimal values of N and the maximum number of DRX cycles.
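
The optimization over N can be illustrated numerically. The sketch below uses an entirely hypothetical power/delay model (the paper derives the real expressions from the M[X]/G/1 vacation queue analysis) and simply grid-searches the N that minimizes it:

```python
def mean_power(N, p_active=1.0, p_sleep=0.05, rho=0.25):
    """Toy N-policy trade-off: the UE sleeps until N packets accumulate,
    trading wake-up delay for power saving.
    (Placeholder expression, NOT the paper's derived formula.)
    """
    sleep_fraction = N / (N + 10 * rho)   # longer accumulation -> more sleep
    delay_penalty = 0.02 * N              # QoS cost grows with N
    return p_active * (1 - sleep_fraction) + p_sleep * sleep_fraction + delay_penalty

best_N = min(range(1, 31), key=mean_power)
print("optimal N under the toy model:", best_N)
```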

16 citations


Journal ArticleDOI
TL;DR: This research work conducts a thorough investigation, analysis, and comparison of the performance of the most common software applications used for simulating and modeling energy consumption in green building; subsequently, the best application is identified based on unified selection criteria.
Abstract: The main goal of green building is to provide a comfortable life for its residents while countering the negative impacts on the surrounding environment. This goal can be achieved by applying effective methodologies throughout the entire life cycle of the building and maintaining efficient usage of the available energy resources. As part of "building information modeling (BIM)," there are numerous software simulation applications that can be used for analyzing and modeling energy consumption in all stages of green building, from the initial stages of planning and design up to the final stages of operation and maintenance. In this research work, we conduct a thorough investigation, analysis, and comparison of the performance of the most common software applications used for simulating and modeling energy consumption in green building; subsequently, the best application is identified based on unified selection criteria, which include various sets of design parameters and operating conditions.

15 citations


Journal ArticleDOI
TL;DR: MineCap is proposed, a dynamic online mechanism for detecting and blocking covert cryptocurrency mining flows, using machine learning on software-defined networking and a learning technique called super incremental learning, a variant of the super learner applied to online learning.
Abstract: Covert mining of cryptocurrency implies the use of valuable computing resources and high energy consumption. In this paper, we propose MineCap, a dynamic online mechanism for detecting and blocking covert cryptocurrency mining flows, using machine learning on software-defined networking. The proposed mechanism relies on Spark Streaming for online processing of network flows, and, when it identifies a mining flow, it requests that the network controller block the flow. We also propose a learning technique called super incremental learning, a variant of the super learner applied to online learning, which takes the classification probabilities of an ensemble of classifiers as features for an incremental learning classifier. Hence, we design an accurate mechanism to classify mining flows that learns from incoming data, with an average of 98% accuracy, 99% precision, 97% sensitivity, and 99.9% specificity, while avoiding concept drift-related issues.
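
The super incremental learning idea can be sketched with scikit-learn. In this hedged example (synthetic data and arbitrary model choices, not the MineCap implementation), the class probabilities of an offline-trained ensemble become the feature vector for an incremental classifier updated with partial_fit:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] > 0).astype(int)          # synthetic "mining flow" label

# Base ensemble, trained offline on an initial batch.
base = [RandomForestClassifier(n_estimators=20, random_state=0),
        DecisionTreeClassifier(random_state=0)]
for clf in base:
    clf.fit(X[:1000], y[:1000])

def meta_features(batch):
    """Stack the base classifiers' class probabilities as meta-features."""
    return np.hstack([clf.predict_proba(batch) for clf in base])

# Incremental meta-classifier, updated online batch by batch.
meta = SGDClassifier(loss="log_loss")
for start in range(1000, 2000, 100):
    batch, labels = X[start:start + 100], y[start:start + 100]
    meta.partial_fit(meta_features(batch), labels, classes=np.array([0, 1]))

print("online accuracy:", meta.score(meta_features(X[1500:]), y[1500:]))
```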

13 citations


Journal ArticleDOI
TL;DR: The results indicate that, for a mainly rural environment operating at a similar sub-GHz frequency band, NB-IoT outperforms LoRa due to the directivity associated with directional antennas, which provide better coverage for devices that are far from the BS but near the main beam.
Abstract: In this work, we resort to computer simulations to compare the coverage of long range (LoRa) and narrowband (NB)-IoT in two different realistic scenarios of southern Brazil, encompassing an overall area of 8182.6 km². The first scenario is predominantly rural with a few base stations (BSs), while the other corresponds to a mostly urban area with a high density of BSs. Our analysis, which adopts the actual positions and parameters of the BSs of a given operator, also takes into account the digital elevation model (DEM) of the environments in order to calculate the path loss, following a realistic propagation model from 3GPP. Our results indicate that, for a mainly rural environment operating at a similar sub-GHz frequency band, NB-IoT outperforms LoRa due to the directivity associated with directional antennas, which provide better coverage for devices that are far from the BS but near the main beam. However, LoRa presents better coverage, regardless of the site deployment, when NB-IoT operates in the 1900-MHz band.
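
Coverage comparisons of this kind reduce to a per-device link budget. The sketch below uses a generic log-distance path loss model as a stand-in for the 3GPP model used in the paper; every parameter value is illustrative only:

```python
import math

def log_distance_path_loss(d_m, f_mhz, n=3.0, d0_m=100.0):
    """Free-space loss at reference distance d0, then decay with exponent n
    (generic stand-in for the 3GPP propagation model)."""
    fspl_d0 = 20 * math.log10(d0_m) + 20 * math.log10(f_mhz) - 27.55  # dB
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m)

def is_covered(d_m, f_mhz, tx_dbm, rx_sensitivity_dbm, antenna_gain_db=0.0):
    rx_dbm = tx_dbm + antenna_gain_db - log_distance_path_loss(d_m, f_mhz)
    return rx_dbm >= rx_sensitivity_dbm

# Illustrative numbers only: a LoRa-like link vs an NB-IoT-like link at 5 km.
print(is_covered(5000, 915, tx_dbm=14, rx_sensitivity_dbm=-137))
print(is_covered(5000, 1900, tx_dbm=23, rx_sensitivity_dbm=-129, antenna_gain_db=15))
```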

13 citations


Journal ArticleDOI
TL;DR: The proposed pipeline is suitable for this task, since habits other than simply going from home to work and vice versa were found; the method can be used for understanding individual behavior and creating user profiles, revealing a panorama of human mobility patterns from raw mobility data.
Abstract: Human mobility patterns are associated with many aspects of our life. With the increasing popularity and pervasiveness of smartphones and portable devices, the Internet of Things (IoT) is turning into a permanent part of our daily routines. Positioning technologies that serve these devices, such as cellular antennas (GSM networks), global navigation satellite systems (GPS), and more recently the WiFi positioning system (WPS), provide large amounts of spatio-temporal data in a continuous way (data streams). In order to understand human behavior, the detection of important places and the movements between these places is a fundamental task. That said, this work proposes a method for discovering user habits over mobility data without any a priori or external knowledge. Our approach extends a density-based clustering method for spatio-temporal data to identify the meaningful places the individuals visit. On top of that, a Gaussian mixture model (GMM) is employed over the movements between visits to automatically separate the trajectories according to the key identifiers that may help describe a habit. By regrouping trajectories that look alike by day of the week, length, and starting hour, we discover the individual's habits. The evaluation of the proposed method is made over three real-world datasets. One dataset contains high-density GPS data; the others use GSM mobile phone data with a 15-min sampling rate and Google Location History data with a variable sampling rate. The results show that the proposed pipeline is suitable for this task, as habits other than just going from home to work and vice versa were found. This method can be used for understanding individual behavior and creating user profiles, revealing a panorama of human mobility patterns from raw mobility data.
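
The trajectory-grouping step can be sketched with scikit-learn. This is a hedged, minimal example with invented per-trip features (the paper's pipeline additionally performs density-based detection of the visited places):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Hypothetical per-trip features: [day_of_week, start_hour, trip_length_km].
commutes = np.column_stack([rng.integers(0, 5, 200),      # weekdays
                            rng.normal(8.5, 0.5, 200),    # ~08:30 departures
                            rng.normal(12.0, 1.0, 200)])  # ~12 km trips
errands = np.column_stack([rng.integers(5, 7, 60),        # weekends
                           rng.normal(11.0, 2.0, 60),
                           rng.normal(3.0, 1.0, 60)])
trips = np.vstack([commutes, errands])

# Fit a GMM and read each mixture component as one candidate "habit".
gmm = GaussianMixture(n_components=2, random_state=0).fit(trips)
labels = gmm.predict(trips)
for k in range(2):
    day, hour, km = gmm.means_[k]
    print(f"habit {k}: ~day {day:.1f}, start {hour:.1f}h, {km:.1f} km "
          f"({np.sum(labels == k)} trips)")
```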

Journal ArticleDOI
TL;DR: In this research, the feasibility of using process mining techniques for the analysis of event data from machine logs is investigated and a novel methodology, based on process mining, for profiling abnormal machine behaviour is proposed.
Abstract: Process mining is a set of techniques in the field of process management that have primarily been used to analyse business processes, for example for the optimisation of enterprise resources. In this research, the feasibility of using process mining techniques for the analysis of event data from machine logs is investigated. A novel methodology, based on process mining, for profiling abnormal machine behaviour is proposed. Firstly, a process model is constructed from the event logs of the healthy machines. This model can subsequently be used as a benchmark to compare process models of other machines by means of conformance checking. This comparison results in a set of conformance scores related to the structure of the model and other more complex aspects such as the differences in duration of particular traces, the time spent in individual events, and the relative path frequency. The identified differences can subsequently be used as a basis for root cause analysis. The proposed approach is evaluated on a real-world industrial data set from the renewable energy domain, more specifically event logs of a fleet of inverters from several solar plants.
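
The benchmark-and-compare idea can be illustrated with a toy conformance proxy. The sketch below is not the paper's method (which checks conformance against a discovered process model and also compares durations and path frequencies); it only captures the flavor: learn the directly-follows transitions from healthy logs, then score other machines' traces by the fraction of conforming transitions:

```python
def learn_transitions(healthy_traces):
    """Collect the directly-follows event pairs observed in healthy machine logs."""
    allowed = set()
    for trace in healthy_traces:
        allowed.update(zip(trace, trace[1:]))
    return allowed

def conformance_score(trace, allowed):
    """Fraction of a trace's transitions also seen in the healthy benchmark."""
    pairs = list(zip(trace, trace[1:]))
    return sum(p in allowed for p in pairs) / len(pairs) if pairs else 1.0

healthy = [["start", "run", "idle", "run", "stop"],
           ["start", "run", "stop"]]
allowed = learn_transitions(healthy)
print(conformance_score(["start", "run", "fault", "stop"], allowed))  # ~0.33
```

Low scores flag machines whose behavior deviates from the healthy benchmark and are candidates for root cause analysis.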

Journal ArticleDOI
TL;DR: This research article presents an implementation of a high-performance Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) core for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM)-based applications.
Abstract: This research article presents an implementation of a high-performance Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) core for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM)-based applications. The radix-2 butterflies are implemented using an arithmetic optimization technique that reduces the number of complex multipliers involved. High-performance approximate multipliers with a negligible error rate are used to eliminate the power-consuming complex multipliers in the radix-2 butterflies. The FFT/IFFT prototype using the proposed high-performance butterflies is implemented on an Altera Quartus EP2C35F672C6 Field Programmable Gate Array (FPGA), yielding a 40% improvement in logic utilization, a 33% improvement in timing parameters, and a 14% improvement in throughput rate. The proposed optimized radix-2-based FFT/IFFT core was also implemented in a 45-nm CMOS technology library using Cadence tools; it occupies an area of 143.135 mm² and consumes 9.10 mW with a maximum throughput of 48.44 Gbps. Similarly, the high-performance approximate complex multiplier-based optimized FFT/IFFT core occupies an area of 64.811 mm² and consumes 6.18 mW with a maximum throughput of 76.44 Gbps.
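
For reference, the radix-2 butterfly at the heart of such a design computes, per stage, a sum and a twiddle-rotated difference. A short exact-arithmetic Python rendition of the decimation-in-time recursion (the paper's contribution lies in replacing the complex multiplications with approximate hardware multipliers, which this sketch does not model):

```python
import cmath

def fft_radix2(x):
    """Recursive decimation-in-time radix-2 FFT (len(x) must be a power of 2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle multiply
        out[k] = even[k] + t               # butterfly upper output
        out[k + n // 2] = even[k] - t      # butterfly lower output
    return out

print(fft_radix2([1, 0, 0, 0]))  # DFT of an impulse: all ones
```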

Journal ArticleDOI
TL;DR: The proposed multi-layer parallel handover optimization (MPHO) is capable of handling application awareness in its decision process; evaluation results validate the achieved enhancements, with a smaller HO failure rate and increased throughput.
Abstract: The key idea of this paper is to use a cross-layer triggering concept in order to control the vertical handover (VHO) in heterogeneous networks. Current mobility management protocols cannot handle the ever-growing quality of service (QoS) demand of connected mobile nodes (MNs). Motivated by these challenging problems, the proposed multi-layer parallel handover optimization (MPHO) is capable of handling application awareness in its decision process. Each layer in the protocol stack implements its own decision mechanisms in response to environmental changes, and independent decisions may lead to suboptimal operation when reacting to the same event. MPHO coordinates the triggered actions using a centralized controller. MPHO utilizes dynamic attributes to reduce HO latencies and signaling overheads with a more advanced coordination logic. It contains a prediction module that utilizes MN mobility patterns. Evaluation results validate the achieved enhancements, with a smaller HO failure rate and increased throughput. MPHO achieves improvements in terms of perceived quality and delay constraints.

Journal ArticleDOI
TL;DR: Even when subjected to the very constrained conditions expected in the future with the growing number of IoT devices, IoTligent devices can cope with the spectrum scarcity that will then occur in unlicensed bands.
Abstract: This paper describes the theoretical principles and experimental results of reinforcement learning algorithms embedded into IoT (Internet of Things) devices in order to tackle the problem of radio collision mitigation in ISM unlicensed bands. Multi-armed bandit (MAB) learning algorithms are used here to improve both the IoT network's capability to support the expected massive number of objects and the energetic autonomy of the IoT devices. We first illustrate the efficiency of the proposed approach in a proof-of-concept based on USRP software radio platforms operating on real radio signals. It shows how collisions with other RF signals are diminished for IoT devices that use MAB learning. Then we describe the first implementation of such algorithms on LoRa devices operating in a real LoRaWAN network at 868 MHz. We named this solution IoTligent. IoTligent adds neither processing overhead, so it can be run in the IoT devices, nor network overhead, so it requires no change to the LoRaWAN protocol. Real-life experiments in a real LoRa network show that the battery life of IoTligent devices can be extended by a factor of 2 in the scenarios we faced during our experiments. Finally, we subject IoTligent devices to the very constrained conditions expected in the future with the growing number of IoT devices, by generating massive artificial IoT radio traffic in an anechoic chamber. We show that IoTligent devices can cope with the spectrum scarcity that will then occur in unlicensed bands.
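
The MAB loop at the core of such a scheme is compact. Below is a generic UCB1 channel-selection sketch (UCB1 is a standard algorithm of the family the paper evaluates; the success probabilities and horizon are illustrative, and a reward models an acknowledged, collision-free transmission):

```python
import math
import random

def ucb1_channel_selection(success_prob, horizon=5000, seed=1):
    """UCB1 over radio channels: each 'arm' is a channel, a reward of 1 means
    the uplink transmission was acknowledged (no collision)."""
    random.seed(seed)
    n_arms = len(success_prob)
    counts, sums = [0] * n_arms, [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:                    # play each channel once first
            arm = t - 1
        else:                              # empirical mean + exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if random.random() < success_prob[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Channel 2 is least congested: UCB1 concentrates its transmissions there.
print(ucb1_channel_selection([0.4, 0.5, 0.9, 0.3]))
```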

Journal ArticleDOI
TL;DR: A three-dimensional (3D) Markov chain analysis of a spectrum management scheme under the heterogeneous licensed bands of two different licensed spectrum pools in cognitive radio ad hoc networks, which offers significant improvement in the performance of secondary users under a heterogeneous licensed spectrum environment.
Abstract: Cognitive radio (CR) is a promising technology to address the spectrum scarcity and underutilization problems in ad hoc networks. With the help of cognitive radio technology, unlicensed users can efficiently utilize the unused parts of the heterogeneous licensed spectrum. In this article, we present a three-dimensional (3D) Markov chain analysis for a spectrum management scheme under the heterogeneous licensed bands of two different licensed spectrum pools in cognitive radio ad hoc networks. We present the concept of interpool and intrapool spectrum handoff in the proposed model and derive the blocking probability, dropping probability, non-completion probability, and throughput to estimate the performance of the secondary users under a heterogeneous licensed spectrum environment. The impact of secondary user dynamics, along with the primary users' activity model, on these performance metrics is also investigated for three different cases. The proposed model offers significant improvement in the performance of secondary users under a heterogeneous licensed spectrum environment in a CR ad hoc network.

Journal ArticleDOI
TL;DR: This work analyzes various approaches to quantum phase estimation, where a phase parameter characterizing a quantum process gets imprinted in a relative phase attached to a quantum state serving as a probe signal, and demonstrates the possibility of enhanced estimation performance, inaccessible classically, obtained via optimized quantum entanglement.
Abstract: The phase in quantum states is an essential information carrier for quantum telecommunications, signal processing, and computation. Quantum phase estimation is therefore a fundamental operation to extract and control useful information at the quantum level. Here, we analyze various approaches to quantum phase estimation, where a phase parameter characterizing a quantum process gets imprinted in a relative phase attached to a quantum state serving as a probe signal. The estimation approaches are based on standard concepts of signal processing (Fourier transform, maximum likelihood), yet operated in the quantum realm. We also exploit the Fisher information, both in its classical and its quantum forms, in order to assess the performance of each approach to quantum phase estimation. We demonstrate the possibility of enhanced estimation performance, inaccessible classically, which is obtained via optimized quantum entanglement. Beyond their significance to quantum phase estimation, the results illustrate how standard concepts of signal processing can contribute to the ongoing developments in quantum information and quantum technologies.
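
The benchmark underlying these comparisons can be stated compactly. For an unbiased estimator of the phase built from M independent probes, the (quantum) Cramér-Rao bound and the standard scaling results read:

```latex
% Quantum Cramér-Rao bound for phase estimation from M repeated probes:
\operatorname{Var}(\hat{\varphi}) \;\ge\; \frac{1}{M\,F_Q(\varphi)} .
% Scaling of the quantum Fisher information F_Q with the probe size N:
%   separable probes: F_Q \propto N    => \Delta\varphi \sim 1/\sqrt{N}  (standard quantum limit)
%   entangled probes: F_Q \propto N^2  => \Delta\varphi \sim 1/N         (Heisenberg scaling)
```

The classically inaccessible gain mentioned in the abstract corresponds to closing the gap between these two scalings via optimized entanglement.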

Journal ArticleDOI
TL;DR: This work presents a new data transmission scheme for a smart grid among the smart meter (SM), the electricity utility (EU), and the trusted authority (TA), where the EU can obtain the power consumption of each SM, but cannot get the real identity of the SM.
Abstract: In addition to changing service management, smart devices connect people and the objects around them and continuously collect data from them, in order to construct the notion of a smart city. Such data, produced by embedded devices and automatically transmitted over the Internet, provides people with the information to make decisions. A smart grid is one of the most popular applications in a smart city. Due to the insecurity of wireless channels, the security of data transmission in a smart grid has become a hot issue nowadays. Many schemes for data protection have been proposed, but they generally exhibit weaknesses. We present a new data transmission scheme for a smart grid among the smart meter (SM), the electricity utility (EU), and the trusted authority (TA). The EU can obtain the power consumption of each SM but cannot get the real identity of the SM. To preserve user privacy, if the consumption value exceeds the threshold in a specific time span or the identity of an SM is required for public affairs, the TA can track the identity in time. A formal proof in the random oracle model and a security analysis are given to show the security of the proposed scheme. Via the performance evaluation and network simulation, it is easy to see that our scheme is practical for a smart city.

Journal ArticleDOI
TL;DR: This work proposes a novel, lightweight, and energy-efficient function-based data aggregation approach for a cluster-based hierarchical WSN, employing a modified version of the Euclidean distance function to provide highly refined aggregated data to the base station.
Abstract: In wireless sensor networks (WSNs), data redundancy is a challenging issue that not only introduces network congestion but also consumes considerable sensor node resources. Data redundancy occurs due to the spatial and temporal correlations among the data gathered by the neighboring nodes. Data aggregation is a prominent technique that performs in-network filtering of the redundant data and accelerates knowledge extraction by eliminating the correlated data. However, most data aggregation techniques have low accuracy because they do not consider the presence of erroneous data from faulty nodes, which represents an open research challenge. To address this challenge, we have proposed a novel, lightweight, and energy-efficient function-based data aggregation approach for a cluster-based hierarchical WSN. Our proposed approach works at two levels: the node level and the cluster head level. At the node level, the data aggregation is performed using the exponential moving average (EMA), and a threshold-based mechanism is adopted to detect any outliers to improve the accuracy of data aggregation. At the cluster head level, we have employed a modified version of the Euclidean distance function to provide highly refined aggregated data to the base station. Our experimental results show that our approach reduces the communication cost, transmission cost, and energy consumption at the nodes and cluster heads and delivers highly refined, fused data to the base station.
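
The node-level step can be illustrated directly. A minimal hedged sketch (the smoothing factor and outlier threshold are illustrative, not the paper's tuned values):

```python
def ema_aggregate(readings, alpha=0.2, outlier_limit=4.5):
    """Node-level aggregation: exponential moving average (EMA) with a simple
    threshold test that discards likely faulty readings before fusing them."""
    ema = None
    for x in readings:
        if ema is None:
            ema = x                              # seed with the first reading
        elif abs(x - ema) > outlier_limit:
            continue                             # outlier: drop, keep the EMA
        else:
            ema = alpha * x + (1 - alpha) * ema  # fuse the accepted reading
    return ema

# 25 °C-ish temperatures with one faulty spike that the threshold rejects.
print(ema_aggregate([25.1, 24.9, 25.3, 87.0, 25.0, 25.2]))
```

At the cluster head level, the paper then applies its modified Euclidean distance function to the per-node aggregates before forwarding to the base station.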

Journal ArticleDOI
TL;DR: An ontology-supported hybrid reasoning model is presented by integrating case-based reasoning and rule-based reasoning, with implementation support for decision-makers to respond effectively in emergencies; the results show that the hybrid approach is efficient in decision support.
Abstract: Large-scale disasters pose significant response challenges for all governmental organizations and the general public. Several difficulties usually occur during the response efforts, making it important for the authorities to take timely key decisions to mitigate and recover from disastrous or emergency situations. We herein present an ontology-supported hybrid reasoning model by integrating case-based reasoning and rule-based reasoning with implementation support for decision-makers to effectively respond in case of emergencies. We also introduce a new hierarchically organized semantic knowledge representation model to represent the case base structure that enhances case-based reasoning to knowledge-intensive case-based reasoning. In addition, we obtain experimental results on the analysis of the proposed approach in terms of the efficiency of the decision support system. Hence, it seems reasonable to merge the advantages of both approaches using a hybrid model of knowledge representation. The model output presents an estimation of the number of resources to be deployed if an emergency occurs. The proposed approaches for both the knowledge representation structure and the inference algorithm have proved to improve the accuracy of recommendation in emergencies. The results show that our hybrid system approach is efficient in decision support. The ontology-supported hybrid reasoning approach is also further validated using subjective evaluation.

Journal ArticleDOI
TL;DR: This work proposes the density-connected cluster-based routing (DCCR) protocol, a position-based, density-adaptive, clustering-oriented routing protocol that maintains the connectivity between two successive forwarders by considering metrics such as density and the standard deviation of the average relative velocity.
Abstract: With the development of vehicular ad hoc networks (VANETs), intelligent transportation systems are gaining more attention for providing many services. However, the mobility characteristic of VANETs causes frequent route disconnections, particularly during data delivery. Clustering is one of the most efficient approaches to achieve a stable topology structure. Real-time applications need the data transmission delay to be relatively stable, and position-based routing with sufficient density in the neighborhood can achieve this objective easily. In this work, we propose the density-connected cluster-based routing (DCCR) protocol, a position-based, density-adaptive, clustering-oriented routing protocol. The approach maintains the connectivity between two successive forwarders by considering metrics such as density and the standard deviation of the average relative velocity. The proposed protocol demonstrates improvements in packet delivery ratio and end-to-end delay compared with existing approaches.

Journal ArticleDOI
TL;DR: This paper considers both variable-rate and constant-rate strategies for the space-time code selection technique for multiple-input single-output systems and shows that it is possible to find BER and throughput values close to those required when using a variable-rate technique with optimized threshold levels.
Abstract: In this paper, the space-time code selection technique for multiple-input single-output systems is optimized using particle swarm optimization. We consider both variable-rate and constant-rate strategies. For the variable-rate technique, we address the problems of minimizing the bit-error rate for a given throughput objective and maximizing the throughput for a given bit-error rate objective. For the constant-rate technique, we address the problem of minimizing the bit-error rate. Results show that it is possible to find BER and throughput values close to those required when using the variable-rate technique with optimized threshold levels. For the constant-rate technique, we obtain considerable energy-to-noise ratio gains when using optimized threshold levels.
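
The optimization loop itself is standard particle swarm optimization. A generic hedged sketch follows, where the objective is a placeholder for the paper's BER/throughput cost functions and all hyperparameters are textbook defaults:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Vanilla PSO minimizing `cost` over threshold vectors started in [0, 1]^dim."""
    random.seed(seed)
    xs = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                     # per-particle best positions
    gbest = min(pbest, key=cost)                   # swarm-wide best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if cost(x) < cost(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=cost)
    return gbest

# Placeholder cost standing in for a BER-vs-throughput objective.
print(pso(lambda t: sum((v - 0.3) ** 2 for v in t), dim=3))
```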

Journal ArticleDOI
TL;DR: A new turbo decoder parallelization approach is proposed for x86 multi-core processors that provides both high throughput and low latency.
Abstract: In the last few years, with the advent of software-defined radio (SDR), processor cores have been shown to be an efficient solution for executing the physical layer components. Indeed, multi-core architectures provide both high processing performance and flexibility, such that they are used in current base station systems instead of dedicated FPGA or ASIC devices. Currently, an extension of the SDR concept is underway: cloud platforms are becoming attractive for the virtualization of radio access network functions, since they improve the efficiency of computational resource usage and thus the global power efficiency. However, the implementation of a physical layer on a Cloud-RAN platform, as discussed by Wubben and Paul (2016); Checko et al. (2015); Inc (2015); and Wubben et al. (2014), or a FlexRAN platform, as discussed by Wilson (2018); Foukas et al. (2017); Corp. (2017); Foukas et al. (2016), is a challenging task given the drastic latency and throughput constraints discussed by Yu et al. (2017) and Parvez (2018). Processing latencies from 10 μs up to hundreds of μs are required for future digital communication systems. In this context, most work on software implementations of error-correcting code (ECC) decoders is based on massive frame parallelism to reach high throughput; nonetheless, it produces unacceptable decoding latencies. In this paper, a new turbo decoder parallelization approach is proposed for x86 multi-core processors. It provides both high throughput and low latency. In comparison with all CPU- and GPU-related works, the following results are observed: shorter processing latency, higher throughput, and lower energy consumption. Compared with the best state-of-the-art x86 software implementations, 1.5× to 2× throughput improvements are reached, whereas a 50× latency reduction and a 2× energy reduction are observed.

Journal ArticleDOI
TL;DR: In this paper, the transmission of successive short packets is considered, and the authors develop tight upper bounds on the probability of erroneous synchronization, for both frames with concatenated syncword and frames with superimposed syncword.
Abstract: We consider the transmission of successive short packets. Each packet combines information to be transmitted (the codeword) with information for synchronizing the frame (the syncword). For short packets, the cost of including syncwords is no longer negligible, and their design requires careful optimization. In particular, a trade-off arises depending on the way the total transmit power or the total frame length is split between the syncword and the codeword. Assuming optimal finite-length codes, we develop tight upper bounds on the probability of erroneous synchronization, for both frames with a concatenated syncword and frames with a superimposed syncword. We use these bounds to optimize this trade-off. Simulation results show that the proposed bounds and analysis have practical relevance for the design of short packet communication systems.
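
For context, the finite-length coding regime assumed here is commonly summarized by the normal approximation of Polyanskiy, Poor, and Verdú: the maximal number of messages M* decodable at blocklength n with error probability ε satisfies, to first orders,

```latex
\log M^*(n,\epsilon) \;\approx\; nC - \sqrt{nV}\,Q^{-1}(\epsilon) + \tfrac{1}{2}\log n ,
```

where C is the channel capacity, V the channel dispersion, and Q⁻¹ the inverse Gaussian tail function. Every symbol or watt granted to the syncword is taken from this budget, which is what makes the syncword/codeword split a genuine optimization problem.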

Journal ArticleDOI
TL;DR: Streaming simulations for a moving user reveal that the proposed joint viewpoint-quality scalable compression scheme outperforms conventional scalable codecs such as scalable H.265 and enables practical streaming with a better quality of experience.
Abstract: Over the last few years, holography has been emerging as an alternative to stereoscopic imaging, since it provides users with the most realistic and comfortable three-dimensional (3D) experience. However, high-quality holograms enabling free-viewpoint visualization contain a tremendous amount of data. Therefore, a user wishing to access a remote hologram repository would face long download times, even with high-speed networks. To reduce transmission time, a joint viewpoint-quality scalable compression scheme is proposed. At the encoder side, the hologram is first decomposed into a sparse set of diffracted light rays using Matching Pursuit over a dictionary of Gabor atoms. Then, the atoms corresponding to a given user's viewpoint are selected to form a sub-hologram. Finally, the pruned atoms are sorted and encoded according to their importance for the reconstructed view. The proposed approach allows a progressive decoding of the sub-hologram from the first received atom. Streaming simulations for a moving user reveal that our approach outperforms conventional scalable codecs such as scalable H.265 and enables practical streaming with a better quality of experience.
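
The sparse decomposition step follows the classical Matching Pursuit recipe: repeatedly pick the dictionary atom most correlated with the residual and subtract its contribution. A generic real-valued sketch over a random unit-norm dictionary (the paper works with Gabor atoms and complex-valued holograms):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy sparse decomposition of `signal` over `dictionary`.

    dictionary -- (n_total_atoms, signal_len) array of unit-norm atoms.
    Returns a list of (atom_index, coefficient) pairs.
    """
    residual = signal.astype(float).copy()
    selection = []
    for _ in range(n_atoms):
        correlations = dictionary @ residual
        best = int(np.argmax(np.abs(correlations)))
        coeff = correlations[best]
        residual -= coeff * dictionary[best]     # peel off the chosen atom
        selection.append((best, coeff))
    return selection

# Toy dictionary: recover (approximately) a 2-atom combination.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 32))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = 2.0 * D[3] - 1.5 * D[40]
print(matching_pursuit(x, D, n_atoms=2))
```

Because the atoms are extracted in decreasing order of contribution, truncating the atom stream at any point yields the progressive, quality-scalable decoding described above.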


Journal ArticleDOI
TL;DR: An evaluation of the Bi-directional Online Transfer Learning framework shows BOTL and its variants outperform the concept drift detection strategies and the existing state-of-the-art online transfer learning technique.
Abstract: Transfer learning uses knowledge learnt in source domains to aid predictions in a target domain. When source and target domains are online, they are susceptible to concept drift, which may alter the mapping of knowledge between them. Drifts in online environments can make additional information available in each domain, necessitating continuing knowledge transfer both from source to target and vice versa. To address this, we introduce the Bi-directional Online Transfer Learning (BOTL) framework, which uses knowledge learnt in each online domain to aid predictions in others. We introduce two variants of BOTL that incorporate model culling to minimise negative transfer in frameworks with high volumes of model transfer. We consider the theoretical loss of BOTL, which indicates that BOTL achieves a loss no worse than the underlying concept drift detection algorithm. We evaluate BOTL using two existing concept drift detection algorithms: RePro and ADWIN. Additionally, we present a concept drift detection algorithm, Adaptive Windowing with Proactive drift detection (AWPro), which reduces the computation and communication demands of BOTL. Empirical results are presented using two data stream generators: the drifting hyperplane emulator and the smart home heating simulator, and real-world data predicting Time To Collision (TTC) from vehicle telemetry. The evaluation shows BOTL and its variants outperform the concept drift detection strategies and the existing state-of-the-art online transfer learning technique.
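
The windowing idea behind such detectors can be sketched generically. The toy detector below compares the means of the older and newer halves of a sliding window, in the spirit of windowing detectors such as ADWIN; it is not the paper's AWPro algorithm, and the window size and threshold are illustrative:

```python
import random
from collections import deque

def window_drift_detector(stream, window=50, threshold=0.5):
    """Yield the indices at which a mean shift between the two halves of a
    sliding window exceeds `threshold` (generic two-window drift check)."""
    buf = deque(maxlen=window)
    for t, x in enumerate(stream):
        buf.append(x)
        if len(buf) == window:
            half = window // 2
            values = list(buf)
            old = sum(values[:half]) / half
            new = sum(values[half:]) / half
            if abs(new - old) > threshold:
                yield t          # drift suspected at this time step
                buf.clear()      # restart monitoring after signalling

random.seed(0)
stream = [random.gauss(0, 0.3) for _ in range(300)] + \
         [random.gauss(2, 0.3) for _ in range(300)]
print(list(window_drift_detector(stream))[:3])  # first detections after t=300
```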

Journal ArticleDOI
TL;DR: A distributed platform for large data stream detection and analysis that uses machine learning for dimensionality reduction, allowing the detection of threats in real time over a large volume of data, with greater precision.
Abstract: Detecting threats on the Internet is a key factor in maintaining data and information security. An intrusion detection system tries to prevent such attacks through the analysis of patterns and behavior of the data stream in the network. This paper presents a distributed platform for large data stream detection and analysis that uses machine learning for dimensionality reduction. The system is evaluated based on three criteria: the accuracy, the number of false positives, and the number of false negatives. Each classifier achieved better accuracy when using 5 and 13 features, with fewer false positives and false negatives, allowing the detection of threats in real time over a large volume of data with greater precision.
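
The dimensionality-reduction stage can be sketched with scikit-learn. This hedged example uses synthetic flow data and only borrows the 5- and 13-feature settings mentioned above; everything else is illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))                # synthetic raw flow features
X[:, 0] *= 3.0                                 # give one feature extra variance
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic attack label

for n_features in (5, 13):
    X_reduced = PCA(n_components=n_features).fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
    print(n_features, "features -> accuracy", round(clf.score(X_te, y_te), 3))
```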

Journal ArticleDOI
TL;DR: In this paper, a unified signal/image nonlinear filtering procedure is proposed, with fast algorithms and a data-driven automated hyperparameter tuning, based on proximal algorithms and Stein unbiased estimator principles.
Abstract: Numerous fields of nonlinear physics, very different in nature, produce signals and images that share the common feature of being essentially constituted of piecewise homogeneous phases. Analyzing signals and images from corresponding experiments to construct relevant physical interpretations thus often requires detecting such phases and estimating their characteristics accurately (borders, feature differences, …). However, situations of physical relevance often come with low to very low signal-to-noise ratios, precluding the standard use of classical linear filtering for analysis and denoising and thus calling for the design of advanced nonlinear signal/image filtering techniques. Additionally, when dealing with experimental physics signals/images, a second limitation is the large amount of data that needs to be analyzed to yield accurate and relevant conclusions, requiring the design of fast algorithms. The present work proposes a unified signal/image nonlinear filtering procedure, with fast algorithms and a data-driven automated hyperparameter tuning, based on proximal algorithms and Stein unbiased estimator principles. The interest and potential of these tools are illustrated at work on low-confinement solid friction signals and porous media multiphase flows.
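
The proximal machinery can be illustrated by its simplest instance: the proximal operator of the ℓ1 norm (soft-thresholding) inside an ISTA loop. This is a generic hedged sketch, not the paper's piecewise-homogeneous model or its SURE-based hyperparameter tuning:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_denoise(y, lam=0.5, step=1.0, iters=100):
    """ISTA for min_x 0.5 * ||x - y||^2 + lam * ||x||_1.

    With the identity forward operator used here, the iteration converges to
    a single soft-thresholding of y; a real deployment would plug in the
    problem's forward operator and a sparsifying transform instead."""
    x = np.zeros_like(y)
    for _ in range(iters):
        grad = x - y                                 # gradient of the fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
clean = np.zeros(100)
clean[20:30] = 2.0                                   # piecewise-constant toy signal
noisy = clean + 0.3 * rng.normal(size=100)
print(np.round(ista_denoise(noisy)[18:32], 2))
```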

Journal ArticleDOI
TL;DR: The FDTD scheme with ions is derived, and numerical experiments are provided to show that the effect of the ions may be significant when the ionosphere is disturbed by incident flows of γ or β rays.
Abstract: The finite-difference time-domain (FDTD) method has long been used to compute the propagation of very low frequency (VLF) and low frequency (LF) radio waves in the Earth-ionosphere waveguide. In previously published FDTD schemes, only the electron density of the ionosphere was accounted for, since in usual natural conditions the effect of the ion density can be neglected. In the present paper, the FDTD scheme is extended to the case where one or several ion species must be accounted for, which may occur in special natural conditions or in such artificial conditions as after high-altitude nuclear bursts. The conditions that must hold for the effect of the ions not to be negligible are discussed, the FDTD scheme with ions is derived, and numerical experiments are provided to show that the effect of the ions may be significant when the ionosphere is disturbed by incident flows of γ or β rays.
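
For orientation, in a standard 1D Yee scheme the field update extended with a current density carried by charged species takes the familiar leapfrog form (textbook notation, indicative of the structure only; the paper's scheme additionally advances one momentum equation per species):

```latex
E_z^{\,n+1}(i) \;=\; E_z^{\,n}(i)
  + \frac{\Delta t}{\varepsilon_0}\!\left[
      \frac{H_y^{\,n+1/2}\!\left(i+\tfrac12\right) - H_y^{\,n+1/2}\!\left(i-\tfrac12\right)}{\Delta x}
      \;-\; J_z^{\,n+1/2}(i)\right],
\qquad
J_z \;=\; \sum_{s\,\in\,\{e,\;\mathrm{ions}\}} q_s N_s v_s ,
```

where the sum over species s is where the ion contributions enter alongside the electrons.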