Author

Dick Carrillo Melgarejo

Bio: Dick Carrillo Melgarejo is an academic researcher from Lappeenranta University of Technology. The author has contributed to research in the topics of Computer science & Wireless, has an h-index of 4, and has co-authored 12 publications receiving 34 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, a novel intrusion detection system (IDS) based on the Tree-CNN hierarchical algorithm with the Soft-Root-Sign (SRS) activation function is proposed, which reduces the training time of the generated model for detecting DDoS, Infiltration, Brute Force, and Web attacks.
Abstract: Currently, with the increasing number of devices connected to the Internet, attackers' search for network vulnerabilities has increased, and protection systems have become indispensable. There are prevalent security attacks, such as the Distributed Denial of Service (DDoS), which have been causing significant damage to companies. However, it is possible to extract characteristics from security attacks that identify the type of attack. Thus, it is essential to have fast and effective security identification models. In this work, a novel Intrusion Detection System (IDS) based on the Tree-CNN hierarchical algorithm with the Soft-Root-Sign (SRS) activation function is proposed. The approach reduces the training time of the generated model for detecting DDoS, Infiltration, Brute Force, and Web attacks. For performance assessment, the model is implemented in a medium-sized company, analyzing the level of complexity of the proposed solution. Experimental results demonstrate that the proposed hierarchical model achieves a significant reduction in execution time, around 36%, and an average detection accuracy of 0.98 considering all the analyzed attacks. Therefore, the performance evaluation shows that the proposed Tree-CNN-based classifier has low complexity and requires less processing time and fewer computational resources, outperforming other current IDSs based on machine learning algorithms.
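To make the activation choice concrete, below is a minimal NumPy sketch of the Soft-Root-Sign activation in its commonly published form, f(x) = x / (x/α + e^(−x/β)); the default α and β values and the NumPy framing are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def soft_root_sign(x, alpha=2.0, beta=3.0):
    """Soft-Root-Sign activation: f(x) = x / (x/alpha + exp(-x/beta)).

    alpha and beta are positive (often trainable) parameters; the defaults
    here are illustrative assumptions, not values reported in the paper.
    """
    return x / (x / alpha + np.exp(-x / beta))

# Apply the activation to a small batch of pre-activations.
z = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(soft_root_sign(z))
```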

55 citations

Journal ArticleDOI
TL;DR: This article surveys emerging technologies related to pervasive edge computing for industrial internet-of-things (IIoT) enabled by fifth-generation (5G) and beyond communication networks and reinforces the perspective that the PEC paradigm is an extremely suitable and important deployment model for industrial communication networks.
Abstract: This article surveys emerging technologies related to pervasive edge computing (PEC) for industrial internet-of-things (IIoT) enabled by fifth-generation (5G) and beyond communication networks. PEC encompasses all devices that are capable of performing computational tasks locally, including those at the edge of the core network (edge servers co-located with 5G base stations) and in the radio access network (sensors, actuators, etc.). The main advantages of this paradigm are core network offloading (and benefits therefrom) and low latency for delay-sensitive applications (e.g., automatic control). We have reviewed the state-of-the-art in the PEC paradigm and its applications to the IIoT domain, which have been enabled by the recent developments in 5G technology. We have classified and described three important research areas related to PEC—distributed artificial intelligence methods, energy efficiency, and cyber security. We have also identified the main open challenges that must be solved to have a scalable PEC-based IIoT network that operates efficiently under different conditions. By explaining the applications, challenges, and opportunities, our paper reinforces the perspective that the PEC paradigm is an extremely suitable and important deployment model for industrial communication networks, considering the modern trend toward private industrial 5G networks with local operations and flexible management.

37 citations

Proceedings ArticleDOI
17 Mar 2020
TL;DR: An RIS-aided grant-free access scheme combined with advanced receivers is shown to be a well-suited option for uplink URLLC and the gains of this approach in terms of reliability, resource efficiency, and capacity are demonstrated.
Abstract: Reconfigurable intelligent surfaces (RISs) have been recently considered as one of the emerging technologies for future communication systems by leveraging the tuning capabilities of their reflecting elements. In this paper, we investigate the potential of an RIS-based architecture for uplink sensor data transmission in an ultra-reliable low-latency communication (URLLC) context. In particular, we propose an RIS-aided grant-free access scheme for an industrial control scenario, aiming to exploit diversity and achieve improved reliability performance. We consider two different resource allocation schemes for the uplink transmissions, i.e., dedicated and shared slot assignment, and three different receiver types, namely the zero-forcing, the minimum mean squared error (MMSE), and the MMSE-successive interference cancellation receivers. Our extensive numerical evaluation in terms of outage probability demonstrates the gains of our approach in terms of reliability, resource efficiency, and capacity and for different configurations of the RIS properties. An RIS-aided grant-free access scheme combined with advanced receivers is shown to be a well-suited option for uplink URLLC.
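As background on the receiver types compared above, here is a minimal NumPy sketch of zero-forcing and linear MMSE combining for a narrowband multi-user uplink; the dimensions, BPSK symbol model, and noise level are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_users, noise_var = 8, 4, 0.1   # illustrative setup

# Rayleigh uplink channel from n_users single-antenna sensors to an n_rx-antenna receiver.
H = (rng.standard_normal((n_rx, n_users)) +
     1j * rng.standard_normal((n_rx, n_users))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=n_users).astype(complex)        # BPSK symbols
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) +
                                  1j * rng.standard_normal(n_rx))
y = H @ x + noise

# Zero-forcing: pseudo-inverse of the channel.
x_zf = np.linalg.pinv(H) @ y

# Linear MMSE: x_hat = (H^H H + sigma^2 I)^-1 H^H y (unit-power symbols assumed).
x_mmse = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_users),
                         H.conj().T @ y)

print(np.sign(x_zf.real), np.sign(x_mmse.real), x.real)
```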

23 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive tutorial on a promising research field located at the frontier of two well-established domains, neurosciences and wireless communications, motivated by the ongoing efforts to define the Sixth Generation of Mobile Networks (6G).
Abstract: This paper presents the first comprehensive tutorial on a promising research field located at the frontier of two well-established domains, neurosciences and wireless communications, motivated by the ongoing efforts to define the Sixth Generation of Mobile Networks (6G). In particular, this tutorial first provides a novel integrative approach that bridges the gap between these two seemingly disparate fields. Then, we present the state-of-the-art and key challenges of these two topics. Specifically, we propose a novel systematization that divides the contributions into two groups, one focused on what neurosciences will offer to future wireless technologies in terms of new applications and systems architecture (Neurosciences for Wireless Networks), and the other on how wireless communication theory and next-generation wireless systems can provide new ways to study the brain (Wireless Networks for Neurosciences). For the first group, we explain concretely how current scientific understanding of the brain would enable new applications within the context of a new type of service that we dub brain-type communications and that has more stringent requirements than human- and machine-type communication. In this regard, we expose the key requirements of brain-type communication services and discuss how future wireless networks can be equipped to deal with such services. Meanwhile, for the second group, we thoroughly explore modern communication systems paradigms, including the Internet of Bio-Nano Things and wireless-integrated brain–machine interfaces, in addition to highlighting how complex systems tools can help bridge the upcoming advances of wireless technologies and applications of neurosciences. Brain-controlled vehicles are then presented as our case study to demonstrate for both groups the potential created by the convergence of neurosciences and wireless communications, probably in 6G. In summary, this tutorial is expected to provide a largely missing articulation between neurosciences and wireless communications while delineating concrete ways to move forward in such an interdisciplinary endeavor.

19 citations

Journal ArticleDOI
TL;DR: An intelligent hello dissemination model, AI-Hello, based on reinforcement learning algorithms is proposed; it adapts the hello message interval to produce a dense reward structure and facilitate network learning.
Abstract: The routing mechanisms in flying ad-hoc networks (FANETs) using unmanned aerial vehicles (UAVs) are challenging for many reasons, such as the UAVs' high speed and varied directions of movement. In FANETs, the routing protocols send hello messages periodically to maintain routes. However, the hello messages sent in the network sometimes increase bandwidth wastage, and an excessive number of hello messages can also cause energy loss. Few works deal with the problem of excessive hello messages in dynamic UAV scenarios while simultaneously treating related problems such as bandwidth and energy wastage. Generally, existing solutions configure the hello interval to an excessively long or short period, causing delays in neighbor discovery. Thus, a self-acting approach is needed for calculating the appropriate number of hello messages, with the aim of reducing the network's bandwidth wastage and energy loss; this approach must also have low complexity in terms of computational resource consumption. To solve this problem, an intelligent hello dissemination model, AI-Hello, based on reinforcement learning algorithms is proposed; it adapts the hello message interval to produce a dense reward structure and facilitate network learning. Experimental results, considering dynamic FANET scenarios with 40 high-speed UAVs, show that the proposed method implemented in two widely adopted routing protocols (AODV and OLSR) saved 30.86% and 27.57% of the energy consumption in comparison to the original AODV and OLSR protocols, respectively. Furthermore, our proposal reached better network performance than state-of-the-art methods implemented in the same protocols, considering parameters such as routing overhead, packet delivery ratio, throughput, and delay.
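To illustrate the general idea of learning a hello interval from a reward signal, below is a minimal tabular Q-learning sketch; the states, actions, and toy reward model are illustrative assumptions and do not reproduce the paper's AI-Hello design.

```python
import random

# States: coarse neighbor-change rate; actions: candidate hello intervals (seconds).
states = ["low", "medium", "high"]
intervals = [0.5, 1.0, 2.0, 4.0]
Q = {(s, a): 0.0 for s in states for a in intervals}
alpha, gamma, eps = 0.1, 0.9, 0.2                 # learning rate, discount, exploration

def reward(state, interval):
    # Toy reward: short intervals cost overhead, long intervals cost stale
    # neighbor tables under high mobility. Purely illustrative.
    overhead = 1.0 / interval
    staleness = {"low": 0.2, "medium": 0.5, "high": 1.0}[state] * interval
    return -(overhead + staleness)

state = "medium"
for _ in range(5000):
    a = (random.choice(intervals) if random.random() < eps
         else max(intervals, key=lambda i: Q[(state, i)]))
    r = reward(state, a)
    next_state = random.choice(states)            # toy mobility dynamics
    best_next = max(Q[(next_state, i)] for i in intervals)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = next_state

for s in states:
    print(s, "-> preferred hello interval:", max(intervals, key=lambda i: Q[(s, i)]), "s")
```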

8 citations


Cited by
Journal ArticleDOI
01 Jan 2021
TL;DR: A comprehensive review of emerging technologies for tackling COVID-19, with emphasis on their features, challenges, and country of domiciliation, shows that the performance of these technologies is not yet stable and that there is a strong need for a robust, computationally intelligent model for early differential diagnosis.
Abstract: The COVID-19 pandemic affects people in various ways and continues to spread globally. Research is ongoing to develop vaccines, and traditional methods of medicine and biology have been applied in diagnosis and treatment. Though there are success stories of recovered cases, as of November 10, 2020, there are no approved treatments or vaccines for COVID-19. As the pandemic continues to spread, current measures rely on prevention, surveillance, and containment. In light of this, emerging technologies for tackling COVID-19 become inevitable. Emerging technologies including geospatial technology, artificial intelligence (AI), big data, telemedicine, blockchain, 5G technology, smart applications, the Internet of Medical Things (IoMT), robotics, and additive manufacturing are substantially important for COVID-19 detection, monitoring, diagnosis, screening, surveillance, mapping, tracking, and creating awareness. Therefore, this study aimed to provide a comprehensive review of these technologies for tackling COVID-19 with emphasis on their features, challenges, and country of domiciliation. Our results show that the performance of the emerging technologies is not yet stable due to the limited availability of COVID-19 datasets, inconsistencies in some of the available datasets, lack of aggregation across datasets because of contrasting data formats, missing data, and noise. Moreover, the security and privacy of people's health information is not fully guaranteed. Thus, further research is required to strengthen the current technologies, and there is a strong need for a robust, computationally intelligent model for early differential diagnosis of COVID-19.

109 citations

Journal ArticleDOI
TL;DR: In this paper, the authors design a rigorous testbed for measuring the one-way packet delays between a 5G end device via a radio access network (RAN) to a packet core with sub-microsecond precision as well as measuring the packet core delay with nanosecond precision.
Abstract: A 5G campus network is a 5G network for the users affiliated with the campus organization, e.g., an industrial campus, covering a prescribed geographical area. A 5G campus network can operate as a so-called 5G non-standalone (NSA) network (which requires 4G Long-Term Evolution (LTE) spectrum access) or as a 5G standalone (SA) network (without 4G LTE spectrum access). 5G campus networks are envisioned to enable new use cases that require cyclic delay-sensitive industrial communication, such as robot control. We design a rigorous testbed for measuring the one-way packet delays between a 5G end device via a radio access network (RAN) to a packet core with sub-microsecond precision, as well as for measuring the packet core delay with nanosecond precision. With our testbed design, we conduct detailed measurements of the one-way download (downstream, i.e., core to end device) and one-way upload (upstream, i.e., end device to core) packet delays and losses for both 5G SA and 5G NSA hardware and network operation. We also measure the corresponding 5G SA and 5G NSA packet core processing delays for download and upload. We find that typically 95% of the SA download packet delays are in the range from 4 to 10 ms, indicating a fairly wide spread of the packet delays. Also, existing packet core implementations regularly incur packet processing latencies of up to 0.4 ms, with outliers above one millisecond. Our measurement results inform the further development and refinement of 5G SA and 5G NSA campus networks for industrial use cases. We make the measurement data traces publicly available as the IEEE DataPort 5G Campus Networks: Measurement Traces dataset (DOI 10.21227/xe3c-e968).
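As a small illustration of how such delay traces can be summarized, the following sketch computes a central 95% delay range and percentiles from a vector of one-way delay samples; the data here are synthetic, not the published traces.

```python
import numpy as np

# Synthetic one-way download delay samples in milliseconds (illustrative only;
# the real traces are in the IEEE DataPort dataset referenced above).
rng = np.random.default_rng(1)
delays_ms = 2.0 + rng.gamma(shape=4.0, scale=1.5, size=10_000)

lo, hi = np.percentile(delays_ms, [2.5, 97.5])    # central 95% of the samples
print(f"95% of download delays lie between {lo:.1f} ms and {hi:.1f} ms")
print(f"mean delay: {delays_ms.mean():.2f} ms, "
      f"99th percentile: {np.percentile(delays_ms, 99):.2f} ms")
```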

66 citations

Journal ArticleDOI
TL;DR: The end-to-end performance of the considered FD RIS-assisted network is analyzed, and expressions for the block error rate (BLER) are derived for all scheduling schemes; the scheduling fairness of all schemes is also analyzed to study the performance-fairness trade-off.
Abstract: A reconfigurable intelligent surface (RIS)-assisted wireless communication system with non-linear energy harvesting (EH) and ultra-reliable low-latency constraints is considered for its possible applications in industrial automation. A distant data center (DC) communicates with multiple destination machines with the help of a full-duplex (FD) server machine (SM) and an RIS. Since the FD SM lacks sufficient transmission power, it is placed in the near vicinity of the destinations in the industry to forward the data received from the distant DC. The reception at the SM is assisted by the RIS, and a non-linear hybrid power-time splitting (PTS) based EH receiver architecture is adopted to extend the lifespan of the SM, thus increasing the network lifetime. The scheduling of the multiple destinations is done by the SM based on the considered selection criteria, namely random (RND) scheduling, absolute (ABS) channel-power-based (CPB) scheduling, and normalized (NRM) CPB scheduling. The end-to-end performance of the considered FD RIS-assisted network is analyzed, and expressions for the block error rate (BLER) are derived for all scheduling schemes. Moreover, the effects of the number of RIS elements, packet size, and channel uses on the system performance are analyzed for the considered ultra-reliable and low-latency communication (URLLC) network. The scheduling fairness of all the scheduling schemes is also analyzed to study the performance-fairness trade-off. The derived analytical results are verified through Monte Carlo simulations.
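To make the three scheduling criteria concrete, here is a minimal NumPy sketch of random, absolute channel-power-based, and normalized channel-power-based destination selection; the channel statistics are illustrative assumptions, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_dest = 5

# Instantaneous channel powers of the destination machines and their long-term
# averages (unequal averages assumed here so that ABS and NRM selections can differ).
mean_power = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
h = np.sqrt(mean_power / 2) * (rng.standard_normal(n_dest) +
                               1j * rng.standard_normal(n_dest))
power = np.abs(h) ** 2

rnd = int(rng.integers(n_dest))               # RND: random scheduling
abs_cpb = int(np.argmax(power))               # ABS: absolute channel-power-based
nrm_cpb = int(np.argmax(power / mean_power))  # NRM: normalized channel-power-based
print("scheduled destination (RND, ABS, NRM):", rnd, abs_cpb, nrm_cpb)
```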

43 citations

Journal ArticleDOI
TL;DR: In this paper, the average achievable rate and error probability of a reconfigurable intelligent surface (RIS) aided system is investigated for the finite blocklength (FBL) regime, and the performance loss due to the presence of phase errors arising from limited quantization levels as well as hardware impairments at the RIS elements is also discussed.
Abstract: In this paper, the average achievable rate and error probability of a reconfigurable intelligent surface (RIS) aided system are investigated in the finite blocklength (FBL) regime. The performance loss due to phase errors arising from limited quantization levels, as well as hardware impairments at the RIS elements, is also discussed. First, the composite channel containing the direct path plus the product of reflected channels through the RIS is characterized. Then, the distribution of the received signal-to-noise ratio (SNR) is matched to a Gamma random variable whose parameters depend on the total number of RIS elements, the phase errors, and the channels' path loss. Next, considering the FBL regime, the achievable rate expression and error probability are identified, and the corresponding average rate and average error probability are elaborated based on the proposed SNR distribution. Furthermore, the impact of phase errors due to either limited quantization levels or hardware impairments on the average rate and error probability is discussed. The numerical results show that the Monte Carlo simulations conform to the matched Gamma distribution of the received SNR for a sufficiently large number of RIS elements. In addition, the system reliability, indicated by the tightness of the SNR distribution, increases when the RIS is leveraged, particularly when only the reflected channel exists. This highlights the advantages of RIS-aided communications for ultra-reliable and low-latency systems. The difference between the Shannon capacity and the achievable rate in the FBL regime is also discussed. Additionally, the number of RIS elements required to achieve a desired error probability in the FBL regime is significantly reduced when the phase shifts are performed without error.
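Below is a minimal sketch of the two analytical ingredients described above, moment-matching the received SNR to a Gamma distribution and evaluating the normal-approximation error probability in the finite blocklength regime; the SNR statistics, blocklength, and rate are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# 1) Moment-match an (illustrative) received-SNR sample to a Gamma distribution.
snr_samples = rng.gamma(shape=5.0, scale=2.0, size=50_000)   # stand-in for the RIS-aided SNR
mean, var = snr_samples.mean(), snr_samples.var()
k, theta = mean**2 / var, var / mean                          # Gamma shape and scale from moments

# 2) Normal approximation of the FBL error probability at blocklength n and rate R (bits/use).
def fbl_error_probability(snr_lin, n, R):
    C = np.log2(1.0 + snr_lin)                                      # Shannon capacity
    V = (1.0 - 1.0 / (1.0 + snr_lin) ** 2) * np.log2(np.e) ** 2     # channel dispersion
    return stats.norm.sf((C - R) * np.sqrt(n / V))                  # Q-function

# 3) Average the error probability over the matched Gamma SNR distribution (Monte Carlo).
snr_draws = rng.gamma(k, theta, size=50_000)
avg_eps = fbl_error_probability(snr_draws, n=200, R=2.0).mean()
print("average FBL error probability:", avg_eps)
```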

41 citations

Journal ArticleDOI
TL;DR: Wang et al. proposed a tabular data sampling method to solve the imbalanced learning problem, which aims to balance the normal samples and attack samples; the proposed method achieves competitive results on three benchmark data sets.
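Since only the summary is available here, the following is a generic random-oversampling sketch of the imbalanced-learning idea the TL;DR refers to (balancing normal and attack samples); it is not the specific sampling method proposed in the cited paper.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows until every class matches the majority count.

    Generic illustration of balancing normal and attack samples in tabular data;
    this is NOT the sampling method proposed in the cited paper.
    """
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    chosen = [rng.choice(np.flatnonzero(y == c), size=target, replace=True)
              for c in classes]
    idx = np.concatenate(chosen)
    return X[idx], y[idx]

# Example: 1000 normal rows vs. 50 attack rows.
X = np.random.randn(1050, 8)
y = np.array([0] * 1000 + [1] * 50)
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))   # -> [1000 1000]
```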

40 citations