
Showing papers in "Future Internet in 2020"


Journal ArticleDOI
TL;DR: The experimental results obtained by analyzing the proposed RDTIDS using the CICIDS2017 dataset and BoT-IoT dataset attest to its superiority in terms of accuracy, detection rate, false alarm rate, and time overhead compared with state-of-the-art schemes.
Abstract: This paper proposes a novel intrusion detection system (IDS), named RDTIDS, for Internet-of-Things (IoT) networks. The RDTIDS combines different classifier approaches based on decision-tree and rule-based concepts, namely, REP Tree, the JRip algorithm, and Forest PA. Specifically, the first and second methods take features of the data set as inputs and classify the network traffic as Attack/Benign. The third classifier uses features of the initial data set, in addition to the outputs of the first and second classifiers, as inputs. The experimental results obtained by analyzing the proposed IDS using the CICIDS2017 dataset and BoT-IoT dataset attest to its superiority in terms of accuracy, detection rate, false alarm rate, and time overhead compared with state-of-the-art schemes.
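A minimal sketch of the stacked structure described above, assuming a generic feature matrix X and binary labels y. REP Tree, JRip, and Forest PA are Weka implementations, so scikit-learn tree and forest classifiers are used here purely as stand-ins to illustrate how the third classifier consumes the original features plus the first two classifiers' outputs.

```python
# Hedged sketch of the RDTIDS-style stacking idea using scikit-learn stand-ins.
# REP Tree, JRip and Forest PA are Weka classifiers; DecisionTreeClassifier and
# RandomForestClassifier are used here only to illustrate the data flow.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def fit_stacked_ids(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

    # First and second classifiers see only the original features.
    clf1 = DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr)
    clf2 = DecisionTreeClassifier(max_depth=5, random_state=42).fit(X_tr, y_tr)

    # Third classifier sees the original features plus the two predictions.
    meta_train = np.column_stack([X_tr, clf1.predict(X_tr), clf2.predict(X_tr)])
    meta_test = np.column_stack([X_te, clf1.predict(X_te), clf2.predict(X_te)])
    clf3 = RandomForestClassifier(n_estimators=100, random_state=42).fit(meta_train, y_tr)

    return accuracy_score(y_te, clf3.predict(meta_test))
```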

124 citations


Journal ArticleDOI
TL;DR: The major proprietary and standards-based LPWAN technology solutions available in the marketplace are presented; these include Sigfox, LoRaWAN, Narrowband IoT (NB-IoT), and long-term evolution (LTE)-M, among others.
Abstract: Low power wide area network (LPWAN) is a promising solution for long range and low power Internet of Things (IoT) and machine to machine (M2M) communication applications. This paper focuses on defining a systematic and powerful approach of identifying the key characteristics of such applications, translating them into explicit requirements, and then deriving the associated design considerations. LPWANs are resource-constrained networks and are primarily characterized by long battery life operation, extended coverage, high capacity, and low device and deployment costs. These characteristics translate into a key set of requirements including M2M traffic management, massive capacity, energy efficiency, low power operations, extended coverage, security, and interworking. The set of corresponding design considerations is identified in terms of two categories, desired or expected ones and enhanced ones, which reflect the wide range of characteristics associated with LPWAN-based applications. Prominent design constructs include admission and user traffic management, interference management, energy saving modes of operation, lightweight media access control (MAC) protocols, accurate location identification, security coverage techniques, and flexible software re-configurability. Topological and architectural options for interconnecting LPWAN entities are discussed. The major proprietary and standards-based LPWAN technology solutions available in the marketplace are presented. These include Sigfox, LoRaWAN, Narrowband IoT (NB-IoT), and long term evolution (LTE)-M, among others. The relevance of upcoming cellular 5G technology and its complementary relationship with LPWAN technology are also discussed.

123 citations


Journal ArticleDOI
TL;DR: This work provides an up-to-date survey of research on hardware architectures for DNNs, especially covering prominent works from the last three years, including the latest techniques in dataflow, reconfigurability, variable bit-width, and sparsity.
Abstract: Deep Neural Networks (DNNs) are nowadays a common practice in most Artificial Intelligence (AI) applications. Their ability to go beyond human precision has made these networks a milestone in the history of AI. However, while on the one hand they present cutting-edge performance, on the other hand they require enormous computing power. For this reason, numerous optimization techniques at the hardware and software level, and specialized architectures, have been developed to process these models with high performance and power/energy efficiency without affecting their accuracy. In the past, multiple surveys have been reported to provide an overview of different architectures and optimization techniques for efficient execution of Deep Learning (DL) algorithms. This work aims at providing an up-to-date survey, especially covering the prominent works from the last three years of research on hardware architectures for DNNs. In this paper, the reader will first learn what a hardware accelerator is and what its main components are, followed by the latest techniques in the field of dataflow, reconfigurability, variable bit-width, and sparsity.

119 citations


Journal ArticleDOI
TL;DR: A secure Blockchain and Fog-based Architecture Network (BFAN) for IoE applications in smart cities is presented; it protects sensitive data with encryption, authentication, and Blockchain, and ensures improved security features through Blockchain technology.
Abstract: Fog computing (FC) is used to reduce the energy consumption and latency of the heterogeneous communication approaches in the smart cities’ applications of the Internet of Everything (IoE). Fog computing nodes are connected through wired or wireless media. The goal of smart city applications is to develop the transaction relationship of real-time response applications. Various real-world frameworks support the IoE in smart cities, but they face issues such as security, platform independence, multi-application assistance, and resource management. Motivated by Blockchain and Fog computing technologies, this article presents a secure architecture, the Blockchain and Fog-based Architecture Network (BFAN), for IoE applications in smart cities. The proposed architecture secures sensitive data with encryption, authentication, and Blockchain. It assists system developers and architects in deploying applications in the smart city paradigm. The goal of the proposed architecture is to reduce latency and energy consumption, and to ensure improved security features through Blockchain technology. The simulation results demonstrate that the proposed architecture performs better than the existing frameworks for smart cities.

101 citations


Journal ArticleDOI
TL;DR: This paper describes the situation of an Austrian university regarding e-learning before and during the first three weeks of the changeover of the teaching system, using the example of Graz University of Technology (TU Graz).
Abstract: The COVID-19 crisis influenced universities worldwide in early 2020. In Austria, all universities were closed in March 2020 as a preventive measure, and meetings with over 100 people were banned and a curfew was imposed. This development also had a massive impact on teaching, which in Austria takes place largely face-to-face. In this paper we would like to describe the situation of an Austrian university regarding e-learning before and during the first three weeks of the changeover of the teaching system, using the example of Graz University of Technology (TU Graz). The authors provide insights into the internal procedures, processes and decisions of their university and present figures on the changed usage behaviour of their students and teachers. As a theoretical reference, the article uses the e-learning readiness assessment according to Alshaher (2013), which provides a framework for describing the status of the situation regarding e-learning before the crisis. The paper concludes with a description of enablers, barriers and bottlenecks from the perspective of the members of the Educational Technology department.

94 citations


Journal ArticleDOI
TL;DR: A linear programming method is applied for the allocation of financial resources to multiple IoT cybersecurity projects and a four-layer IoT cyber risk management framework is presented.
Abstract: Along with the growing threat of cyberattacks, cybersecurity has become one of the most important areas of the Internet of Things (IoT). The purpose of IoT cybersecurity is to reduce cybersecurity risk for organizations and users through the protection of IoT assets and privacy. New cybersecurity technologies and tools provide potential for better IoT security management. However, there is a lack of effective IoT cyber risk management frameworks for managers. This paper reviews IoT cybersecurity technologies and cyber risk management frameworks. Then, this paper presents a four-layer IoT cyber risk management framework. This paper also applies a linear programming method for the allocation of financial resources to multiple IoT cybersecurity projects. An illustration is provided as a proof of concept.
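The budget-allocation step lends itself to a small worked example. The sketch below, using scipy.optimize.linprog, maximizes an assumed total risk reduction subject to a total budget and per-project caps; all coefficients are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: allocate a fixed budget across IoT cybersecurity projects
# so that total expected risk reduction is maximized. Numbers are illustrative.
from scipy.optimize import linprog

risk_reduction_per_dollar = [0.8, 0.5, 0.9, 0.3]   # assumed effectiveness of each project
max_spend = [40_000, 60_000, 30_000, 50_000]        # assumed per-project spending caps
total_budget = 100_000

# linprog minimizes, so negate the objective to maximize risk reduction.
c = [-r for r in risk_reduction_per_dollar]
A_ub = [[1, 1, 1, 1]]          # sum of allocations <= total budget
b_ub = [total_budget]
bounds = [(0, cap) for cap in max_spend]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("Allocation per project:", res.x)
print("Total risk reduction:", -res.fun)
```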

72 citations


Journal ArticleDOI
TL;DR: The aims of this paper are to build awareness of phishing techniques, educate individuals about these attacks, and encourage the use of phishing prevention techniques, in addition to encouraging discourse among the professional community about this topic.
Abstract: Phishing attacks, which have existed for several decades and continue to be a major problem today, constitute a severe threat in the cyber world. Attackers are adopting multiple new and creative methods through which to conduct phishing attacks, which are growing rapidly. Therefore, there is a need to conduct a comprehensive review of past and current phishing approaches. In this paper, a review of the approaches used during phishing attacks is presented. This paper comprises a literature review, followed by a comprehensive examination of the characteristics of the existing classic, modern, and cutting-edge phishing attack techniques. The aims of this paper are to build awareness of phishing techniques, educate individuals about these attacks, and encourage the use of phishing prevention techniques, in addition to encouraging discourse among the professional community about this topic.

70 citations


Journal ArticleDOI
TL;DR: This paper offers a comprehensive survey of application layer protocol security by presenting the main challenges and findings and focuses on the most popular protocols devised in IoT environments for messaging/data sharing and for service discovery.
Abstract: IoT technologies are becoming pervasive in public and private sectors and presently represent an integral part of our daily lives. The advantages offered by these technologies are frequently coupled with serious security issues that are often not properly overseen or even ignored. The IoT threat landscape is extremely wide and complex and involves a wide variety of hardware and software technologies. In this framework, the security of application layer protocols is of paramount importance, since these protocols form the basis of communications among applications and services running on different IoT devices and on cloud/edge infrastructures. This paper offers a comprehensive survey of application layer protocol security by presenting the main challenges and findings. More specifically, the paper focuses on the most popular protocols devised in IoT environments for messaging/data sharing and for service discovery. The main threats to these protocols, as well as the Common Vulnerabilities and Exposures (CVE) for their products and services, are analyzed and discussed in detail. Good practices and measures that can be adopted to mitigate threats and attacks are also investigated. Our findings indicate that ensuring security at the application layer is very challenging. IoT devices are exposed to numerous security risks due to the lack of appropriate security services in the protocols, as well as vulnerabilities or incorrect configuration of the products and services being deployed. Moreover, the constrained capabilities of these devices affect the types of security services that can be implemented.

67 citations


Journal ArticleDOI
TL;DR: This study attempted to explore the issue of cyberbullying by compiling a global dataset of 37,373 unique tweets from Twitter, using seven machine learning classifiers and showing the superiority of LR, which achieved a median accuracy of around 90.57%.
Abstract: The advent of social media, particularly Twitter, raises many issues due to a misunderstanding regarding the concept of freedom of speech. One of these issues is cyberbullying, which is a critical global issue that affects both individual victims and societies. Many attempts have been introduced in the literature to intervene in, prevent, or mitigate cyberbullying; however, because these attempts rely on the victims’ interactions, they are not practical. Therefore, detection of cyberbullying without the involvement of the victims is necessary. In this study, we attempted to explore this issue by compiling a global dataset of 37,373 unique tweets from Twitter. Moreover, seven machine learning classifiers were used, namely, Logistic Regression (LR), Light Gradient Boosting Machine (LGBM), Stochastic Gradient Descent (SGD), Random Forest (RF), AdaBoost (ADB), Naive Bayes (NB), and Support Vector Machine (SVM). Each of these algorithms was evaluated using accuracy, precision, recall, and F1 score as the performance metrics to determine the classifiers’ recognition rates applied to the global dataset. The experimental results show the superiority of LR, which achieved a median accuracy of around 90.57%. Among the classifiers, logistic regression achieved the best F1 score (0.928), SGD achieved the best precision (0.968), and SVM achieved the best recall (1.00).
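A minimal sketch of this kind of multi-classifier evaluation, assuming TF-IDF features over raw tweet text and binary labels (1 = cyberbullying); the vectorization and hyperparameters are illustrative, not the authors' exact pipeline.

```python
# Hedged sketch: score a few text classifiers on accuracy/precision/recall/F1.
# Tweets and labels are placeholders; the paper's preprocessing may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(tweets, labels):
    X = TfidfVectorizer(max_features=20_000).fit_transform(tweets)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

    models = {
        "LR": LogisticRegression(max_iter=1000),
        "SGD": SGDClassifier(random_state=0),
        "SVM": LinearSVC(),
    }
    for name, model in models.items():
        y_pred = model.fit(X_tr, y_tr).predict(X_te)
        print(name,
              accuracy_score(y_te, y_pred),
              precision_score(y_te, y_pred),
              recall_score(y_te, y_pred),
              f1_score(y_te, y_pred))
```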

65 citations


Journal ArticleDOI
TL;DR: The relationship between the most used social media addiction measures (i.e., the Bergen Facebook Addiction Scale—BFAS, the Bergen Social Media Addiction Scale—BSMAS) and well-being is discussed.
Abstract: Does social media addiction impair the well-being of non-clinical individuals? Despite the Internet being able to be considered a promoting factor for individual empowerment, previous literature suggests that the current massive availability of Information and Communication Technologies (ICT) may be dangerous for users’ well-being. This article discusses the relationship between the most used social media addiction measures (i.e., the Bergen Facebook Addiction Scale—BFAS, the Bergen Social Media Addiction Scale—BSMAS) and well-being. A systematic review considering all the publications indexed by the PsycInfo, PsycArticles, PubMed, Science Direct, Sociological Abstracts, Academic Search Complete, and Google Scholar databases was performed to collect the data. Ten of 635 studies were included in the qualitative synthesis. Overall, most of the included works captured a negative but small relationship between BFAS/BSMAS and well-being, across multiple definitions and measurements.

56 citations


Journal ArticleDOI
TL;DR: The paper presents the proposed digital twin model’s multiple layers, namely, physical, communication, virtual space, data analytics and visualization, and application, as well as the overlapping security layer.
Abstract: As the Internet of Things (IoT) is gaining ground and becoming increasingly popular in smart city applications such as smart energy, smart buildings, smart factories, smart transportation, smart farming, and smart healthcare, the digital twin concept is evolving as complementary to its physical counterpart. While an object is on the move, its operational and surrounding environmental parameters are collected by an edge computing device for local decisions. A virtual replica of such an object (digital twin) is based in the cloud computing platform and hosts the real-time physical object data, 2D and 3D models, historical data, and bill of materials (BOM) for further processing, analytics, and visualization. This paper proposes an end-to-end digital twin conceptual model that represents its complementary physical object from the ground to the cloud. The paper presents the proposed digital twin model’s multiple layers, namely, physical, communication, virtual space, data analytics and visualization, and application, as well as the overlapping security layer. The hardware and software technologies that are used in building such a model are explained in detail. A use case is presented to show how the layers collect, exchange, and process the physical object data from the ground to the cloud.

Journal ArticleDOI
TL;DR: This article presents a survey about the long-range technologies available presently as well as the technical characteristics they offer, and proposes a discussion about the energy consumption of each alternative and which one may be most adapted depending on the use case requirements and expectations.
Abstract: Wireless networks are now a part of the everyday life of many people and are used for many applications. Recently, new technologies that enable low-power and long-range communications have emerged. These technologies, in opposition to more traditional communication technologies rather defined as "short range", allow kilometer-wide wireless communications. Long-range technologies are used to form Low-Power Wide-Area Networks (LPWAN). Many LPWAN technologies are available, and they offer different performance, business models, etc., answering different applications’ needs. This makes it hard to find the right tool for a specific use case. In this article, we present a survey of the long-range technologies available presently as well as the technical characteristics they offer. We then propose a discussion of the energy consumption of each alternative and which one may be most adapted depending on the use case requirements and expectations, as well as guidelines for choosing the best-suited technology.

Journal ArticleDOI
TL;DR: It is shown that blockchain literature in LSCM is based around six organizational theories, namely: agency theory, information theory, institutional theory, network theory, the resource-based view and transaction cost analysis, which can be used to examine specific blockchain problems.
Abstract: Potential blockchain applications in logistics and supply chain (LSCM) have gained increasing attention within both academia and industry. However, as a field in its infancy, blockchain research often lacks theoretical foundations, and it is not clear which and to what extent organizational theories are used to investigate blockchain technology in the field of LSCM. In response, based upon a systematic literature review, this paper: (a) identifies the most relevant organizational theories used in blockchain literature in the context of LSCM; and (b) examines the content of the identified organizational theories to formulate relevant research questions for investigating blockchain technology in LSCM. Our results show that blockchain literature in LSCM is based around six organizational theories, namely: agency theory, information theory, institutional theory, network theory, the resource-based view and transaction cost analysis. We also present how these theories can be used to examine specific blockchain problems by identifying blockchain-specific research questions that are worthy of investigation.

Journal ArticleDOI
TL;DR: Results show the proposed language-independent features are successful in describing fake, satirical, and legitimate news across three different languages, with an average detection accuracy of 85.3% with RF.
Abstract: Online Social Media (OSM) have been substantially transforming the process of spreading news, improving its speed, and reducing barriers toward reaching out to a broad audience. However, OSM are very limited in providing mechanisms to check the credibility of news propagated through their structure. The majority of studies on automatic fake news detection are restricted to English documents, with few works evaluating other languages, and none comparing language-independent characteristics. Moreover, the spreading of deceptive news tends to be a worldwide problem; therefore, this work evaluates textual features that are not tied to a specific language when describing textual data for detecting news. Corpora of news written in American English, Brazilian Portuguese, and Spanish were explored to study complexity, stylometric, and psychological text features. The extracted features support the detection of fake, legitimate, and satirical news. We compared four machine learning algorithms (k-Nearest Neighbors (k-NN), Support Vector Machine (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGB)) to induce the detection model. Results show our proposed language-independent features are successful in describing fake, satirical, and legitimate news across three different languages, with an average detection accuracy of 85.3% with RF.
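Two of the simpler language-independent statistics of the kind mentioned above (a complexity proxy and a lexical-diversity measure) can be computed as below; the chosen features are illustrative and not the paper's full feature set.

```python
# Hedged sketch: two simple language-independent text features of the kind
# used for fake-news detection (illustrative, not the paper's full set).
import re

def text_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text.lower())
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),  # complexity proxy
        "type_token_ratio": len(set(words)) / max(len(words), 1),    # lexical diversity
    }

print(text_features("Breaking news! Scientists say the moon is made of cheese. Really."))
```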

Journal ArticleDOI
TL;DR: This paper provides a comprehensive survey of existing schemes to ensure that SDN meets the quality of service (QoS) demands of various applications and cloud services; potential future research directions are also identified and discussed.
Abstract: Software defined networking (SDN) is an emerging network paradigm that decouples the control plane from the data plane. The data plane is composed of forwarding elements called switches and the control plane is composed of controllers. SDN is gaining popularity in industry and academia due to its advantages such as centralized, flexible, and programmable network management. The increasing amount of traffic due to the proliferation of Internet of Things (IoT) devices may result in two problems: (1) increased processing load of the controller, and (2) insufficient space in the switches’ flow tables to accommodate the flow entries. These problems may cause undesired network behavior and unstable network performance, especially in large-scale networks. Many solutions have been proposed to improve the management of the flow table, reduce the processing load of the controller, and mitigate security threats and vulnerabilities on the controllers and switches. This paper provides a comprehensive survey of existing schemes to ensure that SDN meets the quality of service (QoS) demands of various applications and cloud services. Finally, potential future research directions are identified and discussed, such as management of the flow table using machine learning.

Journal ArticleDOI
TL;DR: In this article, the authors identify the most commonly used external variables in e-learning, agriculture and virtual reality applications for further validation in an e-learning tool designed for EU farmers and agricultural entrepreneurs.
Abstract: In recent years information and communication technologies (ICT) have played a significant role in all aspects of modern society and have impacted socioeconomic development in sectors such as education, administration, business, medical care and agriculture. The benefits of such technologies in agriculture can be appreciated only if farmers use them. In order to predict and evaluate the adoption of these new technological tools, the technology acceptance model (TAM) can be a valid aid. This paper identifies the most commonly used external variables in e-learning, agriculture and virtual reality applications for further validation in an e-learning tool designed for EU farmers and agricultural entrepreneurs. Starting from a literature review of the technology acceptance model, the analysis based on Quality Function Deployment (QFD) shows that computer self-efficacy, individual innovativeness, computer anxiety, perceived enjoyment, social norm, content and system quality, experience and facilitating conditions are the most common determinants addressing technology acceptance. Furthermore, findings evidenced that the external variables have a different impact on the two main beliefs of the TAM Model, Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). This study is expected to bring theoretical support for academics when determining the variables to be included in TAM extensions.

Journal ArticleDOI
TL;DR: A hybrid deep learning model based on the combination of two deep learning methods, CNN and LSTM, is proposed for detecting SMS spam messages; it is intended to deal with mixed text messages written in Arabic or English.
Abstract: Despite the rapid evolution of Internet protocol-based messaging services, SMS still remains an indisputable communication service in our lives today. For example, several businesses consider that text messages are more effective than e-mails, because 82% of SMSs are read within 5 min, whereas consumers only open one in four e-mails they receive. The importance of SMS for mobile phone users has attracted the attention of spammers. In fact, the volume of SMS spam has increased considerably in recent years with the emergence of new security threats, such as SMiShing. In this paper, we propose a hybrid deep learning model for detecting SMS spam messages. This detection model is based on the combination of two deep learning methods, CNN and LSTM. It is intended to deal with mixed text messages that are written in Arabic or English. For the comparative evaluation, we also tested other well-known machine learning algorithms. The experimental results presented in this paper show that our CNN-LSTM model outperforms the other algorithms, achieving a very good accuracy of 98.37%.
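A minimal Keras sketch of the CNN-LSTM combination, with an illustrative vocabulary size, sequence length, and layer widths; the paper's exact architecture and its tokenization of mixed Arabic/English text are not reproduced here.

```python
# Hedged sketch of a CNN-LSTM text classifier for SMS spam detection.
# Hyperparameters are illustrative, not the authors' configuration.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumed tokenizer vocabulary
MAX_LEN = 100         # assumed padded message length (token IDs)

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 64),
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # CNN extracts local n-gram features
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                       # LSTM models longer-range order
    layers.Dense(1, activation="sigmoid"),                 # spam / ham probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```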

Journal ArticleDOI
TL;DR: To deploy explainable systems, ML models should be improved to make them interpretable and comprehensible, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models.
Abstract: The advent and incorporation of technology in businesses have reformed operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of some products and brands. Artificial intelligence (AI) comes on board as an essential innovative tool for personalization and customizing products to meet specific demands. This research finds that, despite the contribution of AI systems in e-commerce, their ethical soundness is a contentious issue, especially regarding the concept of explainability. The study adopted the use of word cloud analysis, voyance analysis, and concordance analysis to gain a detailed understanding of the idea of explainability as it has been used by researchers in the context of AI. Motivated by a corpus analysis, this research lays the groundwork for a uniform front, thus contributing to a scientific breakthrough that seeks to formulate Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand the models and steps involved in how the black-box decisions of AI systems are made; it provides insights into the decision points, variables, and data used to make a recommendation. This study suggests that, to deploy XAI systems, ML models should be improved to make them interpretable and comprehensible.

Journal ArticleDOI
TL;DR: This work proposes an efficient and high-performing intrusion detection system based on an unsupervised Kohonen Self-Organizing Map (SOM) network, to identify attack messages sent on a Controller Area Network (CAN) bus.
Abstract: The diffusion of embedded and portable communication devices on modern vehicles entails new security risks, since in-vehicle communication protocols are still insecure and vulnerable to attacks. Increasing interest is being given to the implementation of automotive cybersecurity systems. In this work we propose an efficient and high-performing intrusion detection system based on an unsupervised Kohonen Self-Organizing Map (SOM) network, to identify attack messages sent on a Controller Area Network (CAN) bus. The SOM network has found a wide range of applications in intrusion detection because of its high detection rate, short training time, and high versatility. We propose to extend the SOM network to intrusion detection on in-vehicle CAN buses. Many hybrid approaches have been proposed to combine the SOM network with other clustering methods, such as the k-means algorithm, in order to improve the accuracy of the model. We introduce a novel distance-based procedure to integrate the SOM network with the k-means algorithm and compare it with the traditional procedure. The models were tested on a car hacking dataset concerning traffic data messages sent on a CAN bus, characterized by a large volume of traffic with a low number of features and a highly imbalanced data distribution. The experiments showed that the proposed method greatly improved detection accuracy over the traditional approach.
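One plausible way to pair a SOM with k-means in this setting is to train the SOM on CAN frame features and then cluster its codebook vectors, labeling each frame by the cluster of its best-matching unit. The sketch below uses the MiniSom library with illustrative dimensions; it is not the authors' novel distance-based procedure.

```python
# Hedged sketch: SOM + k-means hybrid for clustering CAN bus messages.
# Grid size, feature count and cluster count are illustrative assumptions.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

def fit_som_kmeans(X, grid=10, n_clusters=2):
    som = MiniSom(grid, grid, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.random_weights_init(X)
    som.train_random(X, num_iteration=5000)

    # Cluster the SOM codebook (grid*grid prototype vectors) with k-means.
    codebook = som.get_weights().reshape(grid * grid, X.shape[1])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(codebook)

    # Each message inherits the cluster of its best-matching SOM unit.
    def predict(frame):
        i, j = som.winner(frame)
        return km.labels_[i * grid + j]

    return np.array([predict(x) for x in X])
```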

Journal ArticleDOI
TL;DR: This paper provides a comprehensive analysis of some existing ML classifiers for identifying intrusions in network traffic and proposes an ensemble and adaptive classifier model composed of multiple classifiers with different learning paradigms to address the issue of the accuracy and false alarm rate in IDSs.
Abstract: Due to the extensive use of computer networks, new risks have arisen, and improving the speed and accuracy of security mechanisms has become a critical need. Although new security tools have been developed, the fast growth of malicious activities continues to be a pressing issue that creates severe threats to network security. Classical security tools such as firewalls are used as a first-line defense against security problems. However, firewalls do not entirely or perfectly eliminate intrusions. Thus, network administrators rely heavily on intrusion detection systems (IDSs) to detect such network intrusion activities. Machine learning (ML) is a practical approach to intrusion detection that, based on data, learns how to differentiate between abnormal and regular traffic. This paper provides a comprehensive analysis of some existing ML classifiers for identifying intrusions in network traffic. It also produces a new reliable dataset called GTCS (Game Theory and Cyber Security) that matches real-world criteria and can be used to assess the performance of the ML classifiers in a detailed experimental evaluation. Finally, the paper proposes an ensemble and adaptive classifier model composed of multiple classifiers with different learning paradigms to address the issue of the accuracy and false alarm rate in IDSs. Our classifiers show high precision and recall rates and use a comprehensive set of features compared to previous work.
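A generic way to illustrate an ensemble of classifiers with different learning paradigms is majority voting over heterogeneous base learners, as sketched below with scikit-learn; the adaptive component described in the paper is not reproduced.

```python
# Hedged sketch: majority-vote ensemble over classifiers with different learning paradigms.
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),  # tree-based
        ("nb", GaussianNB()),                                               # probabilistic
        ("knn", KNeighborsClassifier(n_neighbors=5)),                       # instance-based
    ],
    voting="hard",  # simple majority vote over the base classifiers
)

# Example usage, given labeled network-traffic features X and labels y:
# scores = cross_val_score(ensemble, X, y, cv=5)
```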

Journal ArticleDOI
TL;DR: The usability of a news chatbot during a crisis situation is shown, employing the 2020 COVID-19 pandemic as a case study; the chatbot designed was evaluated in terms of effectively fulfilling the social responsibility function of crisis reporting.
Abstract: The use of chatbots in news media platforms, although relatively recent, offers many advantages to journalists and media professionals and, at the same time, facilitates users’ interaction with useful and timely information. This study shows the usability of a news chatbot during a crisis situation, employing the 2020 COVID-19 pandemic as a case study. The basic targets of the research are to design and implement a chatbot in a news media platform with a two-fold aim in regard to evaluation: first, the technical effort of creating a functional and robust news chatbot in a crisis situation both from the AI perspective and interoperability with other platforms, which constitutes the novelty of the approach; and second, users’ perception regarding the appropriation of this news chatbot as an alternative means of accessing existing information during a crisis situation. The chatbot designed was evaluated in terms of effectively fulfilling the social responsibility function of crisis reporting, to deliver timely and accurate information on the COVID-19 pandemic to a wide audience. In this light, this study shows the advantages of implementing chatbots in news platforms during a crisis situation, when the audience’s needs for timely and accurate information rapidly increase.

Journal ArticleDOI
TL;DR: The research background, including IoT architecture, device components, and attack surfaces, is described and state-of-the-art research on IoT device vulnerability discovery, detection, mitigation, and other related works are reviewed.
Abstract: With the prosperity of the Internet of Things (IoT) industry environment, the variety and quantity of IoT devices have grown rapidly. IoT devices have been widely used in smart homes, smart wearables, smart manufacturing, smart cars, smart medical care, and many other life-related fields. At the same time, security vulnerabilities of IoT devices are emerging endlessly. The proliferation of security vulnerabilities will bring severe risks to users’ privacy and property. This paper first describes the research background, including IoT architecture, device components, and attack surfaces. We review state-of-the-art research on IoT device vulnerability discovery, detection, mitigation, and other related work. Then, we point out the current challenges and opportunities through evaluation. Finally, we forecast and discuss research directions on vulnerability analysis techniques for IoT devices.

Journal ArticleDOI
TL;DR: Free-Space Optical Communication is discussed, from mirrors and optical telegraphs to modern wireless systems, and the future development directions of optical communication are outlined.
Abstract: Fast communication is of high importance. Recently, increased data demand and crowded radio frequency spectrum have become crucial issues. Free-Space Optical Communication (FSOC) has diametrically changed the way people exchange information. As an alternative to wire communication systems, it allows efficient voice, video, and data transmission using a medium like air. Due to its large bandwidth, FSOC can be used in various applications and has therefore become an important part of our everyday life. The main advantages of FSOC are a high speed, cost savings, compact structures, low power, energy efficiency, a maximal transfer capacity, and applicability. The rapid development of the high-speed connection technology allows one to reduce the repair downtime and gives the ability to quickly establish a backup network in an emergency. Unfortunately, FSOC is susceptible to disruption due to atmospheric conditions or direct sunlight. Here, we briefly discuss Free-Space Optical Communication from mirrors and optical telegraphs to modern wireless systems and outline the future development directions of optical communication.

Journal ArticleDOI
TL;DR: The objective of this study was to analyze the importance and the projection that artificial intelligence has acquired in the scientific literature in the Web of Science categories related to the field of education; the results show that scientific production has been irregular from its beginnings in 1956 to the present.
Abstract: The social and technological changes that society is undergoing in this century are having a global influence on important aspects such as the economy, health and education. An example of this is the inclusion of artificial intelligence in teaching–learning processes. The objective of this study was to analyze the importance and the projection that artificial intelligence has acquired in the scientific literature in the Web of Science categories related to the field of education. For this, scientific mapping of the reported documents was carried out. Different bibliometric indicators were analyzed and a word analysis was carried out. We worked with an analysis unit of 379 publications. The results show that scientific production has been irregular from its beginnings in 1956 to the present. The language of greatest development is English. The most significant publication area is Education & Educational Research, with conference papers as the main document type. The underlying organization is the Open University UK. It can be concluded that there is an evolution in artificial intelligence (AI) research in the educational field, focusing in recent years on the performance and influence of AI in educational processes.

Journal ArticleDOI
TL;DR: A comparative analysis of different ML and DL models on the Coburg intrusion detection datasets (CIDDS) suggests that both ML and DL methods are robust and complementary techniques for an effective network intrusion detection system.
Abstract: The development of robust anomaly-based network detection systems, which are preferred over static signature-based network intrusion detection, is vital for cybersecurity. The development of a flexible and dynamic security system is required to tackle new attacks. Current intrusion detection systems (IDSs) struggle to attain both a high detection rate and a low false alarm rate. To address this issue, in this paper, we propose an IDS using different machine learning (ML) and deep learning (DL) models. This paper presents a comparative analysis of different ML and DL models on the Coburg intrusion detection datasets (CIDDS). First, we compare different ML- and DL-based models on the CIDDS dataset. Second, we propose an ensemble model that combines the best ML and DL models to achieve high performance metrics. Finally, we benchmarked our best models on the CIC-IDS2017 dataset and compared them with state-of-the-art models. While popular IDS datasets like KDD99 and NSL-KDD fail to represent recent attacks and suffer from network biases, CIDDS, used in this research, encompasses labeled flow-based data in a simulated office environment with both updated attacks and normal usage. Furthermore, both accuracy and interpretability must be considered while implementing AI models. Both ML and DL models achieved an accuracy of 99% on the CIDDS dataset with a high detection rate, a low false alarm rate, and relatively low training costs. Feature importance was also studied using the Classification and Regression Tree (CART) model. Our models performed well in 10-fold cross-validation and independent testing. CART and a convolutional neural network (CNN) with embedding achieved slightly better performance on the CIC-IDS2017 dataset compared to previous models. Together, these results suggest that both ML and DL methods are robust and complementary techniques for an effective network intrusion detection system.

Journal ArticleDOI
TL;DR: This work provides a comprehensive evaluation methodology for threat intelligence standards and cyber threat intelligence platforms existing in the state of the art, based on the selection of the most relevant candidates to establish the evaluation criteria.
Abstract: The cyber security landscape has been fundamentally changing over the past years. While technology is evolving and new sophisticated applications are being developed, a new threat scenario is emerging in alarming proportions. Sophisticated threats with multi-vectored, multi-staged and polymorphic characteristics are performing complex attacks, making the processes of detection and mitigation far more complicated. Thus, organizations are encouraged to change their traditional defense models and to use and develop new systems with a proactive approach. Such changes are necessary because the old approaches are no longer effective at detecting advanced attacks. Organizations are also encouraged to develop the ability to respond to incidents in real time using complex threat intelligence platforms. However, since the field is growing rapidly, the Cyber Threat Intelligence concept today lacks a consistent definition, and a heterogeneous market has emerged, including diverse systems and tools with different capabilities and goals. This work aims to provide a comprehensive evaluation methodology for threat intelligence standards and cyber threat intelligence platforms. The proposed methodology is based on the selection of the most relevant candidates to establish the evaluation criteria. In addition, this work studies the Cyber Threat Intelligence ecosystem and the Threat Intelligence standards and platforms existing in the state of the art.

Journal ArticleDOI
TL;DR: It is demonstrated that facilitated appropriation, perceived usefulness and perceived ease of use, as mediators, significantly influence consumers’ attitude and behavioral intention towards IoT products and applications.
Abstract: A common managerial and theoretical concern is to know how individuals perceive Internet of Things (IoT) products and applications and how to accelerate adoption of them. The purpose of the current study is to answer, “What are the factors that define behavioral intention to adopt IoT products and applications among individuals?” An IoT adoption model was developed and tested, incorporating pull factors from two different information impact sources: technical and psychological. This study employs statistical structural equation modeling (SEM) in order to examine the conceptual IoT acceptance model. It is demonstrated that facilitated appropriation, perceived usefulness and perceived ease of use, as mediators, significantly influence consumers’ attitude and behavioral intention towards IoT products and applications. User character, cyber resilience, cognitive instrumentals, social influence and trust, all with different significance rates, exhibited an indirect effect through the three mediators. The IoT acceptance model (IoTAM) upgrades current knowledge on consumers’ behavioral intention and equips practitioners with the knowledge needed to create successful integrated marketing tactics and communication strategies. It provides a solid base for examining multi-rooted models for the acceptance of newly formed technologies, as it bridges the discontinuity in migrating from information and communication technology (ICT) to IoT adoption studies, a discontinuity that causes distortions in societies’ ability to make informed decisions about IoT adoption and use.

Journal ArticleDOI
TL;DR: This work aims at giving an overview of the current state-of-the-art of the Blockchain-based systems for the Internet of Medical Things, specifically addressing the challenges of reaching user-centricity for these combined systems, and highlighting the potential future directions to follow for full ownership of data by users.
Abstract: Nowadays, there are many new mobile devices that have the potential to assist healthcare professionals in their work and help increase people's well-being. These devices comprise the Internet of Medical Things, but it is generally difficult for healthcare institutions to efficiently bring their systems into compliance with new medical solutions. A technology that promises the sharing of data in a trust-less scenario is Distributed Ledger Technology, through its properties of decentralization, immutability, and transparency. Blockchain and the Internet of Medical Things can be considered to be at an early stage, and there are not yet many implementations that successfully apply the technology. Some aspects covered by these implementations are data sharing, interoperability of systems, security of devices, the opportunity of data monetization, and data ownership, which is the focus of this review. This work aims at giving an overview of the current state of the art of Blockchain-based systems for the Internet of Medical Things, specifically addressing the challenges of reaching user-centricity for these combined systems, and thus highlighting the potential future directions to follow for full ownership of data by users.

Journal ArticleDOI
TL;DR: This paper proposes a method for accelerating Proof of Work based on parallel mining rather than solo mining, ensuring that no two or more miners put the same effort into solving a specific block.
Abstract: A blockchain is a distributed ledger forming a distributed consensus on a history of transactions, and is the underlying technology for the Bitcoin cryptocurrency. Its applications reach far beyond the financial sector. The transaction verification process for cryptocurrencies is much slower than in traditional digital transaction systems. One approach to scalability, or the speed at which transactions are processed, is to design a solution that offers faster Proof of Work. In this paper, we propose a method for accelerating the process of Proof of Work based on parallel mining rather than solo mining. The goal is to ensure that no two or more miners put the same effort into solving a specific block. The proposed method includes a process for the selection of a manager, the distribution of work, and a reward system. This method has been implemented in a test environment that contains all the characteristics needed to perform Proof of Work for Bitcoin and has been tested, using a variety of case scenarios, by varying the difficulty level and the number of validators. Experimental evaluations were performed locally and in a cloud environment, and the experimental results demonstrate the feasibility of the proposed method.
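The non-overlapping-effort idea can be sketched by having a manager hand each miner a disjoint slice of the nonce space, as below. The difficulty encoding (leading zero hex digits) and the omission of the reward system are simplifying assumptions, not the paper's full protocol.

```python
# Hedged sketch: parallel proof-of-work where each miner searches a disjoint
# nonce range, so no two miners repeat the same work on a block.
import hashlib
from multiprocessing import Pool

DIFFICULTY = 5  # assumed: hash must start with this many hex zeros

def mine_range(args):
    block_header, start, end = args
    target = "0" * DIFFICULTY
    for nonce in range(start, end):
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
    return None

def parallel_mine(block_header, workers=4, span=2_000_000):
    # The "manager" hands each worker a disjoint slice of the nonce space.
    chunk = span // workers
    tasks = [(block_header, i * chunk, (i + 1) * chunk) for i in range(workers)]
    with Pool(workers) as pool:
        for result in pool.imap_unordered(mine_range, tasks):
            if result is not None:
                return result  # first valid nonce found wins
    return None

if __name__ == "__main__":
    print(parallel_mine("block-42|prev-hash|merkle-root"))
```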

Journal ArticleDOI
TL;DR: This work argues that self-organizing logic should be largely independent of the specific application deployment, and shows that this separation of concerns can be achieved through a proposed “pulverization approach”: the global system behavior of application services gets broken into smaller computational pieces that are continuously executed across the available hosts.
Abstract: Emerging cyber-physical systems, such as robot swarms, crowds of augmented people, and smart cities, require well-crafted self-organizing behavior to properly deal with dynamic environments and pervasive disturbances. However, the infrastructures providing networking and computing services to support these systems are becoming increasingly complex, layered and heterogeneous—consider the case of the edge–fog–cloud interplay. This typically hinders the application of self-organizing mechanisms and patterns, which are often designed to work on flat networks. To promote reuse of behavior and flexibility in infrastructure exploitation, we argue that self-organizing logic should be largely independent of the specific application deployment. We show that this separation of concerns can be achieved through a proposed “pulverization approach”: the global system behavior of application services gets broken into smaller computational pieces that are continuously executed across the available hosts. This model can then be instantiated in the aggregate computing framework, whereby self-organizing behavior is specified compositionally. We showcase how the proposed approach enables expressing the application logic of a self-organizing cyber-physical system in a deployment-independent fashion, and simulate its deployment on multiple heterogeneous infrastructures that include cloud, edge, and LoRaWAN network elements.