
Showing papers in "Electronics in 2019"


Journal ArticleDOI
TL;DR: This paper presents a brief survey of advances in Deep Learning (DL), starting with the Deep Neural Network and going on to cover the Convolutional Neural Network, the Recurrent Neural Network (RNN), and Deep Reinforcement Learning (DRL).
Abstract: In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and unsupervised learning. Experimental results show state-of-the-art performance using deep learning compared to traditional machine learning approaches in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many others. This paper presents a brief survey of the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover the Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), the Auto-Encoder (AE), the Deep Belief Network (DBN), the Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we discuss recent developments, such as advanced variant DL techniques based on these approaches. This work considers most of the papers published since 2012, when the modern era of deep learning began. Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey, along with recently developed frameworks, SDKs, and benchmark datasets used for implementing and evaluating deep learning approaches. Some surveys have been published on DL using neural networks, and there is a survey on Reinforcement Learning (RL); however, those papers did not discuss individual advanced techniques for training large-scale deep learning models or recently developed generative methods.

922 citations


Journal ArticleDOI
TL;DR: A review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics is provided.
Abstract: Machine learning systems are becoming increasingly ubiquitous. These systems' adoption has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and has focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess explanation quality. Which metrics are most suitable for assessing the quality of an explanation? The aim of this article is to provide a review of the current state of the research field on machine learning interpretability, focusing on its societal impact and on the developed methods and metrics. Furthermore, a complete literature review is presented to identify future directions of work in this field.

813 citations


Journal ArticleDOI
TL;DR: An in-depth review of IoT privacy and security issues, including potential threats, attack types, and security setups from a healthcare viewpoint is conducted and previous well-known security models to deal with security risks are analyzed.
Abstract: The fast development of Internet of Things (IoT) technology in recent years has supported connections of numerous smart things, along with sensors, and established seamless data exchange between them, leading to a stringent requirement for data analysis and data storage platforms such as cloud computing and fog computing. Healthcare is one of the IoT application domains that draws enormous interest from industry, the research community, and the public sector. The development of IoT and cloud computing is improving patient safety, staff satisfaction, and operational efficiency in the medical industry. This survey analyzes the latest IoT components, applications, and market trends of IoT in healthcare, and studies current developments in IoT and cloud computing-based healthcare applications since 2015. We also consider how promising technologies such as cloud computing, ambient assisted living, big data, and wearables are being applied in the healthcare industry, and we survey various IoT and e-health regulations and policies worldwide to determine how they assist the sustainable development of IoT and cloud computing in the healthcare industry. Moreover, an in-depth review of IoT privacy and security issues, including potential threats, attack types, and security setups from a healthcare viewpoint, is conducted. Finally, this paper analyzes previous well-known security models for dealing with security risks and provides trends, highlighted opportunities, and challenges for the future development of IoT-based healthcare.

322 citations


Journal ArticleDOI
TL;DR: A novel drug supply chain management using Hyperledger Fabric based on blockchain technology to handle secureDrug supply chain records is proposed by conducting drug record transactions on a blockchain to create a smart healthcare ecosystem with a drugs supply chain.
Abstract: At present, one of the most serious problems in pharmacology is counterfeit drugs. The Health Research Funding organization reported that in developing countries nearly 10–30% of drugs are fake. Counterfeiting is not the main issue in itself, but rather the fact that, compared to traditional drugs, these counterfeit drugs produce different side effects on human health. According to the WHO, around 30% of the total medicine sold in Africa, Asia, and Latin America is counterfeit. This is a major worldwide problem, and the situation is worse in developing countries, where one out of every ten medicines is either fake or does not follow drug regulations. The rise of Internet pharmacies has made it more difficult to standardize drug safety. It is difficult to detect counterfeits because these drugs pass through different complex distribution networks, creating opportunities for counterfeits to enter the authentic supply chain. The safety of the pharmaceutical supply chain, which is a collective process, has become a major concern for public health. In this paper, we propose a novel drug supply chain management system built on blockchain technology, using Hyperledger Fabric to handle secure drug supply chain records. The proposed system solves this problem by conducting drug record transactions on a blockchain to create a smart healthcare ecosystem with a drug supply chain. A smart contract is launched to give time-limited access to electronic drug records as well as patient electronic health records. We also carried out a number of experiments to demonstrate the usability and efficiency of the designed platform. Finally, we used Hyperledger Caliper as a benchmarking tool to evaluate the performance of the designed system in terms of transactions per second, transaction latency, and resource utilization.
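The tamper-evidence property the paper relies on can be illustrated with plain hash chaining. This is an illustrative sketch only, not Hyperledger Fabric's actual chaincode API; the record fields (`drug`, `event`) are hypothetical.

```python
import hashlib
import json

def make_record(prev_hash, payload):
    """Create a supply-chain record linked to its predecessor by hash.

    Illustrative only: real Hyperledger Fabric manages world state and
    endorsement differently; the payload fields here are made up.
    """
    body = {"prev_hash": prev_hash, "payload": payload}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """Tampering with any earlier record breaks every later link."""
    for i, rec in enumerate(chain):
        body = {"prev_hash": rec["prev_hash"], "payload": rec["payload"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != digest:
            return False
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_record("0" * 64, {"drug": "batch-001", "event": "manufactured"})
shipped = make_record(genesis["hash"], {"drug": "batch-001", "event": "shipped"})
chain = [genesis, shipped]
```

Because each record commits to the hash of the previous one, altering a manufacturing record invalidates every subsequent shipping record, which is the core guarantee a blockchain-backed drug ledger offers.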

240 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic review of wearable textile electrodes in physiological signal monitoring is presented, with discussions on the manufacturing of conductive textiles, metrics to assess their performance as electrodes, and an investigation of their application in the acquisition of critical biopotential signals for routine monitoring, assessment, and exploitation of cardiac (electrocardiography, ECG), neural (electroencephalography, EEG), muscular (electromyography, EMG), and ocular (electrooculography, EOG) functions.
Abstract: Wearable electronics is a rapidly growing field that recently started to introduce successful commercial products into the consumer electronics market. Employment of biopotential signals in wearable systems, as either biofeedback or control commands, is expected to revolutionize many technologies, including point-of-care health monitoring systems, rehabilitation devices, human–computer/machine interfaces (HCI/HMIs), and brain–computer interfaces (BCIs). Since electrodes are regarded as a decisive part of such products, they have been studied for almost a decade now, resulting in the emergence of textile electrodes. This study presents a systematic review of wearable textile electrodes in physiological signal monitoring, with discussions on the manufacturing of conductive textiles, metrics to assess their performance as electrodes, and an investigation of their application in the acquisition of critical biopotential signals for routine monitoring, assessment, and exploitation of cardiac (electrocardiography, ECG), neural (electroencephalography, EEG), muscular (electromyography, EMG), and ocular (electrooculography, EOG) functions.

174 citations


Journal ArticleDOI
TL;DR: BNNs are deep neural networks that use binary values for activations and weights instead of full-precision values, which reduces execution time; their bitwise efficiency makes them good candidates for deep learning implementations on FPGAs and ASICs.
Abstract: In this work, we review Binarized Neural Networks (BNNs). BNNs are deep neural networks that use binary values for activations and weights instead of full-precision values. With binary values, BNNs can execute computations using bitwise operations, which reduces execution time. Model sizes of BNNs are much smaller than their full-precision counterparts. While the accuracy of a BNN model is generally lower than that of full-precision models, BNNs have been closing the accuracy gap and are becoming more accurate on larger datasets like ImageNet. BNNs are also good candidates for deep learning implementations on FPGAs and ASICs due to their bitwise efficiency. We give a tutorial on the general BNN methodology and review various contributions, implementations, and applications of BNNs.
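The bitwise trick behind the BNN speedups described above can be sketched in a few lines: with weights and activations constrained to {-1, +1}, a dot product reduces to an XNOR followed by a popcount. This is a plain-Python illustration of the idea, not a production kernel.

```python
def binarize(xs):
    """Map real values to {-1, +1} by sign."""
    return [1 if x >= 0 else -1 for x in xs]

def pack_bits(vals):
    """Pack a {-1, +1} vector into an int bitmask (bit i set means +1)."""
    return sum(1 << i for i, v in enumerate(vals) if v == 1)

def dot_full(a, w):
    """Reference full-precision dot product on {-1, +1} vectors."""
    return sum(x * y for x, y in zip(a, w))

def dot_xnor_popcount(a_bits, w_bits, n):
    """XNOR marks positions with matching signs; dot = 2*popcount - n."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count("1") - n

a = binarize([0.5, -1.2, 3.0, -0.1])   # -> [1, -1, 1, -1]
w = binarize([1.0, 1.0, -2.0, -0.5])   # -> [1, 1, -1, -1]
```

On hardware, the XNOR and popcount each operate on 32 or 64 weights per instruction, which is where the FPGA/ASIC efficiency mentioned in the abstract comes from.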

165 citations


Journal ArticleDOI
TL;DR: A Multi-Class Combined performance metric is proposed to compare various multi-class and binary classification systems through incorporating FAR, DR, Accuracy, and class distribution parameters and a uniform distribution based balancing approach is developed to handle the imbalanced distribution of the minority class instances in the CICIDS2017 network intrusion dataset.
Abstract: The security of networked systems has become a critical universal issue that influences individuals, enterprises, and governments. The rate of attacks against networked systems has increased dramatically, and the tactics used by the attackers are continuing to evolve. Intrusion detection is one of the solutions against these attacks. A common and effective approach for designing Intrusion Detection Systems (IDS) is machine learning. The performance of an IDS is significantly improved when its features are more discriminative and representative. This study uses two feature dimensionality reduction approaches: (i) the Auto-Encoder (AE), an instance of deep learning, and (ii) Principal Component Analysis (PCA). The resulting low-dimensional features from both techniques are then used to build various classifiers, such as Random Forest (RF), Bayesian Network, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis (QDA), for designing an IDS. The experimental findings with low-dimensional features in binary and multi-class classification show better performance in terms of Detection Rate (DR), F-Measure, False Alarm Rate (FAR), and Accuracy. This research effort is able to reduce the CICIDS2017 dataset's feature dimensions from 81 to 10 while maintaining a high accuracy of 99.6% in multi-class and binary classification. Furthermore, in this paper, we propose a Multi-Class Combined performance metric, Combined_Mc, defined with respect to class distribution, to compare various multi-class and binary classification systems by incorporating FAR, DR, Accuracy, and class distribution parameters. In addition, we developed a uniform-distribution-based balancing approach to handle the imbalanced distribution of the minority class instances in the CICIDS2017 network intrusion dataset.
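The PCA step described above can be sketched directly from its definition: center the features and project onto the top-k singular directions. The data below is synthetic; the paper reduces the 81 CICIDS2017 features to 10.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto its top-k principal components.

    Minimal sketch via SVD; real pipelines would fit on training data
    only and reuse the learned projection at test time.
    """
    Xc = X - X.mean(axis=0)                      # center each feature
    # Rows of Vt are principal directions, ordered by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                    # 100 samples, 8 features
Z = pca_reduce(X, 3)                             # reduced to 3 features
```

The reduced matrix `Z` would then feed the RF/LDA/QDA classifiers named in the abstract in place of the raw high-dimensional flow features.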

163 citations


Journal ArticleDOI
Renzhuo Wan, Shuping Mei, Jun Wang, Min Liu, Fan Yang 
TL;DR: In this article, the Beijing PM2.5 and ISO-NE datasets are analyzed by a novel Multivariate Temporal Convolution Network (M-TCN) model, which shows significant improvements in the prediction accuracy, robustness, and generalization of the model.
Abstract: Multivariate time series prediction has been widely studied in power energy, aerology, meteorology, finance, transportation, etc. Traditional modeling methods have complex patterns and are inefficient at capturing the long-term multivariate dependencies of data for the desired forecasting accuracy. To address such concerns, various deep learning models based on Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) methods have been proposed. To improve prediction accuracy and minimize multivariate time series data dependence for aperiodic data, in this article the Beijing PM2.5 and ISO-NE datasets are analyzed by a novel Multivariate Temporal Convolution Network (M-TCN) model. In this model, multivariate time series prediction is constructed as a sequence-to-sequence scenario for non-periodic datasets. Multichannel parallel residual blocks with an asymmetric structure, based on deep convolutional neural networks, are proposed. The results are compared with rich competitive algorithms: Long Short-Term Memory (LSTM), convolutional LSTM (ConvLSTM), the Temporal Convolution Network (TCN), and Multivariate Attention LSTM-FCN (MALSTM-FCN); they indicate significant improvements in the prediction accuracy, robustness, and generalization of our model.
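The building block of TCN-style models like the M-TCN above is the causal (optionally dilated) 1-D convolution: each output depends only on present and past inputs, never future ones. A minimal single-channel sketch, not the paper's multichannel residual architecture:

```python
def causal_conv1d(x, weights, dilation=1):
    """y[t] depends only on x[t], x[t-d], x[t-2d], ... (no future leakage)."""
    y = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            j = t - i * dilation          # look back, never forward
            if j >= 0:
                acc += w * x[j]
        y.append(acc)
    return y

x = [1.0, 2.0, 3.0, 4.0]
y = causal_conv1d(x, weights=[0.5, 0.5], dilation=1)   # 2-tap moving average
y2 = causal_conv1d(x, weights=[1.0, 1.0], dilation=2)  # dilated: skips one step
```

Stacking such layers with exponentially growing dilations is what lets TCNs cover long histories with few layers, which is the long-term-dependency advantage the abstract claims over RNNs.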

153 citations


Journal ArticleDOI
TL;DR: Concerns for Internet of Things (IoT) devices that depend on the low-latency and reliable communications of URLLC are addressed, and the recent progress of 3rd Generation Partnership Project (3GPP) standardization and the implementation of URLLC are included.
Abstract: To meet diverse industrial and market demands, the International Telecommunication Union (ITU) has classified fifth-generation (5G) services into ultra-reliable low latency communications (URLLC), enhanced mobile broadband (eMBB), and massive machine-type communications (mMTC). Researchers have conducted studies on implementing these service classes efficiently within the available spectrum. This paper aims to highlight the importance of URLLC in light of the approaching era of technology and its industry requirements. While highlighting a few implementation issues of URLLC, concerns for Internet of Things (IoT) devices that depend on the low-latency and reliable communications of URLLC are also addressed. In this paper, the recent progress of 3rd Generation Partnership Project (3GPP) standardization and the implementation of URLLC are covered. Finally, the research areas that are open for further investigation in URLLC implementation are highlighted, and efficient implementation of URLLC is discussed.

135 citations


Journal ArticleDOI
TL;DR: In this paper, a dual-frequency Global Navigation Satellite System (GNSS) smartphone equipped with a Broadcom BCM47755 chip was launched, which can receive L1/E1 and L5/E5 signals from GPS, Galileo, BeiDou, and GLONASS (GLObal NAvigation Satellite System) satellites.
Abstract: In May 2018, the world's first dual-frequency Global Navigation Satellite System (GNSS) smartphone, produced by Xiaomi and equipped with a Broadcom BCM47755 chip, was launched. It is able to receive L1/E1 and L5/E5 signals from GPS, Galileo, BeiDou, and GLONASS (GLObal NAvigation Satellite System) satellites. The main aim of this work is to compute the phone's position using multi-constellation, dual-frequency pseudorange and carrier phase raw data collected from the smartphone. Furthermore, the availability of dual-frequency raw data makes it possible to assess the multipath performance of the device. The smartphone's performance is compared with that of a geodetic receiver. The experiments were conducted in two different scenarios to test the smartphone under different multipath conditions. Smartphone measurements showed a lower C/N0 and higher multipath compared with those of the geodetic receiver. This produced negative effects on single-point positioning, as shown by high root mean square (RMS) errors. The best single-point positioning accuracy was obtained with the E5 measurements, with a DRMS (horizontal root mean square error) of 4.57 m. For the E1/L1 frequency, the 2DRMS was 5.36 m. However, the Xiaomi Mi 8, thanks to the absence of the duty cycle, provided carrier phase measurements that were used for static single-frequency relative positioning, achieving a 2DRMS of 1.02 m and 1.95 m in low- and high-multipath sites, respectively.
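The DRMS and 2DRMS figures quoted above follow from standard definitions: DRMS is the root-mean-square of the horizontal (east, north) position errors, and 2DRMS is twice that value. A small sketch with made-up error samples:

```python
import math

def drms(errors_en):
    """Horizontal RMS error. errors_en: list of (east, north) errors in metres."""
    n = len(errors_en)
    return math.sqrt(sum(e * e + nn * nn for e, nn in errors_en) / n)

# Hypothetical east/north errors of a receiver against a reference position.
errors = [(1.0, 2.0), (-2.0, 1.0), (0.5, -0.5)]
d = drms(errors)       # DRMS
d2 = 2 * d             # 2DRMS: circle containing ~95% of horizontal fixes
```

Under a roughly Gaussian error distribution, about 95% of horizontal fixes fall within the 2DRMS radius, which is why both figures are reported in the abstract.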

131 citations


Journal ArticleDOI
TL;DR: In this paper, the challenges of European variable renewable energy integration in terms of the power capacity and energy capacity of stationary storage technologies were examined, and the feasibility of the European variable renewables energy electricity generation targets and the theoretical maximum related to the 2040 scenarios were explained.
Abstract: Global electricity demand is constantly growing, making the utilization of solar and wind energy sources, which also reduces negative environmental effects, more and more important. These variable energy sources have an increasing role in the global energy mix, including generating capacity. Therefore, the need for energy storage in electricity networks is becoming increasingly important. This paper presents the challenges of European variable renewable energy integration in terms of the power capacity and energy capacity of stationary storage technologies. In this research, the sustainable transition, distributed generation, and global climate action scenarios of the European Network of Transmission System Operators for 2040 were examined. The article introduces and explains the feasibility of the European variable renewable energy electricity generation targets and the theoretical maximum related to the 2040 scenarios. It also explains the determination of the storage fractions and power capacity in a new context. The aim is to clarify whether it is possible to achieve the European variable renewable energy integration targets considering the technology-specific storage aspects. According to the results, energy storage market developments and regulations which motivate the increased use of stationary energy storage systems are of great importance for a successful European solar and wind energy integration. The paper also proves that not only the energy capacity but also the power capacity of storage systems is a key factor for the effective integration of variable renewable energy sources.

Journal ArticleDOI
TL;DR: In this article, the authors developed a mathematical model for the performance degradation of LIDAR as a function of rain-rate and incorporated this model into a simulation of an obstacle-detection system to show how it can be used to quantitatively predict the influence of rain on ADAS that use LidAR.
Abstract: While it is well known that rain may influence the performance of automotive LIDAR sensors commonly used in ADAS applications, there is a lack of quantitative analysis of this effect. In particular, there is very little published work on physically-based simulation of the influence of rain on terrestrial LIDAR performance. Additionally, there have been few quantitative studies on how rain-rate influences ADAS performance. In this work, we develop a mathematical model for the performance degradation of LIDAR as a function of rain-rate and incorporate this model into a simulation of an obstacle-detection system to show how it can be used to quantitatively predict the influence of rain on ADAS that use LIDAR.
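A common physically-based starting point for this kind of model is exponential (Beer-Lambert) power loss over the two-way path, with an extinction coefficient that grows with rain rate as a power law, alpha = a * R**b. The constants below are illustrative placeholders, not the paper's fitted values.

```python
import math

def received_power_ratio(rain_rate_mm_h, range_m, a=0.01, b=0.6):
    """Two-way transmittance of a LIDAR return through rain.

    Sketch under assumed Marshall-Palmer-style power-law extinction;
    a, b, and the 1/km units of alpha are illustrative assumptions.
    """
    alpha = a * rain_rate_mm_h ** b                 # extinction coefficient
    return math.exp(-2 * alpha * range_m / 1000.0)  # out-and-back path

clear = received_power_ratio(0.0, 100.0)    # no rain: no attenuation
heavy = received_power_ratio(25.0, 100.0)   # heavy rain: attenuated return
```

Feeding such a power ratio into a detection threshold is how a simulator can turn a rain rate into a reduced effective detection range for the obstacle-detection system.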

Journal ArticleDOI
TL;DR: This work reviews some recently proposed reconfigurable antenna designs suitable for use in wireless communications such as cognitive-radio (CR), multiple-input multiple-output (MIMO), ultra-wideband (UWB), and 4G/5G mobile terminals.
Abstract: Reconfigurable antennas play important roles in smart and adaptive systems and are the subject of many research studies. They offer several advantages such as multifunctional capabilities, minimized volume requirements, low front-end processing efforts with no need for a filtering element, good isolation, and sufficient out-of-band rejection; these make them well suited for use in wireless applications such as fourth generation (4G) and fifth generation (5G) mobile terminals. With the use of active materials such as microelectromechanical systems (MEMS), varactor or p-i-n (PIN) diodes, an antenna's characteristics can be changed through altering the current flow on the antenna structure. If an antenna is to be reconfigurable into many different states, it needs to have an adequate number of active elements. However, a large number of high-quality active elements increases cost, and necessitates complex biasing networks and control circuitry. We review some recently proposed reconfigurable antenna designs suitable for use in wireless communications such as cognitive-radio (CR), multiple-input multiple-output (MIMO), ultra-wideband (UWB), and 4G/5G mobile terminals. Several examples of antennas with different reconfigurability functions are analyzed and their performances are compared. Characteristics and fundamental properties of reconfigurable antennas with single and multiple reconfigurability modes are investigated.

Journal ArticleDOI
TL;DR: The binary version of HHO (BHHO) is proposed to solve the feature selection problem in classification tasks and shows the superiority of the proposed QBHHO in terms of classification performance, feature size, and fitness values compared to other algorithms.
Abstract: Harris hawk optimization (HHO) is one of the recently proposed metaheuristic algorithms and has proven to work effectively on several challenging optimization tasks. However, the original HHO was developed to solve continuous optimization problems, not problems with binary variables. This paper proposes a binary version of HHO (BHHO) to solve the feature selection problem in classification tasks. The proposed BHHO is equipped with an S-shaped or V-shaped transfer function to convert continuous variables into binary ones. Moreover, another variant of HHO, namely quadratic binary Harris hawk optimization (QBHHO), is proposed to enhance the performance of BHHO. In this study, twenty-two datasets collected from the UCI machine learning repository are used to validate the performance of the proposed algorithms. A comparative study is conducted to compare the effectiveness of QBHHO with other feature selection algorithms such as binary differential evolution (BDE), the genetic algorithm (GA), the binary multi-verse optimizer (BMVO), the binary flower pollination algorithm (BFPA), and the binary salp swarm algorithm (BSSA). The experimental results show the superiority of the proposed QBHHO in terms of classification performance, feature size, and fitness values compared to the other algorithms.
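The S-shaped transfer function mentioned above maps a continuous search-agent coordinate to a probability, which is then sampled to yield a 0/1 feature-selection bit. A minimal sketch assuming the common S1 form, 1/(1+e^-x); the paper's exact variant may differ:

```python
import math
import random

def s_transfer(x):
    """S-shaped (sigmoid) transfer function: continuous value -> probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng):
    """Turn a continuous position vector into a binary feature mask."""
    return [1 if rng.random() < s_transfer(x) else 0 for x in position]

rng = random.Random(42)
# Strongly negative -> almost surely 0; zero -> coin flip; positive -> likely 1.
mask = binarize([-6.0, 0.0, 6.0], rng)
```

Each 1 in the mask keeps the corresponding feature; the classifier's error on that subset then serves as the fitness value driving the hawks' search.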

Journal ArticleDOI
TL;DR: The most important features of the DC/DC converters along with the MPPT techniques are reviewed and analyzed and will provide a useful structure and reference point for researchers and designers working in the field of solar PV applications.
Abstract: Renewable Energy Sources (RES) have shown enormous growth in the last few years. In comparison with other RES, solar power has become the most feasible source because of its unique properties, such as its clean, noiseless, and eco-friendly nature. For the extraction of electric power, DC–DC converters have received prominent interest because of their extensive use in various applications. Photovoltaic (PV) systems generally suffer from low energy conversion efficiency, along with poor stability and intermittency. Hence, a Maximum Power Point Tracking (MPPT) algorithm is necessary to ensure that the maximum available power is harnessed from the solar PV. In this paper, the most important features of DC/DC converters, along with MPPT techniques, are reviewed and analyzed. A detailed comprehensive analysis is made of different converter topologies of both non-isolated and isolated DC/DC converters. Then, modulation strategies and comparative performance evaluations are addressed systematically. Finally, recent advances and future trends are described briefly and considered for next-generation converter design and applications. This review will provide a useful structure and reference point on DC/DC converters for researchers and designers working in the field of solar PV applications.
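One of the classic MPPT techniques such reviews cover is perturb-and-observe (P&O): nudge the operating voltage, and reverse direction whenever output power drops. The PV curve below is a toy stand-in for a real panel model, and the step size and iteration count are arbitrary.

```python
def pv_power(v):
    """Toy PV power curve (watts) with a single maximum near v = 30 V."""
    return max(0.0, 200.0 - (v - 30.0) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iters=100):
    """Classic P&O loop: perturb voltage, reverse if power fell."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                 # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()          # settles near the 30 V maximum
```

The residual oscillation of about one step around the maximum power point is the well-known drawback of P&O that variable-step and incremental-conductance methods aim to reduce.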

Journal ArticleDOI
TL;DR: Deep Learning (DL) and data mining techniques are used for electricity load and price forecasting and the experimental results show that the proposed models outperformed other benchmark schemes.
Abstract: Short-Term Electricity Load Forecasting (STELF) through Data Analytics (DA) is an emerging and active research area. Forecasting electricity load and price provides future trends and patterns of consumption. Losses occur in both the generation and use of electricity, so multiple strategies are used to address these problems. Day-ahead electricity price and load forecasting are beneficial for both suppliers and consumers. In this paper, Deep Learning (DL) and data mining techniques are used for electricity load and price forecasting. XGBoost (XGB), Decision Tree (DT), Recursive Feature Elimination (RFE), and Random Forest (RF) are used for feature selection and feature extraction. An Enhanced Convolutional Neural Network (ECNN) and Enhanced Support Vector Regression (ESVR) are used as classifiers. Grid Search (GS) is used for tuning the parameters of the classifiers to increase their performance. The risk of over-fitting is mitigated by adding multiple layers to the ECNN. Finally, the proposed models are compared with different benchmark schemes for stability analysis. The performance metrics MSE, RMSE, MAE, and MAPE are used to evaluate the performance of the proposed models. The experimental results show that the proposed models outperformed the other benchmark schemes. ECNN performed well with a threshold of 0.08 for load forecasting, while ESVR performed better with a threshold of 0.15 for price forecasting. ECNN achieved almost 2% better accuracy than CNN, and ESVR achieved almost 1% better accuracy than the existing scheme (SVR).
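The four evaluation metrics named above can be computed directly from their standard definitions. The load/prediction values in this sketch are illustrative, not the paper's data.

```python
import math

def metrics(y_true, y_pred):
    """MSE, RMSE, MAE, and MAPE (in percent) from their definitions."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e) / abs(t) for e, t in zip(errs, y_true)) / n
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape}

# Hypothetical hourly loads (MW) and model predictions.
m = metrics([100.0, 200.0, 400.0], [110.0, 190.0, 420.0])
```

Note that MAPE is scale-free (useful for comparing load and price models), while MSE/RMSE penalize large misses more heavily; reporting all four, as the paper does, covers both views.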

Journal ArticleDOI
TL;DR: This paper reviews the current state of the art on IoT architectures for ELEs and healthcare systems, with a focus on the technologies, applications, challenges, opportunities, open-source platforms, and operating systems.
Abstract: Internet of Things (IoT) is an evolution of the Internet and has been gaining increased attention from researchers in both academic and industrial environments. Successive technological enhancements make the development of intelligent systems with a high capacity for communication and data collection possible, providing several opportunities for numerous IoT applications, particularly healthcare systems. Despite all the advantages, there are still several open issues that represent the main challenges for IoT, e.g., accessibility, portability, interoperability, information security, and privacy. IoT provides important characteristics to healthcare systems, such as availability, mobility, and scalability, that offer an architectural basis for numerous high technological healthcare applications, such as real-time patient monitoring, environmental and indoor quality monitoring, and ubiquitous and pervasive information access that benefits health professionals and patients. The constant scientific innovations make it possible to develop IoT devices through countless services for sensing, data fusing, and logging capabilities that lead to several advancements for enhanced living environments (ELEs). This paper reviews the current state of the art on IoT architectures for ELEs and healthcare systems, with a focus on the technologies, applications, challenges, opportunities, open-source platforms, and operating systems. Furthermore, this document synthesizes the existing body of knowledge and identifies common threads and gaps that open up new significant and challenging future research directions.

Journal ArticleDOI
TL;DR: The proposed HIDS is evaluated using the Bot-IoT dataset, which includes legitimate IoT network traffic and several types of attacks; experiments show that the proposed hybrid IDS provides a higher detection rate and a lower false positive rate compared to the SIDS and AIDS techniques.
Abstract: The Internet of Things (IoT) has been rapidly evolving towards making a greater impact on everything from everyday life to large industrial systems. Unfortunately, this has attracted the attention of cybercriminals, who have made IoT a target of malicious activities, opening the door to possible attacks on end nodes. Due to the large number and diverse types of IoT devices, it is a challenging task to protect the IoT infrastructure using a traditional intrusion detection system. To protect IoT devices, a novel ensemble Hybrid Intrusion Detection System (HIDS) is proposed by combining a C5 classifier and a One-Class Support Vector Machine classifier. HIDS combines the advantages of a Signature Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS). The aim of this framework is to detect both well-known intrusions and zero-day attacks with high detection accuracy and low false-alarm rates. The proposed HIDS is evaluated using the Bot-IoT dataset, which includes legitimate IoT network traffic and several types of attacks. Experiments show that the proposed hybrid IDS provides a higher detection rate and a lower false positive rate compared to the SIDS and AIDS techniques.
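The hybrid idea can be sketched as a two-stage alert rule: a signature stage that matches known-attack patterns, backed by an anomaly stage that flags deviation from a normal-traffic profile. The signatures, thresholds, and flow features below are all hypothetical; the paper's actual stages are a trained C5 tree and a One-Class SVM.

```python
# Toy signature set: (protocol, port, pattern) triples for known attacks.
KNOWN_ATTACK_SIGNATURES = {("tcp", 23, "mirai-scan")}

def signature_stage(flow):
    """SIDS-style check: exact match against known attack signatures."""
    return (flow["proto"], flow["port"], flow["pattern"]) in KNOWN_ATTACK_SIGNATURES

def anomaly_stage(flow, normal_mean=500.0, normal_std=100.0, threshold=3.0):
    """AIDS-style check: flag flows far from the learned normal profile."""
    z = abs(flow["bytes"] - normal_mean) / normal_std
    return z > threshold

def hybrid_ids(flow):
    """Alert if either stage fires: signatures catch known attacks,
    the anomaly stage covers zero-days that no signature matches."""
    return signature_stage(flow) or anomaly_stage(flow)

known = {"proto": "tcp", "port": 23, "pattern": "mirai-scan", "bytes": 480.0}
zero_day = {"proto": "udp", "port": 9999, "pattern": "unknown", "bytes": 5000.0}
benign = {"proto": "tcp", "port": 443, "pattern": "unknown", "bytes": 520.0}
```

The OR-combination is what gives the hybrid its coverage; the paper's contribution is choosing stage models (C5, One-Class SVM) whose errors complement each other so the union keeps false alarms low.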

Journal ArticleDOI
TL;DR: Internet of Things applications enabled for smart grids and smart environments, such as smart cities, smart homes, smart metering, and energy management infrastructures are analyzed to investigate the development of the EI based IoT applications.
Abstract: The Energy Internet (EI) has recently been introduced as a new concept that aims to evolve smart grids by integrating several energy forms into an extremely flexible and effective grid. In this paper, we comprehensively analyze Internet of Things (IoT) applications enabled for smart grids and smart environments, such as smart cities, smart homes, smart metering, and energy management infrastructures, to investigate the development of EI-based IoT applications. These applications are promising key areas of the EI concept, since the IoT is considered one of the most important driving factors of the EI. Moreover, we discuss the challenges, open issues, and future research opportunities for the EI concept based on IoT applications and address some important research areas.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a time-optimal velocity planning method for guaranteeing comfort criteria when an explicit reference path is given, and the overall controller and planning method were verified using real-time, software-in-the-loop (SIL) environments for a realtime vehicle dynamics simulation; the performance was then compared with a typical planning approach.
Abstract: The convergence of mechanical, electrical, and advanced ICT technologies, driven by artificial intelligence and 5G vehicle-to-everything (5G-V2X) connectivity, will help to develop high-performance autonomous driving vehicles and services that are usable and convenient for self-driving passengers. Despite widespread research on self-driving, user acceptance remains an essential part of successful market penetration; this forms the motivation behind studies on human factors associated with autonomous shuttle services. We address this by providing a comfortable driving experience while not compromising safety. We focus on the accelerations and jerks of vehicles to reduce the risk of motion sickness and to improve the driving experience for passengers. Furthermore, this study proposes a time-optimal velocity planning method for guaranteeing comfort criteria when an explicit reference path is given. The overall controller and planning method were verified using real-time, software-in-the-loop (SIL) environments for a real-time vehicle dynamics simulation; the performance was then compared with a typical planning approach. The proposed optimized planning shows a relatively better performance and enables a comfortable passenger experience in a self-driving shuttle bus according to the recommended criteria.
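The comfort-constrained planning idea can be illustrated with a jerk-limited S-curve velocity profile; the bounds, step size, and the routine itself are illustrative assumptions, not the paper's planner:

```python
# Build a velocity profile that reaches a target speed as fast as possible
# while keeping acceleration and jerk under comfort bounds.
A_MAX = 1.0   # m/s^2, comfort acceleration bound (assumed value)
J_MAX = 0.6   # m/s^3, comfort jerk bound (assumed value)
DT = 0.01     # s, integration step

def s_curve_profile(v0, v_target):
    """Ramp velocity from v0 to v_target with bounded acceleration and jerk."""
    v, a, profile = v0, 0.0, [v0]
    while v < v_target - 1e-3:
        # Velocity still to be gained while ramping acceleration back to zero.
        ease_margin = a * a / (2.0 * J_MAX)
        if v + ease_margin >= v_target:
            a = max(a - J_MAX * DT, 0.0)    # ease off to avoid overshoot
        else:
            a = min(a + J_MAX * DT, A_MAX)  # push toward the accel bound
        v += a * DT
        profile.append(v)
    return profile

prof = s_curve_profile(0.0, 10.0)
accels = [(b - a) / DT for a, b in zip(prof, prof[1:])]
jerks = [(b - a) / DT for a, b in zip(accels, accels[1:])]
```

The resulting profile never exceeds the acceleration or jerk bounds, which is the property tied to motion-sickness risk in the abstract.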

Journal ArticleDOI
TL;DR: The LoBEMS (LoRa Building and Energy Management System) as mentioned in this paper was built to provide a common platform that integrates multiple vendor-locked systems together with custom sensor devices, providing critical data in order to improve overall building efficiency.
Abstract: This work presents efforts to optimize energy consumption by deploying an energy management system that follows current IoT component/system/platform integration trends through a layered architecture. LoBEMS (LoRa Building and Energy Management System), the proposed platform, was built with the aim of providing a common platform that integrates multiple vendor-locked systems together with custom sensor devices, providing critical data to improve overall building efficiency. The actions that led to the energy savings were implemented as a ruleset controlling the already-installed air conditioning and lighting control systems. This approach was validated in a kindergarten school over a three-year period, resulting in a publicly available dataset that is useful for future and related research. The sensors that feed environmental data to the custom energy management system consist of a set of battery-operated sensors tied to a System on Chip with a LoRa communication interface. These sensors acquire environmental data such as temperature, humidity, luminosity, and air quality, as well as motion. An existing energy monitoring solution was also integrated. This flexible approach can easily be deployed to any building facility, including buildings with existing solutions, without requiring any remote automation facilities. The platform includes data visualization templates that create an overall dashboard, allowing management to identify actions that lead to savings using a set of pre-defined actions, or a manual mode if desired. The integration of the multiple systems (air conditioning, lighting, and energy monitoring) is a key differentiator of the proposed solution, especially since the top energy consumers in modern buildings are cooling and heating systems. As an outcome, the evaluation of the proposed platform showed a 20% energy saving based on these combined energy-saving actions.
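The kind of ruleset described, controlling HVAC and lighting from sensor data, can be sketched as follows; the sensor fields, thresholds, and action names are illustrative assumptions, not LoBEMS's actual configuration:

```python
# Map one environmental reading to a list of energy-saving control actions.
def energy_actions(reading):
    """reading: dict of sensor values and actuator states for one room."""
    actions = []
    if not reading["motion"] and reading["hvac_on"]:
        actions.append("hvac_off")      # empty room: stop cooling/heating
    if reading["lux"] > 500 and reading["lights_on"]:
        actions.append("lights_off")    # enough daylight: cut artificial light
    if reading["co2_ppm"] > 1000:
        actions.append("ventilate")     # air-quality rule
    return actions

# A sunlit, unoccupied room with the AC and lights still running:
actions = energy_actions({"motion": False, "hvac_on": True,
                          "lux": 620, "lights_on": True, "co2_ppm": 480})
```

Keeping the rules as pure data-to-action mappings is what lets such a platform sit on top of existing, vendor-locked actuation systems.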


Journal ArticleDOI
TL;DR: A review of charge pump (CP) topologies for the power management of Internet of Things (IoT) nodes is presented, allowing for quantitative insight into the state-of-the-art of integrated CPs.
Abstract: With the aim of providing designer guidelines for choosing the most suitable solution according to the given design specifications, this paper presents a review of charge pump (CP) topologies for the power management of Internet of Things (IoT) nodes. Power management of IoT nodes is a challenging task, especially when the output of the energy harvester is on the order of a few hundred millivolts. In these applications, the power management section can be profitably implemented using CPs, and many different CP topologies have been presented in the literature. Finally, a data-driven comparison is also provided, allowing quantitative insight into the state of the art of integrated CPs.
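As a concrete anchor for the topologies surveyed, the ideal Dickson charge pump obeys a simple textbook relation (switch and threshold drops neglected; the component values below are illustrative, not taken from the review):

```python
# Ideal N-stage Dickson charge pump: each stage adds roughly V_in, and the
# load current drops the output by I_load * N / (f_clk * C_stage).
def dickson_vout(n_stages, v_in, i_load, f_clk, c_stage):
    """Ideal Dickson CP output voltage (diode/threshold drops neglected)."""
    return (n_stages + 1) * v_in - i_load * n_stages / (f_clk * c_stage)

# e.g. 4 stages pumping a 0.4 V harvester output at 1 MHz with 100 pF stages
# into a 1 uA load:
v_out = dickson_vout(4, 0.4, 1e-6, 1e6, 100e-12)
```

This relation already shows the design trade-off the review quantifies: boosting a few-hundred-millivolt harvester output requires many stages, while load current, clock frequency, and stage capacitance set the achievable output.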

Journal ArticleDOI
TL;DR: A queuing theory-based model is proposed for understanding the working and theoretical aspects of the blockchain and is validated using the actual statistics of two popular cryptocurrencies, Bitcoin and Ethereum, by running simulations for two months of transactions.
Abstract: In recent years, blockchains have attracted considerable attention from researchers, engineers, and institutions, and blockchain implementations have begun to drive a large number of applications ranging from e-finance and e-healthcare to smart homes, the Internet of Things, social security, and logistics. In the blockchain literature, most articles focus on engineering implementation, while little attention has been devoted to the theoretical aspects of these systems; the existing theoretical work is limited to modeling the mining process. In this paper, a queuing theory-based model is proposed for understanding the working and theoretical aspects of the blockchain. We validate the proposed model against the actual statistics of two popular cryptocurrencies, Bitcoin and Ethereum, by running simulations over two months of transactions. The obtained performance measures, such as the number of transactions per block, the mining time of each block, system throughput, memory pool (mempool) count, waiting time in the mempool, the number of unconfirmed transactions in the whole system, the total number of transactions, and the number of generated blocks, are compared with actual statistics and found to be in good agreement. Although the simulation in this paper models blockchain-based cryptocurrencies only, the proposed model can represent a wide range of blockchain-based systems.
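The queue view of a blockchain can be illustrated with a toy simulation: transactions arrive randomly, and each mining interval a block confirms at most a fixed number of the pooled transactions. The arrival rate and block capacity below are made-up numbers, not the paper's fitted parameters or Bitcoin's real figures:

```python
# Toy queue model of a blockchain mempool and block confirmation.
import numpy as np

rng = np.random.default_rng(1)
LAM, CAP, BLOCKS = 2000, 2500, 500   # mean arrivals, block capacity, intervals

arrivals = rng.poisson(LAM, BLOCKS)  # transactions arriving each interval
mempool, confirmed, pool_trace = 0, 0, []
for a in arrivals:
    mempool += a                     # new transactions join the memory pool
    mined = min(mempool, CAP)        # one block confirms up to CAP of them
    mempool -= mined
    confirmed += mined
    pool_trace.append(mempool)

throughput = confirmed / BLOCKS      # mean confirmed transactions per block
```

With capacity above the arrival rate the system is stable and throughput tracks the arrival rate; pushing LAM above CAP makes the mempool grow without bound, which is the congestion regime such queue models are used to study.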

Journal ArticleDOI
TL;DR: In this paper, the authors discuss and present a framework for anxiety level recognition, which is a part of their developed cloud-based VRET system and highlight potential uses of this kind of virtual reality exposure therapy system.
Abstract: Virtual reality exposure therapy (VRET) can have a significant impact on assessing and potentially treating various anxiety disorders. One of the main strengths of VRET systems is that they provide an opportunity for a psychologist to interact with virtual 3D environments and change therapy scenarios according to the individual patient’s needs. However, to do this efficiently, the patient’s anxiety level should be tracked throughout the VRET session. Therefore, in order to fully use all the advantages provided by a VRET system, a mental stress detection system is needed. The patient’s physiological signals can be collected with wearable biofeedback sensors. Signals such as blood volume pulse (BVP), galvanic skin response (GSR), and skin temperature can be processed and used to train anxiety level classification models. In this paper, we combine VRET with mental stress detection and highlight potential uses of this kind of VRET system. We discuss and present a framework for anxiety level recognition, which is part of our developed cloud-based VRET system. Physiological signals of 30 participants were collected during VRET-based public speaking anxiety treatment sessions. The acquired data were used to train a four-level anxiety recognition model (where each level of ‘low’, ‘mild’, ‘moderate’, and ‘high’ refers to a level of anxiety rather than to a separate class of anxiety disorder). We achieved 80.1% cross-subject accuracy (using leave-one-subject-out cross-validation) and 86.3% accuracy (using 10 × 10-fold cross-validation) with a signal fusion-based support vector machine (SVM) classifier.
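The leave-one-subject-out protocol behind the cross-subject accuracy figure can be sketched with scikit-learn on synthetic data: each subject's recordings are held out in turn, so the classifier is never tested on a person it was trained on. The features, group sizes, and classifier settings below are assumptions for illustration, not the study's pipeline:

```python
# Leave-one-subject-out (LOSO) evaluation of a multi-class SVM.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, per_subject = 10, 40

# Synthetic fused features (stand-ins for BVP/GSR/temperature statistics):
# class means shift with anxiety level, plus a per-subject offset.
levels = rng.integers(0, 4, size=n_subjects * per_subject)  # 4 anxiety levels
subject = np.repeat(np.arange(n_subjects), per_subject)
X = (levels[:, None] * 2.0                                  # level signal
     + rng.normal(0, 1, (len(levels), 3))                   # measurement noise
     + rng.normal(0, 0.5, (n_subjects, 3))[subject])        # subject offset

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, levels, groups=subject, cv=LeaveOneGroupOut())
mean_acc = scores.mean()
```

Grouping by subject is what distinguishes the 80.1% cross-subject figure from the higher 10 × 10-fold figure: random folds let the model see each subject during training, while LOSO does not.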

Journal ArticleDOI
TL;DR: In this article, the authors reviewed the techniques available for matching the electric impedance of piezoelectric sensors, actuators, and transducers with their accessories like amplifiers, cables, power supply, receiver electronics and power storage.
Abstract: Any electric transmission line involving the transfer of power or an electric signal requires matching the electric parameters of the driver, source, cable, and receiver electronics. Designing an electric impedance matching circuit for piezoelectric sensors, actuators, and transducers requires careful consideration of the frequencies of operation, the transmitter or receiver impedance, the power supply or driver impedance, and the impedance of the receiver electronics. This paper reviews the techniques available for matching the electric impedance of piezoelectric sensors, actuators, and transducers with their accessories, such as amplifiers, cables, power supplies, receiver electronics, and power storage. Techniques for designing power supplies, preamplifiers, cables, and matching circuits for electric impedance matching with sensors, actuators, and transducers are presented. The paper begins with the common tools, models, and material properties used in the design of electric impedance matching. Common analytical and numerical methods used to develop electric impedance matching networks are reviewed. The role and importance of electrical impedance matching in the overall performance of the transducer system are emphasized throughout. The paper reviews both common and newly reported methods of electrical impedance matching for specific applications, and concludes with special applications and future perspectives considering recent advancements in materials and electronics.
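As a concrete instance of the matching techniques such a review covers, a textbook low-pass L-network can match a purely resistive load up to a higher source resistance; the 50 Ω / 10 Ω / 1 MHz values below are illustrative, not taken from the paper:

```python
# Low-pass L-network: series inductor on the low-resistance side, shunt
# capacitor on the high-resistance side, sized from the loaded Q.
import math

def l_match(r_source, r_load, f):
    """Match r_load up to r_source (requires r_source > r_load, both resistive)."""
    q = math.sqrt(r_source / r_load - 1.0)   # loaded quality factor
    w = 2.0 * math.pi * f
    L = q * r_load / w            # series inductor: X_L = Q * r_load
    C = q / (r_source * w)        # shunt capacitor: X_C = r_source / Q
    return L, C

# Match a 10-ohm resistive transducer to a 50-ohm source at 1 MHz:
L, C = l_match(50.0, 10.0, 1e6)
```

At the design frequency the input impedance of the network terminated in the 10 Ω load is exactly 50 Ω resistive, which is the defining property of the match; the network is only exact at that single frequency, which is why the paper's broadband and application-specific methods go further.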

Journal ArticleDOI
TL;DR: Two new Maximum Power Point Tracking methods which improve the conventional Fractional Open Circuit Voltage method are proposed and it is shown that both methods can accurately estimate the maximum power point voltage, and hence improve the system efficiency.
Abstract: This paper proposes two new Maximum Power Point Tracking (MPPT) methods that improve the conventional Fractional Open-Circuit Voltage (FOCV) method. The main novelty is a switched semi-pilot cell used for measuring the open-circuit voltage. In the first method, this voltage is measured on a semi-pilot cell located at the edge of the PV panel. During the measurement, the semi-pilot cell is disconnected from the panel by a pair of transistors and bypassed by a diode. In the second, Semi-Pilot Panel, method the open-circuit voltage is measured on a pilot panel in a large PV system. The proposed methods are validated using simulations and experiments. It is shown that both methods can accurately estimate the maximum power point voltage, and hence improve system efficiency.
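The FOCV principle both methods refine can be stated in a few lines: the maximum-power-point voltage is approximated as a fixed fraction k of the open-circuit voltage, with the pilot measurement scaled up to the full panel. The value of k and the panel geometry below are illustrative assumptions (k is typically quoted around 0.7 to 0.8 and is panel-dependent):

```python
# Fractional Open-Circuit Voltage estimate from a (semi-)pilot cell reading.
K_FOCV = 0.76  # assumed V_mpp / V_oc ratio for the panel technology

def mpp_voltage_estimate(v_oc_pilot, cells_in_panel, cells_in_pilot=1):
    """Scale the pilot open-circuit voltage to the panel, then apply k."""
    v_oc_panel = v_oc_pilot * cells_in_panel / cells_in_pilot
    return K_FOCV * v_oc_panel

# e.g. a 0.62 V open-circuit reading on one pilot cell of a 36-cell panel:
v_ref = mpp_voltage_estimate(0.62, 36)
```

The appeal of the semi-pilot variants is visible here: the estimate needs only the pilot's open-circuit reading, so the main panel never has to be disconnected from the load, avoiding the power loss of the conventional FOCV measurement.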

Journal ArticleDOI
TL;DR: A blockchain-based medical platform using a smart contract is proposed to secure EMR management; the results reveal that the proposed solution has great potential to accelerate the development of a decentralized digital healthcare ecosystem.
Abstract: Recent advancements in information and communication technology are enabling a significant revolution in e-Health research and industry. In the case of personal medical data sharing, data security and convenience are crucial requirements for the interaction and collaboration of electronic medical record (EMR) systems. However, it is difficult for current systems to meet these requirements, as they have inconsistent structures in terms of security policies and access control models. A new solution direction is essential to enhance data access while regulating it under government mandates on privacy and security, ensuring the accountability of medical data usage. Blockchain seems to pave the way for a revolution in the conventional healthcare industry, benefiting from its unique features such as data privacy and transparency. In this paper, a blockchain-based medical platform using a smart contract is proposed to secure EMR management. This approach provides patients with a comprehensive, immutable log and easy access to their medical information across different departments within the hospital. A case study for a hospital is built on a permissioned network, and a series of experimental tests is performed to demonstrate the usability and efficiency of the designed platform. Lastly, a benchmark study leveraging various performance metrics is made, and the outcomes indicate that the designed platform surpasses existing works in various respects. The results of this work reveal that the proposed solution has great potential to accelerate the development of a decentralized digital healthcare ecosystem.
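This is not the paper's smart contract, but the tamper-evident, append-only log property the platform relies on can be sketched in a few lines of hash chaining (record fields are hypothetical):

```python
# Append-only log where each entry commits to its predecessor's hash, so any
# retroactive edit invalidates every later entry's hash check.
import hashlib
import json

def append_record(chain, record):
    """Append a record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash and link; return False on any tampering."""
    prev = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"patient": "p1", "dept": "cardiology", "note": "ecg"})
append_record(chain, {"patient": "p1", "dept": "lab", "note": "cbc"})
```

On a permissioned blockchain this chaining is combined with consensus and access control, but the immutability argument for the patient's log reduces to exactly this hash-linking check.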

Journal ArticleDOI
TL;DR: This work was funded in part by the Spanish Ministerio de Economia, Industria y Competitividad under research projects TEC2016-78028-C3-2-P, TEC2017-86779-C1-2-R, and TEC2017-86779-C2-2-R, and in part by COLCIENCIAS in Colombia.
Abstract: This work was funded in part by the Spanish Ministerio de Economia, Industria y Competitividad under the research projects TEC2016-78028-C3-2-P, TEC2017-86779-C1-2-R, and TEC2017-86779-C2-2-R, and by COLCIENCIAS in Colombia.

Journal ArticleDOI
TL;DR: The proposed Adaptive Thermal-Aware Routing (ATAR) algorithm is based on a Multi-Ring Routing approach that finds an alternative route when temperature rises; simulation results indicate that the proposed protocol is more efficient in terms of temperature rise and throughput than existing approaches.
Abstract: The recent advancement in information technology and the evolution of the Internet of Things (IoT) have shifted the traditional medical approach to a patient-oriented approach (e.g., telemedicine/telemonitoring). The IoT permits several services, including sensing, processing, and communicating information under physical and bio-medical constraints. A Wireless Body Area Network (WBAN) handles issues pertaining to medical purposes in the form of sensor nodes and a connected network. The WBAN takes human physiological data as input to monitor patient conditions, which are transferred to other IoT components for analysis. Such monitoring and analysis demand a cohesive routing approach to ensure the safe and timely transfer of data. The temperature rise of bio-medical sensor nodes makes the entire routing operation crucial, because the temperature of implanted nodes rises and can ultimately damage body tissues. This calls for dispersing data transmission among different nodes over the available routes while avoiding temperature rise. In this paper, we present the Adaptive Thermal-Aware Routing (ATAR) algorithm for WBANs. ATAR is designed to overcome the temperature-rise issue of implanted bio-medical sensor nodes. The new protocol is based on a Multi-Ring Routing approach that finds an alternative route when temperature increases. Simulation results indicate that the proposed protocol is more efficient in terms of temperature rise and throughput than existing approaches.
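The route-selection idea can be illustrated with a small sketch; the node names, temperature threshold, and selection rule are assumptions for illustration, not the ATAR specification:

```python
# Thermal-aware route selection: among candidate (ring) routes, keep only
# those whose nodes are all below a temperature threshold, and prefer the
# route whose hottest node is coolest, spreading traffic away from hot spots.
THRESHOLD = 38.0  # deg C, illustrative safety limit for implanted nodes

def select_route(routes, temperature):
    """routes: list of node-id lists; temperature: node-id -> deg C."""
    safe = [r for r in routes if all(temperature[n] < THRESHOLD for n in r)]
    if not safe:
        return None  # back off: no thermally safe route right now
    return min(safe, key=lambda r: max(temperature[n] for n in r))

temps = {"a": 36.5, "b": 38.4, "c": 36.9, "d": 37.1}
routes = [["a", "b"], ["a", "c", "d"]]
best = select_route(routes, temps)  # avoids the hot node "b"
```

Re-evaluating this selection as node temperatures are updated is what disperses transmissions across rings and keeps any single implanted node from overheating.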