
Showing papers presented at "Intelligent Systems Design and Applications in 2020"


Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors discuss Intelligent Automation, its internal structure, evolution, and importance in many useful applications (for Industry 4.0), and give a perspective on how it can change the healthcare industry and save millions of lives.
Abstract: Today, in the 21st century, we need digital transformation everywhere to make human life easier and longer. Digital transformation cannot be accomplished by companies and industries without using artificial intelligence (AI, i.e., the analytics process) and the Internet of Things (IoT) together. AI and IoT are necessities of the next decade and of many nations. On another side, technologies such as blockchain and edge computing make the integration of these technologies simpler and faster. In the near future, digital transformation will require more than one technology, i.e., the integration of technologies will be the trend. The term 'Intelligent Automation' refers essentially to the automation of business processes (including general corporate-level processes using BPM and unique task-level processes using RPA) assisted by Artificial Intelligence's analytics and decisions. This work discusses Intelligent Automation, its internal structure, evolution, and importance (with future work) in many useful applications (for Industry 4.0). Finally, Intelligent Automation systems are explained for e-healthcare applications, with a perspective on how they can change the healthcare industry and save millions of lives.

29 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors propose an optimized neural architecture search network (NASNet) for COVID-19 diagnosis; NASNet-Mobile achieves an accuracy, a recall, and an area under the receiver operating characteristic curve (AUC) of 82.42%, 78.16%, and 91.00%, respectively.
Abstract: Deep learning (DL) has potential in the diagnosis of novel coronavirus disease (COVID-19). Nevertheless, the effectiveness of DL models varies when applied to different datasets and in cross-dataset settings. This paper proposes an optimized neural architecture search network (NASNet) for COVID-19 diagnosis. Two forms of NASNet models, namely NASNet-Mobile and NASNet-Large, are applied to a dataset of 3411 computed tomography (CT) lung images freely available in a GitHub repository. For the experimentation, 85% of the total samples are used for training, while the remainder are used for testing. The training and testing losses and the classification accuracies are tracked with respect to the number of epochs. Results show that at epoch 15, NASNet-Mobile has an accuracy, a recall, and an area under the receiver operating characteristic curve (AUC) of 82.42%, 78.16%, and 91.00%, respectively. On the other hand, NASNet-Large has an accuracy, a recall, and an AUC of 81.06%, 80.43%, and 89.00%, respectively.

21 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this article, the authors proposed a new CNN architecture that combines several concepts including parallel convolutional layers with different filter sizes and a global average pooling layer (GAP).
Abstract: Image classification plays a vital role in several computer vision and pattern recognition applications. Multiple classes, corruptions, and heterogeneous and complex shapes make the image classification task extremely challenging. In this article, we introduce a new Convolutional Neural Network (CNN) design that combines several concepts, including parallel convolutional layers with different filter sizes and a global average pooling (GAP) layer. One of the limitations of deep learning is overfitting. To diminish this issue, we apply a GAP layer at the end of the model. Different challenging benchmarks are used for evaluation; specifically, CIFAR-10, CIFAR-100, and MNIST are used in our final experiments. We show that our model surpasses many former models evaluated on the same datasets, demonstrating that the proposed model is effective in both the feature extraction and classification phases.
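To illustrate the global average pooling idea described above (a NumPy sketch, not the authors' implementation), a GAP layer averages each feature map down to a single value, so the classifier needs no large fully connected layer:

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Collapse each HxW feature map to a single scalar by averaging.

    feature_maps: array of shape (height, width, channels).
    Returns a vector of shape (channels,), one value per map,
    which can feed a softmax directly, with no large dense layer.
    """
    return feature_maps.mean(axis=(0, 1))

# Toy example: a 4x4 spatial grid with 3 channels.
fmaps = np.arange(48, dtype=float).reshape(4, 4, 3)
pooled = global_average_pooling(fmaps)
print(pooled.shape)  # (3,)
```

Because GAP has no trainable parameters, it also acts as a structural regularizer, which is the overfitting argument the abstract makes.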

9 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this article, an approach for designing a NoSQL data warehouse from a data lake is proposed; the data lake stores the big data collected from social networks such as Facebook, Twitter, and YouTube, and a set of mapping rules is defined to integrate social media data from the data lake into the NoSQL data warehouse based on two NoSQL logical models.
Abstract: As more social media platforms expand through our lives, the amount of data exchanged across them has risen sharply. Data coming from social network sites can be immensely useful to companies for determining customer trends and increasing operational efficiency to gain a competitive edge. At the same time, traditional decision support systems are unable to meet the growing needs of the modern enterprise to integrate and analyze the wide variety of data generated by social network platforms. This emergence of large amounts of data requires new data management techniques and data storage architectures able to find information quickly in a large volume of data. In this context, a data storage concept known as the data lake appeared, one of the latest technologies introduced to address this challenge. A data lake is a large raw-data repository that stores and manages all company data in raw form before integrating it into the data warehouse. In this paper, we provide a new approach to design a NoSQL data warehouse from a data lake. More precisely, we start by introducing some recent literature on NoSQL data warehouse design approaches. Then, we describe the main concepts of a NoSQL data lake that allows storing the big data collected from social networks such as Facebook, Twitter, and YouTube. Finally, we define a set of mapping rules to integrate social media data from the data lake into the NoSQL data warehouse based on two NoSQL logical models: column-oriented and document-oriented.
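As a rough sketch of what such mapping rules can look like (the field names and structure here are hypothetical, not the paper's actual rules), a raw nested record from the data lake can be mapped to both target logical models:

```python
# Hypothetical raw social-media record as it might sit in a data lake.
raw = {
    "id": "42",
    "user": {"name": "alice", "followers": 120},
    "text": "New phone, who dis?",
    "likes": 7,
}

# (a) Document-oriented model: keep the nested structure as one document.
document = {"_id": raw["id"], **{k: v for k, v in raw.items() if k != "id"}}

# (b) Column-oriented model: flatten nesting into qualified column names
# (family:qualifier style), one row key per post.
def flatten(record, prefix=""):
    cols = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            cols.update(flatten(value, prefix=name + ":"))
        else:
            cols[name] = value
    return cols

row = {"row_key": raw["id"],
       "columns": flatten({k: v for k, v in raw.items() if k != "id"})}
print(row["columns"])
```

The real mapping rules in the paper operate at the schema level; this sketch only shows the structural difference between the two targets.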

8 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors evaluated different deep learning convolutional neural network (CNN) models to classify land cover using multispectral Landsat data collected from the north-eastern region of Egypt along the Nile Valley and Delta regions.
Abstract: Land cover classification is one of the potential and necessary remote sensing topics. Using artificial intelligence algorithms in this type of application reduces the time required for classification in a speedy and accurate manner. This paper assesses different deep convolutional neural networks (CNNs) for classifying land cover using multispectral Landsat data. To evaluate deep learning convolutional models and better exploit existing deep CNNs in remote sensing image classification, we picked two of the most common current convolutional network models: AlexNet and VGG-16. These two deep CNN models were first trained on the Landsat dataset, which had a limited number of images (approximately 500). The dataset images were derived manually from Landsat 5 imagery, and the feature images were categorized into five classes. The dataset was collected from the north-eastern region of Egypt along the Nile Valley and Delta regions. The testing classification accuracy reached 74.8% using AlexNet and 90.2% using VGG-16. Then, using augmentation techniques, the Landsat dataset was expanded to seven times its original size, reaching approximately 3,500 images. On the augmented dataset, the testing classification accuracy was 90.0% using AlexNet and 94.6% using VGG-16.
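A minimal sketch of how geometric augmentation can expand a dataset roughly seven-fold, as described above. The paper does not specify which transforms were used, so the particular set here (rotations and flips) is an assumption for illustration:

```python
import numpy as np

def augment(image):
    """Return 7 variants of an image: the original plus 6 geometric
    transforms, mirroring the seven-fold expansion described above.
    The exact transform set is an assumption, not the paper's."""
    return [
        image,
        np.rot90(image, 1),
        np.rot90(image, 2),
        np.rot90(image, 3),
        np.fliplr(image),
        np.flipud(image),
        np.rot90(np.fliplr(image), 1),
    ]

# Stand-in for ~500 Landsat-derived image patches (8x8, 3 bands here).
dataset = [np.zeros((8, 8, 3)) for _ in range(500)]
augmented = [variant for img in dataset for variant in augment(img)]
print(len(augmented))  # 3500
```

Geometric transforms are a natural fit for overhead imagery because land cover classes are largely orientation-invariant.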

6 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this article, the authors propose a graphical user interface (GUI) that makes the MultiChain platform usable for people with non-technical backgrounds and increases its usability.
Abstract: Blockchain is a revolutionary technology that is gradually changing transaction structures, database systems, and even communication systems. Among many, MultiChain is one of the most prominent platforms for deploying private blockchains. As the MultiChain platform is a script-based tool, it presents a very challenging user experience. From a research point of view, we conducted experiments to find a solution that makes the MultiChain platform usable for people with non-technical backgrounds, and came up with a graphical user interface (GUI) that increases the usability of MultiChain. A within-subject evaluation study showed that the developed GUI system significantly increased the usability of MultiChain. With this new interface, MultiChain can be used in financial, educational, medical, and various other sectors with great ease.

4 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors introduce how formal verification has been applied to verify safety-critical systems and show how model checking can be used to unearth deficiencies in a system and then improve it, applying it to an airbag system, one of the most common safety-critical systems in everyday use.
Abstract: Safety-critical systems are systems that cannot be allowed to fail. Such systems, if they fail, may cause economic damage or even loss of life. As a result, bug fixes and patches are routinely applied to traditional software, but such updates are rarely feasible in safety-critical systems. In this paper, we introduce how formal verification has been applied to verify safety-critical systems. We also show how model checking can be used to unearth deficiencies in a system and then improve it, by applying it to an airbag system, one of the most common safety-critical systems we use every day. Finally, we propose an improved design of such a system with added redundancy that is more robust to faults.
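The essence of the explicit-state model checking the abstract refers to can be sketched as exhaustive reachability analysis over a toy, heavily simplified airbag model (the states and transitions here are hypothetical, not the paper's model):

```python
from collections import deque

# Toy airbag model: state = (crash_detected, airbag_deployed).
def transitions(state):
    crash, deployed = state
    succ = []
    if not crash:
        succ.append((True, deployed))   # a crash can occur at any time
    if crash and not deployed:
        succ.append((crash, True))      # the controller fires the airbag
    return succ

def safe(state):
    crash, deployed = state
    return crash or not deployed        # never deployed without a crash

def check(initial):
    """Exhaustive reachability check, the core of explicit-state model
    checking: explore every reachable state and report any state that
    violates the safety property."""
    seen, frontier, violations = {initial}, deque([initial]), []
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            violations.append(state)
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return violations

print(check((False, False)))  # [] -- the property holds in this tiny model
```

Real model checkers such as those used for airbag controllers handle far larger state spaces symbolically, but the underlying question is the same: is any bad state reachable?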

4 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors proposed an ensemble machine learning algorithm based on an Artificial Neural Network (ANN) and a Genetic Algorithm (GA), deployed on the Apache Spark and Kafka frameworks, for discovering novel knowledge from a big data repository of environmental, epidemiological, and immunological data on previous occurrences of EVD.
Abstract: In recent times, big data has become ubiquitous and can be employed to improve intelligent decision making in diverse real-life application domains such as climatology, agriculture, biomedicine, and epidemiological studies. In this research, we leverage the availability of big data in epidemiological studies to design a data analytics framework for Ebola virus disease (EVD) outbreak surveillance. The perils of an outbreak of infectious diseases such as EVD, Zika virus, severe acute respiratory syndrome (SARS), and human monkeypox are comparable to the effects of natural disasters such as wildfires, tsunamis, floods, and earthquakes on a community. The devastating capability of infectious diseases stems from their ability to strike unexpectedly, giving no elbow room for adequate preparation. Therefore, a real-time early warning surveillance system that anticipates the emergence of infectious diseases such as EVD is required for taking proactive steps to avert an impending outbreak or reduce its impact, rather than reacting to an outbreak. EVD is a deadly infectious disease that rapidly attacks the host's blood and immune system and has caused the death of over 15,000 people in Africa. To mitigate the threats of EVD, we propose an ensemble machine learning algorithm, a hybrid of an Artificial Neural Network (ANN) and a Genetic Algorithm (GA), deployed on the Apache Spark and Kafka frameworks for discovering novel knowledge from a big data repository of environmental, epidemiological, and immunological data on previous occurrences of EVD, using Nigeria as a case study to forecast a future outbreak of EVD in terms of magnitude, timing, and duration, coupled with real-time alerts to the appropriate public health authorities.

4 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this article, the authors present data preprocessing operations and visualisation techniques, carried out on the following datasets: Teaching Assistant Evaluation dataset, Statlog (Australian Credit Approval) dataset, Letter Recognition, Connectionist Bench (Sonar, Mines vs. Rocks) dataset and Poker Hand dataset.
Abstract: This paper presents data preprocessing operations and visualisation techniques, carried out on the following datasets: Teaching Assistant Evaluation dataset, Statlog (Australian Credit Approval) dataset, Letter Recognition, Connectionist Bench (Sonar, Mines vs. Rocks) dataset, and Poker Hand dataset. These datasets are from the University of California Irvine (UCI) Machine Learning Repository. Further, appropriate visualisation techniques are applied to the five selected datasets depending on the properties that are supported by the visualisation techniques used. In the end, this paper offers a template for researchers, data scientists, and other data users, in selecting the right preprocessing operations and appropriate visualisation techniques when using these datasets.

4 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this article, the authors proposed a hybrid recommender system which integrates RNN, LSTM, N-Gram, and Jaccard similarity to provide better results in terms of accuracy than the available models.
Abstract: Nowadays, the abundant availability of online courses powered by well-known universities puts students in a dilemma when choosing the appropriate course according to their interests. To overcome this situation, recommender systems are used to make it easy for students to pick courses and pursue them. This paper proposes a hybrid recommender system that helps recommend online courses for students based on their queries and interests. The hybrid model integrates RNN, LSTM, N-gram, and Jaccard similarity to provide better results in terms of accuracy than the available models. In particular, the use of RNN and LSTM paved the way for better accuracy with the help of Jaccard similarity. The proposed HCRDL model achieved an average F-measure of 96.31% and tends to perform better than the other models by attenuating the weaknesses of common course recommender models.
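Of the components listed, Jaccard similarity is the simplest to illustrate. A minimal sketch (with made-up course data, not the paper's dataset) of ranking courses against a student query:

```python
def jaccard(a, b):
    """Jaccard similarity between two token sets: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical query and course descriptions, tokenized into words.
query = "machine learning with python".split()
courses = {
    "Intro to Machine Learning": "machine learning basics".split(),
    "Python for Data Science": "python data science with pandas".split(),
    "Art History": "renaissance painting survey".split(),
}
ranked = sorted(courses, key=lambda c: jaccard(query, courses[c]), reverse=True)
print(ranked[0])  # Intro to Machine Learning
```

In the hybrid model, this set-overlap score complements the sequence models (RNN/LSTM), which capture word order that Jaccard ignores.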

4 citations


Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, an ontology model for climate change from a geological perspective, considering socio-geological aspects, is presented; knowledge is represented as classes associated with each other by links, visualized using WebVOWL, and converted to OWL format using WebProtégé.
Abstract: Ontology modeling and visualization is quite a tedious task, but it is of utmost importance for highly specialized domains. Modeling and visualization of domains seem ordinary, but the technical aspects and variations of such domains are often missed. Specialized domains that seem shallow need ontology modeling that considers several technical factors. Sometimes there can be an entirely new perspective on a domain that needs to be considered when modeling ontologies. This paper presents an ontology model for climate change from a geological perspective, considering socio-geological aspects. It is developed upon extensive research on the domain. The knowledge is represented as classes associated with each other by links. The ontology is visualized using WebVOWL and converted to OWL format using WebProtégé. The proposed climate ontology has been evaluated both qualitatively and quantitatively; a reuse ratio of 0.95 is obtained.

Book ChapterDOI
12 Dec 2020
TL;DR: In this article, the authors analyzed consumers' attitudes towards processed goods using Natural Language Processing (NLP) techniques, particularly text analysis, and examined the significant issues that concern customers buying processed food during the pandemic.
Abstract: This paper deals with analyzing consumers' attitudes towards processed goods by using Natural Language Processing, particularly text analysis. As a research scholar in his mid-20s, I have always adored canned foods. A doctoral student's life is indeed stressful, and processed food makes it a little bit easier. I frequently purchase canned foods, canned vegetables, and my favorite chicken nuggets from Amazon.com, but as the COVID crisis intensified, I found myself hesitant to shop for processed foods online, for obvious safety reasons. This unprecedented time completely changed the perception of processed food for many consumers. Using data analytics techniques, this essay analyzes customers' sentiments about purchasing processed food during the pandemic and how they differ from normal times. We have also analyzed the significant issues that concern customers buying processed food during the pandemic. We collected experiences that people shared online about processed foods in two time periods. The first period covers consumers' experiences with processed food shared from July 2019 to November 2019 (just before the first COVID case was detected). The second period covers experiences shared from February 2020 to July 2020. The results of our analysis indicate that after the emergence of COVID-19, negative attitudes towards processed foods increased by a considerable percentage. In particular, the month of April saw a massive increase in negative sentiment towards processed food. Quality of the product and safety of the purchasing experience were the two most important concerns voiced by consumers during the pandemic.

Book ChapterDOI
12 Dec 2020
TL;DR: A transfer of learning strategy using the VGG16 architecture is proposed, while the output of the proposed architecture is further compared to the existing NVIDIA architecture.
Abstract: Over the last few years, autonomous vehicles have expanded considerably. Autonomous driving systems are getting more complex and must be tested successfully prior to implementation. We explore a model for high-quality prediction of obstacle avoidance for autonomous vehicles based on images created by a virtual simulation platform, using a VGG16 deep learning technique with transfer learning. This paper proposes a transfer learning strategy using the VGG16 architecture, and the output of the proposed architecture is further compared to the existing NVIDIA architecture. Experimental results indicate that VGG16 with the transfer learning architecture surpassed the other tested methods.

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, a Genetic Search Wrapper-based Naive Bayes anomaly detection model (GSWNB) was proposed for the fog computing environment; it eliminates extraneous features to minimize time complexity and builds an improved model that predicts results with higher accuracy, using the NSL-KDD dataset as the benchmark.
Abstract: Fog computing will provide low-latency connectivity between smartphone devices and the cloud as a complement to cloud computing. Fog devices can, however, face security-related challenges, as fog nodes are near end users and have restricted computing capabilities. Traditional network attacks can compromise the fog node system. While the intrusion detection system (IDS) has been well studied in traditional networks, it may be impractical to apply it directly in the fog environment. Fog nodes produce large quantities of data, and thus enabling the IDS over big data in the fog context is of the utmost importance. To counter some of these network attacks, an IDS can be used as a proactive security defense technology in the fog environment; data mining techniques for network anomaly detection and network event classification have proven efficient and accurate. This research presents a Genetic Search Wrapper-based Naive Bayes anomaly detection model (GSWNB) in the fog computing environment that eliminates extraneous features to minimize time complexity and builds an improved model that predicts results with higher accuracy, using the NSL-KDD dataset as the benchmark. In the experiment, the proposed model demonstrates a high overall performance of 99.73% accuracy, keeping the false positive rate as low as 0.006.
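A compact sketch of the wrapper idea described above: a genetic algorithm searches over feature-subset bitmasks. The fitness function here is a mock score (an assumption for illustration; the paper instead evaluates a Naive Bayes classifier on NSL-KDD features):

```python
import random

random.seed(0)
N_FEATURES = 8

def fitness(mask):
    """Stand-in for wrapper evaluation. In the paper this would be the
    accuracy of Naive Bayes trained on the selected features; here it is
    a mock score (an assumption) that rewards two 'informative' features
    (indices 0 and 1) and slightly penalizes every extra feature."""
    if not any(mask):
        return 0.0
    return 0.5 + 0.25 * mask[0] + 0.25 * mask[1] - 0.01 * sum(mask)

def evolve(pop_size=20, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_gen = pop[:2]                      # elitism: keep the best two
        while len(next_gen) < pop_size:
            a, b = random.sample(pop[:10], 2)   # pick parents from the top half
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]           # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
            next_gen.append(child)
        pop = next_gen
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Because the search evaluates whole subsets against the (mock) classifier score, it is a wrapper method, as opposed to filter methods that rank features independently of any classifier.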

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, a data collection problem is tackled in heterogeneous robot networks where an unmanned aerial vehicle (UAV) acting as a data fusion center (DFC) collects data from M robots (autonomous ground vehicles) with infinite-sized buffers.
Abstract: This work tackles a data collection problem in heterogeneous robot networks where an unmanned aerial vehicle (UAV) acting as a data fusion center (DFC) collects data from M robots (autonomous ground vehicles) with infinite-sized buffers. In each time slot, the DFC chooses K robots to send data via K orthogonal channels. The DFC knows neither the buffer states of the robots nor the statistics of the data arrival (DA) process; it only has information on the outcomes of previous transmission attempts. It aims to derive low-complexity algorithms achieving maximum total throughput. Whenever it is scheduled, a robot can send data unless its buffer is empty. A simple algorithm, the Uniforming Random Ordered Policy (UROP), is suggested. Under a broad class of DA processes, it is shown to achieve near throughput-optimality over a finite time horizon. Simulation results demonstrate that even with reasonably sized finite buffers, the algorithm remains near throughput-optimal over finite time horizons.

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, an open-vocabulary approach based on character-level recognition was proposed for handwritten Arabic recognition, using a CRNN model with a CTC beam search decoder.
Abstract: Offline Arabic text recognition is a substantial problem with several important applications. It has attracted special emphasis and has become one of the challenging areas of research in the field of computer vision. Deep Neural Network (DNN) algorithms provide great performance improvements in sequence recognition problems such as speech and handwriting recognition. This paper focuses on recent Arabic handwritten text recognition research based on DNNs. Our contribution in this work is a CRNN model with a CTC beam search decoder, used for the first time for handwritten Arabic recognition. The proposed system is an open-vocabulary approach based on character-level recognition.
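The core of CTC decoding is easy to illustrate in its greedy form: collapse repeated frame labels, then remove blanks. Beam search, as used in the paper, extends this by keeping multiple candidate prefixes per timestep; this sketch shows only the greedy special case:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a frame-wise best-path labelling into an output sequence:
    merge consecutive repeats, then drop blanks. This is the greedy
    special case of the CTC decoding the paper performs with beam search."""
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Frames, e.g. the per-timestep argmax of the CRNN's softmax; 0 is the blank.
frames = [0, 1, 1, 0, 2, 2, 2, 0, 0, 2, 3]
print(ctc_greedy_decode(frames))  # [1, 2, 2, 3]
```

Note how the blank separating the two runs of label 2 lets the decoder emit a doubled character, which is exactly why CTC needs the blank symbol.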

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, a comprehensive review of the existing approaches for detecting abusive messages from social media in the Arabic language is presented, which extend from the use of traditional machine learning to the incorporation of the latest deep learning architectures.
Abstract: The pervasiveness of social networks in recent years has revolutionized the way we communicate. Every person now has the chance to freely and anonymously share his or her thoughts, opinions, and ideas in real time. However, social media platforms are not always considered a safe environment due to the increasing propagation of abusive messages that severely impact the community as a whole. The rapid detection of abusive messages remains a challenge for social platforms, not only because of the harm they may cause to users but also because of their impact on the quality of service the platforms provide. Furthermore, the detection task proves to be more difficult when content is generated in a specific language known for its complexity, richness, and specificities, such as the Arabic language. The aim of this paper is to provide a comprehensive review of the existing approaches for detecting abusive messages from social media in the Arabic language. These approaches extend from the use of traditional machine learning to the incorporation of the latest deep learning architectures. Additionally, background on abusive messages and the specificities of the Arabic language is presented. Finally, challenges are described for better analysis and identification of future directions.

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, a machine learning-based approach considering both the neurological and physiological measures is proposed to evaluate the users' emotions (UX) while playing a serious game, which is simulated through an experimental study to evaluate UX of an educational serious game.
Abstract: The importance of evaluating user experience (UX) is increasing gradually, and so are the varieties of UX evaluation methods. In order to keep interactive computing and entertainment systems sustainable in business, satisfying user needs is a must, and UX evaluation plays a vital role in this respect. Moreover, gaming experience is largely impacted by users' emotions. Among various types of games, the serious game is a particular type that provides some purpose along with common gaming entertainment. The objective of this research is to show how objective methods (neurological and physiological measures) can be used to infer users' emotional experience. To attain this objective, a machine learning-based approach considering both neurological and physiological measures is proposed to evaluate users' emotions (UX) while playing a serious game. The proposed approach is demonstrated through an experimental study evaluating the UX of an educational serious game, 'Programming Hero'. The findings of the study indicate that neurological and physiological measures of UX evaluation can infer users' emotions while playing a serious game.

Book ChapterDOI
12 Dec 2020
TL;DR: In this article, the authors compare different deep learning imaging detectors for human detection in SAR images in the presence of volcanic activity in Ecuador, and show that a slim version of the model YOLOv3, while using less computing resources and fewer parameters than the original model, still achieves comparable detection performance and is therefore more appropriate for SAR approaches with limited computing resources.
Abstract: Human casualties in natural disasters have motivated technological innovations in Search and Rescue (SAR) activities. Difficult access to places where fires, tsunamis, earthquakes, or volcanic eruptions occur delays rescue activities. Thus, technological advances have gradually been finding their purpose in helping to identify the best locations to deploy available resources and efforts to improve rescue processes. In this scenario, the use of Unmanned Aerial Vehicles (UAVs) and Computer Vision (CV) techniques can be extremely valuable for accelerating SAR activities. However, the computing capabilities of this type of aerial vehicle are scarce, and time to make decisions is also critical when determining the next steps. In this work, we compare different Deep Learning (DL) imaging detectors for human detection in SAR images. A setup with drone-mounted cameras and mobile devices for drone control and image processing is put in place in Ecuador, where volcanic activity is frequent. The main focus is on inference time in DL approaches, given the dynamic environment where decisions must be fast. Results show that a slim version of the YOLOv3 model, while using less computing resources and fewer parameters than the original model, still achieves comparable detection performance and is therefore more appropriate for SAR approaches with limited computing resources.

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the eXtreme Gradient Boosting (XGB) algorithm was used to predict the severity of chronic kidney disease without considering serum creatinine as a predictive value.
Abstract: Chronic Kidney Disease (CKD) is one of the most neglected chronic diseases worldwide and a global public health issue. CKD affects worldwide morbidity and mortality through other conditions such as diabetes and hypertension, and treatment can be very costly. However, CKD could be prevented or delayed by inexpensive interventions. Once CKD prediction is successful, we can improve quality control in the diagnosis and treatment of chronic kidney disease. This paper applies six different classification algorithms to predict CKD stages without considering serum creatinine as a predictive value. The results show that the eXtreme Gradient Boosting (XGB) algorithm provides higher accuracy in classification and prediction performance for determining the severity stage of CKD, reaching a precision level similar to another approach that considers serum creatinine in the classification model.
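The principle behind XGBoost, sequentially fitting weak learners to the current residuals, can be sketched with decision stumps and squared loss (omitting XGBoost's regularization and second-order refinements; this is not the paper's model and uses toy data):

```python
import numpy as np

def fit_stump(x, residual):
    """Find the threshold split on one feature that best fits the residual."""
    best = (np.inf, None, 0.0, 0.0)
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]

def boost(x, y, rounds=50, lr=0.3):
    """Gradient boosting for squared loss: each stump fits the residuals
    left by the ensemble so far, the mechanism XGBoost builds on."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return pred, stumps

# Toy regression: a step function that a single stump can explain.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
pred, _ = boost(x, y)
print(np.round(pred, 2))
```

Each round shrinks the residual by the learning-rate factor, so the ensemble converges geometrically on this separable toy problem.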

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors summarize design factors and communication requirements for FANETs, examining the components of FANETs, including the challenges induced by directional antennas.
Abstract: The extension of ad hoc networks from MANETs (Mobile Ad hoc Networks) and VANETs (Vehicular Ad hoc Networks) to FANETs (Flying Ad hoc Networks) has eased coverage, expansion, and real-time features in communication systems. FANET use cases range from military and non-combat situations to agriculture. The deployment of traditional omnidirectional antennas falls short in addressing spatial reuse. Alternatively, implementing directional antennas can alleviate this problem by increasing network performance on these and other specific parameters. Employing directional antennas during communication can extend the communication radius among UAVs and to ground connections. The article begins by summarizing design factors and communication requirements for FANETs, then examines the components of FANETs, including the challenges induced by directional antennas. Problems that demand further assessment are summarized, aiming to guide interested researchers in this area.

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors verify the efficiency of SLIC together with CNNs, using SLIC as a preprocessing technique and CNNs as a classification method, and find that the results do not favor the use of SLIC over the original images.
Abstract: With the increase in the world population, it is necessary to increase agricultural production. Technology in the field aims to help producers achieve greater productivity without neglecting care for the environment. One of the problems encountered by farmers is plant disease, which can cause great damage to their crops. Thus, the use of automatic disease detection techniques by means of a computational method can be an alternative to solve this problem. However, a problem in using automatic techniques is the lack of data, and the use of methods to augment existing datasets is a challenge. The objective of this work is to verify the efficiency of SLIC together with CNNs, using SLIC as a preprocessing technique and CNNs as a classification method. Finally, the results are not encouraging for the use of SLIC compared to using the original images.

Book ChapterDOI
12 Dec 2020
TL;DR: In this article, a classification of Cooperative Advanced Driver Assistance Systems (C-ADAS) is proposed to prevent accidents and enhance the road users' safety by considering different types of situations and proposing various functionalities.
Abstract: The huge number of vehicles driving on the road can cause dangerous accidents that lead to severe impacts on human safety, vehicle damage, and traffic flow efficiency. Therefore, Cooperative Advanced Driver Assistance Systems (C-ADAS) have been proposed to prevent accidents and enhance road users' safety. C-ADAS seem to have considerable potential for improving road safety and traffic efficiency by considering different types of situations and proposing various functionalities. This fact has motivated numerous research efforts on ADAS classification. In this paper, we propose a classification that allocates C-ADAS to four different categories on the basis of their functionalities and types of applications. Moreover, we review existing research on C-ADAS applications, with different approaches and algorithms based on recent trends. We also discuss the reviewed works and give recommendations and research challenges for producing better and more robust C-ADAS.

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors proposed two methods using the Martingale framework that are able to detect changes and minimise the noise effect in a labelled electromagnetic data set, making some improvements over previous approaches within the Martingale framework.
Abstract: Existing algorithms are able to find changes in data streams but they often struggle to distinguish between a real change and noise. This fact limits the effectiveness of current algorithms. In this paper, we propose two methods using the Martingale framework that are able to detect changes and minimise the noise effect in a labelled electromagnetic data set. Results show that the proposed methods make some improvements over the previous approaches within the Martingale framework.
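The power martingale at the heart of this framework can be sketched in a few lines. The strangeness measure (distance to a sliding-window mean), the deterministic tie-breaking, and the alarm threshold below are illustrative choices for a minimal sketch, not the paper's proposed methods.

```python
def martingale_change_detector(stream, epsilon=0.92, threshold=1.1, window=20):
    """Power-martingale change detector (minimal sketch).

    Each point's strangeness is its distance to the mean of a sliding
    window of recent points; its p-value is the (tie-adjusted) fraction
    of past strangeness values at least as large.  Under no change the
    p-values stay moderate and M drifts downward, while a run of small
    p-values after a change makes M grow past the threshold.
    Returns the index of the first alarm, or None.
    """
    M = 1.0
    history = []  # past strangeness values
    for i, x in enumerate(stream):
        recent = stream[max(0, i - window):i] or [x]
        s = abs(x - sum(recent) / len(recent))
        bigger = sum(1 for t in history if t > s)
        equal = sum(1 for t in history if t == s)
        p = (bigger + 0.5 * (equal + 1)) / (len(history) + 1)
        history.append(s)
        M *= epsilon * p ** (epsilon - 1)  # power-martingale update
        if M > threshold:
            return i
    return None
```

On a stream that jumps from 0 to 50 at index 50, the martingale climbs past the threshold shortly after the jump, while a constant stream never raises an alarm.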

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors compared the performance of four widely used deep learning models in terms of accuracy and processing time to determine the best one for real-time diagnostic applications, including ResNet-50, AlexNet, GoogLeNet and VGG16.
Abstract: The past decade has shown considerable growth in the field of deep learning techniques, changing the context of several areas of research. In medicine, deep learning techniques have achieved encouraging results with additional precision in the processing of various image datasets, such as brain MRI, chest X-ray, and retinal imaging. For example, many human organs can be scanned quickly and at a lower cost using X-ray machines, which are widely available in hospitals and clinics. It is common practice for expert radiologists to interpret various radiographic images manually. Training a deep learning network with these images provides medical staff with valuable help in diagnosing COVID-19 patients. Such a scenario especially helps developing countries, where X-ray machines are available but experts are in short supply. This study aims to compare the effectiveness of four widely used deep learning models, ResNet-50, AlexNet, GoogLeNet, and VGG16, in terms of accuracy and processing time to determine the best one. The findings indicate that processing time is proportional to accuracy: ResNet-50 had the highest diagnostic accuracy but was the slowest in processing time. Both CPU (serial) and GPU (parallel) platforms were used in this comparative study. When the models ran on a CPU platform, the processing time was in the range of 1–1.5 s, but it dramatically decreased to the range of 7.1–20.7 ms when running on a parallel platform (GPU). Hence, the GPU platform was deemed best suited for real-time diagnostic applications.
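A minimal version of the accuracy-versus-latency measurement used in such comparisons can be written as a generic harness. The model here is just a placeholder callable, and the warm-up step mirrors common practice when timing GPU inference (first calls pay one-off initialisation costs); this is a sketch, not the paper's benchmarking code.

```python
import time

def benchmark_model(predict, inputs, labels, warmup=2):
    """Measure accuracy and mean per-sample latency (ms) of `predict`.

    `predict` maps one input to a predicted class label.  A few warm-up
    calls are made first and their results discarded, so one-time setup
    costs do not distort the timing.
    """
    for x in inputs[:warmup]:  # warm-up, results discarded
        predict(x)
    correct = 0
    start = time.perf_counter()
    for x, y in zip(inputs, labels):
        if predict(x) == y:
            correct += 1
    elapsed = time.perf_counter() - start
    return correct / len(inputs), 1000.0 * elapsed / len(inputs)
```

Running the same harness with each model on CPU and GPU backends gives directly comparable (accuracy, latency) pairs.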

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, an incremental k-prototypes algorithm based on the merge technique is proposed to handle the attribute learning task for mixed data, and the results show improved performance over the standard k-prototypes algorithm for incremental attribute learning.
Abstract: Humongous amounts of data are continuously generated by thousands of data sources, which simultaneously send records comprising a wide variety of mixed-type elements, such as electronic purchases and information from social networks. These records should be processed sequentially and incrementally over flexible time windows and then used for different analyses. To handle such data streams, whose new instances may include new attributes that have to be learned as the stream evolves, this paper tackles the incremental attribute learning task for mixed data using the k-prototypes algorithm. Firstly, we propose a novel incremental k-prototypes algorithm based on the merge technique to ensure the attribute learning task. Subsequently, we present experiments evaluating the new method on several real mixed data sets. The results show an improvement in performance over the standard k-prototypes algorithm when used for incremental attribute learning.
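At the core of k-prototypes is a dissimilarity that combines numeric and categorical attributes; a minimal sketch of the standard formulation follows (the incremental merge technique proposed in the paper is not reproduced here, and the attribute indices and γ weight are illustrative).

```python
def kprototypes_distance(x, prototype, num_idx, cat_idx, gamma=1.0):
    """Mixed-type dissimilarity used by k-prototypes (minimal sketch).

    Squared Euclidean distance over the numeric attributes plus `gamma`
    times the number of mismatched categorical attributes.
    """
    d_num = sum((x[i] - prototype[i]) ** 2 for i in num_idx)
    d_cat = sum(1 for i in cat_idx if x[i] != prototype[i])
    return d_num + gamma * d_cat
```

Each record is assigned to the prototype minimising this distance; γ controls how strongly categorical mismatches count against numeric closeness.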

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors compared the accuracy of different processing methods for joint-coordinate features in an artificial neural network, to determine which method yields a more accurate network, and also applied two types of training: combined and individual.
Abstract: This work presents an application of a convolutional neural network (CNN) to recognize gestures through images captured by a Kinect sensor. The objective is to compare the accuracy of different processing methods for joint-coordinate features in the artificial neural network, to determine which method yields a more accurate network. The Microsoft Research Cambridge (MSRC-12) dataset is used; it consists of sequences of human body articulation movements represented by the Kinect skeleton. In addition, the FastDTW algorithm is employed to normalize the number of data frames. Three different methods are proposed in this paper for training the CNN: the 3D coordinates method, the subtraction method and the normalization method. We have also applied two types of training: combined and individual. Using combined training, the 3D coordinates method obtained an accuracy rate of 87.50%, the subtraction method 87.16% and the normalization method 76.93%. With individual training, the average accuracy rate is 91.70% for the 3D coordinates method, 92.48% for the subtraction method and 83.54% for the normalization method.
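A subtraction-style preprocessing of skeleton frames amounts to expressing every joint relative to a reference joint, making the features invariant to where the body stands in the sensor frame. The sketch below is an assumption-laden illustration, not the paper's code: the frame layout (list of (x, y, z) joints) and the reference-joint index are hypothetical.

```python
def center_on_reference(frame, ref_joint=0):
    """Express each joint of one skeleton frame relative to a reference
    joint (index 0 here is an arbitrary choice for illustration).

    frame: list of (x, y, z) joint coordinates.
    Returns a new list where the reference joint sits at the origin.
    """
    rx, ry, rz = frame[ref_joint]
    return [(x - rx, y - ry, z - rz) for (x, y, z) in frame]
```

Applied per frame before FastDTW alignment, this removes the absolute position of the subject from the coordinate features.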

Book ChapterDOI
12 Dec 2020
TL;DR: In this article, the authors present a survey on the evolution of software, XML documents, databases, and data warehouses, focusing on research issues related to versioning.
Abstract: The evolution phenomenon has always spanned fields as various as politics, economy, law, health, education and technology. Indeed, it is an incremental process that consists in accommodating unavoidable changes to answer new real-life requirements within the respective environments. Concretely, this is performed by updating, adding or removing rules, services or simply any business knowledge. In computer science, many research axes have so far emerged to reflect such evolution needs, as witnessed by the literature on evolution issues within software, databases, data warehouses and ontologies. To keep an evolution history, several versions naturally have to be managed and saved. This survey aims at analyzing research issues related to the versioning of software, XML documents, databases and data warehouses. Ontology researchers may be inspired by advances made in these research fields and apply the techniques developed there to manage ontology versions.

Book ChapterDOI
12 Dec 2020
TL;DR: In this article, two global path planners, namely the Dijkstra and A* algorithms, were evaluated in two different environments (symmetrical and asymmetrical) to compare the performance of both algorithms.
Abstract: In this work, two global path planners are evaluated, specifically the Dijkstra and A* algorithms. For this evaluation, a mobile robot running ROS (Robot Operating System) is used: the TurtleBot 3 Burger, which has open-source software. Tests were carried out in two different environments (symmetrical and asymmetrical) in order to evaluate the performance of both algorithms. The results showed slightly better performance for the Dijkstra algorithm compared to the A* algorithm.
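Since Dijkstra is A* with a zero heuristic, both planners compared here can be sketched with a single routine on an occupancy grid. This is a toy stand-in for the ROS costmap, not the TurtleBot implementation.

```python
import heapq

def astar_grid(grid, start, goal, heuristic=None):
    """A* on a 4-connected grid of unit-cost cells.

    With heuristic=None (h = 0) this reduces to Dijkstra's algorithm.
    `grid` is a list of strings where '#' marks an obstacle.
    Returns the length of the shortest path, or None if unreachable.
    """
    h = heuristic or (lambda a, b: 0)
    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start, goal), 0, start)]  # (f, g, cell) priority queue
    best = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc), goal), ng, (nr, nc)))
    return None

def manhattan(a, b):
    """Admissible heuristic for 4-connected grids (turns Dijkstra into A*)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```

Both variants return the same optimal path length; A* typically expands fewer cells, while Dijkstra's uniform expansion can be competitive on small maps, consistent with the close results reported here.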

Book ChapterDOI
12 Dec 2020
TL;DR: In this paper, the authors presented an improvement of state-of-the-art closed-loop active model diagnosis (CLAMD), which utilizes weighted Bhattacharyya coefficients evaluated at the vertices of the polytopic constraint set to provide a good tradeoff between computational efficiency and satisfactory input choice for separation of candidate models of a system.
Abstract: This manuscript presents an improvement of state-of-the-art Closed-Loop Active Model Diagnosis (CLAMD). The proposed method uses weighted Bhattacharyya coefficients evaluated at the vertices of the polytopic constraint set to provide a good trade-off between computational efficiency and a satisfactory input choice for separating the candidate models of a system. A simulation of a dynamical system shows that the closed-loop performance is not susceptible to the combination of candidate models. Additionally, the broad applicability of CLAMD is demonstrated in an automated visual inspection application, which involves sequentially determining the optimal object inspection region for the next measurement. Compared to the conventional approach of using one full image to recognize handwritten digits from the MNIST dataset, the novel CLAMD approach needs significantly (up to 78%) less data to achieve similar accuracy.
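For two univariate Gaussian candidate models, the Bhattacharyya coefficient has a simple closed form, which conveys why it is a useful separability score: it is 1 for identical models and approaches 0 as they become easy to tell apart. This sketch covers only the coefficient itself; the paper's closed-loop weighting and vertex evaluation over the polytopic constraint set are omitted.

```python
import math

def bhattacharyya_coefficient(mu1, var1, mu2, var2):
    """Bhattacharyya coefficient between two univariate Gaussians.

    Computed as exp(-D_B), where D_B is the Bhattacharyya distance:
    D_B = (mu1-mu2)^2 / (4(var1+var2)) + 0.5*ln((var1+var2)/(2*sqrt(var1*var2)))
    Values near 1 mean the models overlap heavily (hard to separate);
    values near 0 mean they are well separated.
    """
    db = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2) \
         + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))
    return math.exp(-db)
```

An active-diagnosis input can then be chosen to drive the candidate models' predicted output distributions toward low coefficients, i.e. maximal separation.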