
Showing papers presented at "Intelligent Systems Design and Applications in 2019"


Book ChapterDOI
03 Dec 2019
TL;DR: A new deep convolutional neural network model is proposed to classify 118 fruit classes, achieving 100% accuracy on an image set split from the training set and 99.6% on the test set, outperforming previous methods.
Abstract: Fruit classification is a challenging task due to the many types of fruit. To classify fruits more effectively, we propose a new deep convolutional neural network model to classify 118 fruit classes. The proposed model combines two aspects of convolutional neural networks: traditional and parallel convolutional layers. The parallel convolutional layers are employed with different filter sizes for better feature extraction. They also help with backpropagation, since the error can backpropagate through multiple paths. To avoid the vanishing gradient problem and to obtain better feature representations, we use residual connections. We trained and tested our model on the Fruits-360 dataset. Our model achieved an accuracy of 100% on an image set split from the training set and 99.6% on the test set, outperforming previous methods.
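
A minimal sketch (not the authors' exact architecture) of the idea described above: parallel convolutions with different kernel sizes whose outputs are concatenated, plus a residual shortcut. The filter sizes, layer counts, and training settings are illustrative assumptions; only the 100x100 Fruits-360 image size and the 118 classes come from the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers

def parallel_residual_block(x, filters):
    # Parallel convolutional paths with different kernel sizes (assumed 1/3/5).
    p1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    p3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    p5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
    merged = layers.Concatenate()([p1, p3, p5])
    merged = layers.Conv2D(filters, 1, padding="same")(merged)
    # Residual (skip) connection to ease gradient flow.
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    return layers.Activation("relu")(layers.Add()([merged, shortcut]))

inputs = layers.Input(shape=(100, 100, 3))            # Fruits-360 images are 100x100
x = parallel_residual_block(inputs, 32)
x = layers.MaxPooling2D()(x)
x = parallel_residual_block(x, 64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(118, activation="softmax")(x)  # 118 fruit classes
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```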

10 citations


Book ChapterDOI
03 Dec 2019
TL;DR: This research work proposes a mathematical model to select a service provider based on a sampled set of non-functional attributes and shows that the model can also be applied to Cloud Service Providers.
Abstract: Successful service allocation is the result of successful decisions made by the Cloud Service Provider. The decisions must be effective and timely for an organization to survive, gain competitive advantage, and increase profitability. In this scenario, the increasing number of Cloud Service Providers poses the challenge of measuring services on the basis of non-functional attributes as well. In the literature, researchers have developed optimization techniques to distribute the workload. Much work has been done to optimize functional attributes, but through this research work we propose a mathematical model to select the service provider based on a sampled set of non-functional attributes.
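
The abstract does not reproduce the proposed mathematical model, so the sketch below only illustrates the general idea of ranking providers by non-functional attributes with a normalized weighted sum; the providers, attributes, and weights are hypothetical.

```python
# Hypothetical providers and non-functional attributes; weights are illustrative.
providers = {
    "ProviderA": {"availability": 0.999, "response_time_ms": 120, "cost_per_hour": 0.45},
    "ProviderB": {"availability": 0.995, "response_time_ms": 80,  "cost_per_hour": 0.60},
    "ProviderC": {"availability": 0.990, "response_time_ms": 95,  "cost_per_hour": 0.30},
}
weights = {"availability": 0.5, "response_time_ms": 0.3, "cost_per_hour": 0.2}
lower_is_better = {"response_time_ms", "cost_per_hour"}   # inverted before weighting

def normalise(attr, value):
    values = [p[attr] for p in providers.values()]
    lo, hi = min(values), max(values)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return 1.0 - score if attr in lower_is_better else score

def score(provider):
    return sum(weights[a] * normalise(a, v) for a, v in provider.items())

ranking = sorted(providers, key=lambda name: score(providers[name]), reverse=True)
print(ranking)
```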

7 citations


Book ChapterDOI
03 Dec 2019
TL;DR: In this paper, an intelligent solid waste monitoring system is developed using Internet of Things (IoT) and cloud computing technologies, where the fill level of solid waste in each of the containers, which are strategically situated across the communities, is detected using ultrasonic sensors.
Abstract: Indiscriminate disposal of solid waste is a major issue in urban centers of most developing countries, and it poses a serious threat to the healthy living of citizens. Access to reliable data on the state of solid waste at different locations within the city will help both the local authorities and the citizens to manage the menace effectively. In this paper, an intelligent solid waste monitoring system is developed using Internet of Things (IoT) and cloud computing technologies. The fill level of solid waste in each of the containers, which are strategically situated across the communities, is detected using ultrasonic sensors. A Wireless Fidelity (Wi-Fi) communication link is used to transmit the sensor data to an IoT cloud platform known as ThingSpeak. Depending on the fill level, the system sends an appropriate notification message (in the form of a tweet) to alert the relevant authorities and concerned citizens for necessary action. The fill level is also monitored on ThingSpeak in real time. The system performance shows that the proposed solution may prove useful for efficient waste management in smart and connected communities.
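
A minimal sketch of the reporting loop, assuming an HC-SR04-style ultrasonic sensor (the driver below is a placeholder) and ThingSpeak's public "update" REST endpoint; the write API key, container depth, and 80% alert threshold are illustrative assumptions.

```python
import time
import requests

THINGSPEAK_WRITE_KEY = "YOUR_WRITE_API_KEY"   # hypothetical key
CONTAINER_DEPTH_CM = 100.0                    # assumed container depth

def read_distance_cm():
    # Placeholder for the ultrasonic driver (echo-pulse timing on the device);
    # returns the distance from the sensor to the waste surface.
    return 35.0

def fill_level_percent(distance_cm):
    level = (CONTAINER_DEPTH_CM - distance_cm) / CONTAINER_DEPTH_CM * 100.0
    return max(0.0, min(100.0, level))

while True:
    level = fill_level_percent(read_distance_cm())
    # Push the fill level to a ThingSpeak channel field.
    requests.get("https://api.thingspeak.com/update",
                 params={"api_key": THINGSPEAK_WRITE_KEY, "field1": level},
                 timeout=10)
    if level >= 80.0:
        # The paper sends a tweet at this point; here we only log the alert condition.
        print(f"ALERT: container is {level:.0f}% full")
    time.sleep(60)
```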

7 citations


Book ChapterDOI
03 Dec 2019
TL;DR: In this paper, the authors applied deep learning models based on two pre-trained Convolutional Neural Networks (CNNs) to diagnose tuberculosis from 148 Ziehl-Neelsen stained sputum smear microscopic images from two different datasets.
Abstract: Tuberculosis is a contagious disease and one of the leading causes of death, especially in low- and middle-income countries such as Uganda. While there are several ways to diagnose tuberculosis, sputum smear microscopy is the most commonly practised method. However, this method can be error prone and requires trained medical personnel who are not always readily available. In this research, we apply deep learning models based on two pre-trained Convolutional Neural Networks, VGGNet and GoogLeNet Inception v3, to diagnose tuberculosis from 148 Ziehl-Neelsen stained sputum smear microscopic images drawn from two different datasets. These networks are used in three different scenarios: fast feature extraction without data augmentation, fast feature extraction with data augmentation, and fine-tuning. Our results show that using Inception v3 for fast feature extraction without data augmentation produces the best results, with an accuracy of 86.7%. This provides a much better approach to disease diagnosis based on the use of diverse datasets from different sources, and the results of this work can be leveraged in medical imaging for faster tuberculosis diagnosis.
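
A hedged sketch of the "fast feature extraction" scenario: a pre-trained Inception v3 (ImageNet weights) is frozen and used as a fixed feature extractor, and a simple classifier is trained on the extracted features. The stand-in arrays replace the actual sputum-smear images, and the logistic-regression head is an assumption, not necessarily the authors' classifier.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from sklearn.linear_model import LogisticRegression

# Frozen ImageNet-pretrained backbone with global average pooling.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False

def extract_features(images):
    # images: float array of shape (n, 299, 299, 3) with values in [0, 255]
    return base.predict(preprocess_input(images.copy()), verbose=0)

# Stand-ins for the microscopy images and their TB-positive/negative labels.
X_train = np.random.rand(8, 299, 299, 3) * 255.0
y_train = np.array([0, 1] * 4)

clf = LogisticRegression(max_iter=1000).fit(extract_features(X_train), y_train)
```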

6 citations


Book ChapterDOI
03 Dec 2019
TL;DR: Neural network training algorithms are discussed briefly, and a Neuro-Fuzzy system structure and Self-Organizing Maps (SOM) are implemented, demonstrating the strengths of the proposed work.
Abstract: In this paper, neural network training algorithms are discussed briefly. The correlation coefficients (training, validation, testing) obtained with the Levenberg-Marquardt algorithm for superconductivity have been examined graphically. The same analysis has been performed for the Bayesian Regularization algorithm and the Scaled Conjugate Gradient algorithm. Mean Square Error and regression have been calculated for training, validation, and testing using the Bayesian Regularization, Scaled Conjugate Gradient, and Levenberg-Marquardt algorithms on a superconductor dataset. The target variable is the critical temperature of the superconductor. The regression values of the Scaled Conjugate Gradient, Bayesian Regularization, and Levenberg-Marquardt algorithms for the superconductor dataset are 0.809214, 0, and 0.854644, respectively, which indicates that the Levenberg-Marquardt algorithm provides the comparatively largest regression (R) value among them in the validation state. An error histogram with 20 bins has been presented visually for simulations with the Bayesian Regularization, Levenberg-Marquardt, and Scaled Conjugate Gradient algorithms. A Neuro-Fuzzy system structure and Self-Organizing Maps (SOM) have also been implemented in this paper, demonstrating the strengths of the proposed work. The main benefit of SOM is that it is a useful multivariate visualization technique that allows multidimensional data to be displayed as a two-dimensional map.
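
A minimal NumPy sketch of a Self-Organizing Map, the visualization technique highlighted above; it is not the paper's MATLAB setup, and the grid size, learning schedule, and the random stand-in for the superconductor feature matrix are all illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates, used by the Gaussian neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            # Best-matching unit (BMU) for this sample.
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            weights += lr * np.exp(-dist2 / (2 * sigma ** 2))[..., None] * (x - weights)
            step += 1
    return weights

# Stand-in for the superconductor feature matrix (81 features in the UCI dataset).
data = np.random.rand(200, 81)
som = train_som(data)
print(som.shape)   # (10, 10, 81): each 2-D grid unit holds a prototype vector
```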

6 citations


Book ChapterDOI
03 Dec 2019
TL;DR: In this paper, the authors presented an investigation of the so-called word embeddings, a recent machine learning paradigm, with the aim of detecting plagiarism in French documents and achieved a plagiarism detection accuracy of 62% using Gutenberg project novels.
Abstract: Plagiarism is the act of using the ideas of another without naming the source. Plagiarism is unacceptable and can be viewed as cheating and stealing. Plagiarism detection is necessary but complicated, as it often faces significant challenges given the large amount of material on the World Wide Web and the limited access to a substantial part of it. This paper presents an investigation of so-called word embeddings, a recent machine learning paradigm, with the aim of detecting plagiarism in French documents. The proposed model performs better than state-of-the-art methods and achieves a plagiarism detection accuracy of 62% using Gutenberg Project novels.
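
An illustrative sketch only, not the paper's model: each document is represented as the average of its word embeddings, and pairs of documents are compared with cosine similarity, flagging a pair above a chosen threshold. The tiny embedding table, the French tokens, and the 0.9 threshold are hypothetical; a real system would use pre-trained French vectors.

```python
import numpy as np

embeddings = {                        # hypothetical 3-dimensional word vectors
    "le": np.array([0.1, 0.3, 0.2]),
    "chat": np.array([0.7, 0.1, 0.5]),
    "dort": np.array([0.2, 0.8, 0.1]),
    "felin": np.array([0.6, 0.2, 0.5]),
}

def doc_vector(tokens):
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

sim = cosine(doc_vector(["le", "chat", "dort"]), doc_vector(["le", "felin", "dort"]))
print("suspected plagiarism" if sim > 0.9 else "no match", round(sim, 3))
```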

5 citations


Book ChapterDOI
03 Dec 2019
TL;DR: The genealogy of AI is explored, fundamental differences between traditional science and data science are elucidated, and it is concluded that truth in data science may be regarded as ‘post-truth’ intrinsically different from truth in traditional science.
Abstract: Artificial Intelligence has developed from the early Symbolic AI to the present Statistical AI, which has yielded a new kind of science, namely data science. In the present article we first explore the genealogy of AI, and then elucidate fundamental differences between traditional science and data science from different points of view, inter alia, in light of the nature of workflow taken, data collected, knowledge generated, law abstracted, goal pursued, and truth thus reached. And we thereby articulate the fundamental tension between the traditional conception of science and the novel conception of science as exemplified by data science. We finally conclude that truth in data science may be regarded as ‘post-truth’ intrinsically different from truth in traditional science.

5 citations


Book ChapterDOI
03 Dec 2019
TL;DR: This work presents the Bayesian Anomaly Detection And Classification (BADAC) formalism, which provides a unified statistical approach to classification and anomaly detection within a hierarchical Bayesian framework and introduces a new metric, the Rank-Weighted Score (RWS), that is particularly suited to evaluating an algorithm’s ability to detect anomalies.
Abstract: Statistical uncertainties are rarely incorporated into machine learning algorithms, especially for anomaly detection. Here we present the Bayesian Anomaly Detection And Classification (BADAC) formalism, which provides a unified statistical approach to classification and anomaly detection within a hierarchical Bayesian framework. BADAC deals with uncertainties by marginalising over the unknown, true, value of the data. Using simulated data with Gaussian noise as an example, BADAC is shown to be superior to standard algorithms in both classification and anomaly detection performance in the presence of uncertainties. Additionally, BADAC provides well-calibrated classification probabilities, valuable for use in scientific pipelines.
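
A hedged numerical sketch of the core idea rather than the full BADAC formalism: with Gaussian noise of known standard deviation, marginalising over the unknown true value simply adds the measurement variance to the class variance, giving p(d | class) = N(d; mu_class, sigma_class^2 + sigma_d^2); a datum with negligible evidence under all known classes is flagged as an anomaly. The class parameters, priors, and anomaly threshold are illustrative.

```python
import numpy as np
from scipy.stats import norm

classes = {"A": (0.0, 1.0), "B": (5.0, 0.5)}   # illustrative (mean, std) per class
prior = {"A": 0.5, "B": 0.5}

def classify(d, sigma_d, anomaly_threshold=1e-4):
    # Likelihood of d under each class, marginalised over the unknown true value.
    like = {c: norm.pdf(d, mu, np.sqrt(s ** 2 + sigma_d ** 2))
            for c, (mu, s) in classes.items()}
    evidence = sum(prior[c] * like[c] for c in classes)
    if evidence < anomaly_threshold:
        return "anomaly", None
    posterior = {c: prior[c] * like[c] / evidence for c in classes}
    return max(posterior, key=posterior.get), posterior

print(classify(4.8, sigma_d=0.3))    # -> class "B", with calibrated probabilities
print(classify(30.0, sigma_d=0.3))   # -> flagged as an anomaly
```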

5 citations


Book ChapterDOI
03 Dec 2019
TL;DR: A model is suggested which includes feature extraction from three types of features (acoustic, prosodic, and phonetic); classification is achieved using several machine learning classifiers, and the results show that the proposed model can be highly recommended for discriminating PD patients from healthy individuals, with an accuracy of 99.50% obtained by a Support Vector Machine.
Abstract: Parkinson’s disease (PD) is a neurodegenerative disease ranked second after Alzheimer’s disease. It affects the central nervous system and causes a progressive and irreversible loss of neurons in the dopaminergic system, which insidiously leads to cognitive, emotional, and language disorders. To date there is no specific medication for this disease; the drug treatments that exist are purely symptomatic, which encourages researchers to consider non-drug techniques. Among these techniques, speech processing has become a relevant and innovative field of investigation, together with machine-learning algorithms that provide promising results in distinguishing between PD patients and healthy people. Moreover, many other factors, such as the feature extraction method, the number and type of features, and the classifiers used, all influence the evaluation of prediction accuracy. The aim of this study is to show the importance of this last factor. A model is suggested which includes feature extraction from three types of features (acoustic, prosodic, and phonetic), and classification is achieved using several machine learning classifiers. The results show that the proposed model can be highly recommended for discriminating PD patients from healthy individuals, with an accuracy of 99.50% obtained by a Support Vector Machine (SVM).
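
A hedged sketch of the evaluation step only: several classifiers are compared by cross-validation on a pre-computed feature matrix (acoustic, prosodic, and phonetic features per recording). The random stand-in data, the particular classifiers, and their settings are assumptions, not the study's exact configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(120, 40)           # stand-in feature matrix
y = np.random.randint(0, 2, 120)      # stand-in labels (0 = healthy, 1 = PD)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf", C=10, gamma="scale"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```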

4 citations


Book ChapterDOI
03 Dec 2019
TL;DR: A scheme for the detection and prevention of grayhole attacks is proposed for MANETs, dynamic-topology wireless networks in which a group of mobile nodes is interconnected with other nodes.
Abstract: A Mobile Ad-hoc Network (MANET) is a wireless network with dynamic topology in which a group of mobile nodes is interconnected with other nodes. Nodes communicate by cooperating with one another and therefore do not require a centralized administrator. Because of their dynamic topology, MANETs are infrastructure-less, which makes it easy for any malicious node to enter the network and disrupt its normal functioning; security is thereby compromised, and MANETs consequently suffer from a wide range of attacks. One of these attacks is the grayhole attack, in which a malicious node silently enters the network and partially drops the data packets passing through it. Because only some packets are dropped, a grayhole node is difficult to detect. We propose a scheme for the detection of, and prevention from, grayhole attacks.

4 citations


Book ChapterDOI
03 Dec 2019
TL;DR: A new feature extractor is created that functions as the Mask R-CNN kernel for lung image segmentation, yielding highly effective and promising results that evidently surpass the standard results generated by Mask R-CNN.
Abstract: The automatic segmentation of lung images is a major challenge in the processing and analysis of medical images, as many lung pathologies are classified as severe; according to the World Health Organization, such conditions cause about 250,000 deaths each year and will be the third leading cause of death in the world by 2030. Mask R-CNN is a recent and excellent convolutional neural network model for the detection, localization, and segmentation of natural image object instances. In this study, we created a new feature extractor that functions as the Mask R-CNN kernel for lung image segmentation, yielding highly effective and promising results. It brings a new approach to training that significantly reduces the number of images the convolutional network needs to generate good results, thereby also decreasing the number of iterations performed during learning. The model obtained results that evidently surpass the standard results generated by Mask R-CNN.

Book ChapterDOI
03 Dec 2019
TL;DR: A web-based system that provides information for discovering the blood bank centers and human donors in closest proximity during emergencies, to help users obtain blood faster rather than going from one hospital to another in search of a specific blood type.
Abstract: Blood is a connective tissue of the body and one of the most critical elements of human life. The shortage of this life-saving fluid has become a recurrent problem for delivering medical care in many countries, because in emergencies relatives of patients run around to find a specific blood type when it is unavailable at the medical institution, without adequate information on the closest available source. While there are existing blood bank management systems that help locate available blood bank centers with the needed blood type, they do not provide information on the nearest center and donor. This research therefore developed a web-based system that provides information for discovering the blood bank centers and human donors in closest proximity during emergencies. Web development technologies were used, and the Google Maps API was used to track, calculate, and display the location of each blood bank and donor. The system thus helps users obtain blood faster, rather than going from one hospital to another in search of a specific blood type, to reduce the number of deaths caused by lack of blood during emergencies.
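
An illustrative sketch of the "nearest source" computation: the great-circle (haversine) distance between the patient and each registered blood bank or donor with a matching blood type. The paper uses the Google Maps API for this; the coordinates and records below are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))        # mean Earth radius ~6371 km

sources = [
    {"name": "Central Blood Bank", "lat": 6.5244, "lon": 3.3792, "type": "O+"},
    {"name": "Donor J. Doe",       "lat": 6.4550, "lon": 3.3841, "type": "O+"},
    {"name": "City Hospital Bank", "lat": 6.6000, "lon": 3.3500, "type": "A-"},
]
patient = {"lat": 6.4654, "lon": 3.4064, "needs": "O+"}

matches = [s for s in sources if s["type"] == patient["needs"]]
nearest = min(matches,
              key=lambda s: haversine_km(patient["lat"], patient["lon"], s["lat"], s["lon"]))
print(nearest["name"])
```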

Book ChapterDOI
03 Dec 2019
TL;DR: In this paper, a novel event detection method is proposed that avoids the use of user-predefined thresholds, which makes solutions more general and easier to deploy; a new set of features is extracted from a time window whenever a peak is detected, and the peak is classified with a Neural Network.
Abstract: Fall Detection (FD) has drawn the attention of the research community for several years. A possible solution relies on on-wrist wearable devices that include tri-axial accelerometers and perform FD autonomously. This type of approach makes use of an event detection stage followed by some pre-processing and a final classification stage. The event detection stage is usually performed using thresholds or a combination of thresholds and finite state machines. In this research, a novel event detection method is proposed that avoids the use of user-predefined thresholds; this is the main contribution of this study. It is worth noticing that avoiding the use of thresholds makes solutions more general and easier to deploy. Moreover, a new set of features is extracted from a time window whenever a peak is detected, and the peak is classified with a Neural Network. The proposal is evaluated using UMA Fall, one of the publicly available simulated fall detection data sets.
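
A hedged sketch of the event-detection and feature stage: peaks in the acceleration magnitude are located without a user-defined amplitude threshold (only a minimum separation between candidate events), and simple statistics are computed over a window around each peak for a downstream classifier. The sampling rate, window length, and chosen statistics are illustrative, not the paper's values.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 50                     # assumed sampling rate (Hz)
WINDOW = FS * 2             # assumed 2-second window around each peak

def detect_events(ax, ay, az):
    magnitude = np.sqrt(ax ** 2 + ay ** 2 + az ** 2)
    peaks, _ = find_peaks(magnitude, distance=FS)   # no amplitude threshold
    features = []
    for p in peaks:
        w = magnitude[max(0, p - WINDOW // 2): p + WINDOW // 2]
        features.append([w.mean(), w.std(), w.max(), w.min(), np.percentile(w, 90)])
    return np.array(features)       # one feature row per candidate event

# Stand-in tri-axial accelerometer signal (e.g. one UMA Fall recording).
t = np.arange(0, 30, 1 / FS)
ax = 1.0 + 0.1 * np.random.randn(len(t))
ay, az = 0.1 * np.random.randn(len(t)), 0.1 * np.random.randn(len(t))
print(detect_events(ax, ay, az).shape)
```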

Book ChapterDOI
03 Dec 2019
TL;DR: This paper aims to recognize cluster structures using internal clustering validity indexes in order to improve the results of the external indexes, and it improves upon state-of-the-art methods.
Abstract: Clustering is a set of unsupervised Machine Learning techniques that automatically recognize the associations between samples in datasets. Unsupervised techniques use optimization methods to discover unreviewed patterns within data. Hence, selecting the ideal clustering solution is a complex task that depends directly on the problem at hand. Market segmentation, image comprehension, topic modeling, and social network analysis are some examples of data pattern recognition areas, and each of these examples requires a different clustering-geometry approach. The choice of an appropriate clustering solution usually optimizes the number of clusters using internal validation measures, which indicate both the separability of and the distance relations between the clusters’ entities. Meanwhile, external indexes reflect the clustering efficiency and assess data distributions using the true labels. This paper aims to recognize cluster structures using internal clustering validity indexes in order to improve the results of the external indexes. The work improves upon state-of-the-art methods. The experiments result in improvements of 67% in the Adjusted Rand Index, 81% in the Normalized Mutual Information, and 50% in the Clustering Accuracy over the full set of datasets.
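
A hedged sketch of the general workflow the paper builds on: an internal index (silhouette, here) selects the number of clusters without labels, and the external indexes (ARI, NMI) are then computed against the true labels. The dataset, the clustering algorithm, and the range of k are illustrative, not the paper's experimental setup.

```python
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, adjusted_rand_score, normalized_mutual_info_score

X, y_true = load_iris(return_X_y=True)

best_k, best_sil, best_labels = None, -1.0, None
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    sil = silhouette_score(X, labels)       # internal index: no labels needed
    if sil > best_sil:
        best_k, best_sil, best_labels = k, sil, labels

print("k chosen by silhouette:", best_k)
print("ARI:", round(adjusted_rand_score(y_true, best_labels), 3))          # external
print("NMI:", round(normalized_mutual_info_score(y_true, best_labels), 3))
```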

Book ChapterDOI
03 Dec 2019
TL;DR: This work proposes a secure architecture dedicated to an IoT-based drone in a LoRa context by deploying an Id-Based Signcryption method and temporary identities for the drone’s authentication process.
Abstract: The security of the Internet of Things (IoT) is now at the embryonic stage. In fact, unrelenting innovation has slowed down the implementation of security and made user privacy an easy target. Through this work, we propose a secure architecture dedicated to an IoT-based drone in a LoRa context. For this purpose, we focused on the drone’s authentication process by deploying an Id-Based Signcryption method and temporary identities.

Book ChapterDOI
03 Dec 2019
TL;DR: The area occupied by segmented blood vessels from fundus images is used to detect glaucoma using an improved U-net Convolutional Neural Network and the results demonstrate a more reliable, stable and efficient method.
Abstract: Several techniques have been employed to detect glaucoma from optic discs. Some techniques involve the use of the optic cup-to-disc ratio (CDR), while others use the neuro-retinal rim width of the optic disc. In this work, we use the area occupied by segmented blood vessels from fundus images to detect glaucoma. Blood vessel segmentation is done using an improved U-net Convolutional Neural Network (CNN). The area occupied by the blood vessels is then extracted and used to diagnose glaucoma. The technique is tested on the DR-HAGIS and HRF databases. We compare our result with a similar method, the ISNT-ratio, which involves the use of the Inferior, Superior, Nasal, and Temporal neuro-retinal rims. The ISNT-ratio is expressed as the ratio of the sum of blood vessels in the Inferior and Superior rims to the sum of blood vessels in the Nasal and Temporal rims. Our results demonstrate a more reliable, stable, and efficient method of detecting glaucoma from segmented blood vessels. Our results also show that segmented blood vessels from healthy fundus images cover more area than those from glaucomatous and diabetic fundus images.
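
A heavily simplified sketch of the two quantities mentioned above: the vessel-area fraction of a binary segmentation mask, and an ISNT-style ratio, (Inferior + Superior) / (Nasal + Temporal), obtained by splitting vessel pixels into four 90-degree sectors around the image centre. A real pipeline centres the sectors on the optic disc and distinguishes left and right eyes; the random mask below is a stand-in.

```python
import numpy as np

mask = np.random.rand(512, 512) > 0.9           # stand-in binary vessel mask
area_fraction = mask.mean()                     # vessel pixels / all pixels

cy, cx = np.array(mask.shape) / 2.0
ys, xs = np.nonzero(mask)
angles = np.degrees(np.arctan2(ys - cy, xs - cx)) % 360   # image y axis points down

superior = ((angles >= 225) & (angles < 315)).sum()   # top sector of the image
inferior = ((angles >= 45) & (angles < 135)).sum()    # bottom sector
nasal    = ((angles >= 315) | (angles < 45)).sum()    # one lateral sector
temporal = ((angles >= 135) & (angles < 225)).sum()   # the other lateral sector

isnt_ratio = (inferior + superior) / max(nasal + temporal, 1)
print(round(float(area_fraction), 4), round(isnt_ratio, 3))
```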

Book ChapterDOI
03 Dec 2019
TL;DR: This work uses two types of classification techniques, Support Vector Machines (SVM) and the k-Nearest Neighbor (k-NN), to compare their performance, and investigates the importance of recent advances in machine learning, including deep kernel learning.
Abstract: Speech emotion recognition has become an active research theme in speech processing and in applications based on human-machine interaction. In this work, our system is a two-stage approach, namely feature extraction and a classification engine. Firstly, two sets of features are investigated: the first extracts only 13 Mel-frequency Cepstral Coefficients (MFCC) from emotional speech samples, and the second applies feature fusion between three features, zero crossing rate (ZCR), Teager Energy Operator (TEO), and Harmonics-to-Noise Ratio (HNR), and the MFCC features. Secondly, we use two types of classification techniques, Support Vector Machines (SVM) and the k-Nearest Neighbor (k-NN), to compare their performance. In addition, we investigate the importance of recent advances in machine learning, including deep kernel learning. A large set of experiments is conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset for seven emotions. The results of our experiments show good accuracy compared with previous studies.
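
A hedged sketch of the feature-fusion idea: 13 MFCCs extracted with librosa are concatenated with the zero-crossing rate before classification with SVM and k-NN. TEO and HNR, which the paper also fuses, are omitted here because they need dedicated estimators; the audio path, sampling rate, and classifier settings are illustrative.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def fused_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # 13 values
    zcr = librosa.feature.zero_crossing_rate(y).mean()                # 1 value
    return np.concatenate([mfcc, [zcr]])

# X and y_labels would be built by calling fused_features on every SAVEE clip;
# random stand-ins are used here so the snippet runs without the dataset.
X = np.random.rand(60, 14)
y_labels = np.random.randint(0, 7, 60)      # SAVEE covers seven emotions

svm = SVC(kernel="rbf").fit(X, y_labels)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y_labels)
print(svm.score(X, y_labels), knn.score(X, y_labels))
```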

Book ChapterDOI
03 Dec 2019
TL;DR: A new direction to collect and use DJs to fit social requirements externalized and collected in living labs is shown, to develop the process of evidence-based innovation, i.e., the loop of living humans’ interaction to create dimensions of performance in businesses.
Abstract: Data Jackets are human-made metadata for each dataset, reflecting peoples’ subjective or potential interests. By visualizing the relevance among DJs, participants in the market of data think and talk about why and how they should combine the corresponding datasets. Even if the owners of data may hesitate to open their data to the public, they can present the DJs in the Innovators Marketplace on Data Jackets that is a platform for innovations. Here, participants communicate to find ideas to combine/use/reuse data or future collaborators. Furthermore, explicitly or implicitly required data can be searched by the use of tools developed on DJs, which enabled, for example, analogical inventions of data analysis methods. Thus, we realized a data-mediated birthplace of seeds in business and science. In this paper, we show a new direction to collect and use DJs to fit social requirements externalized and collected in living labs. The effect of living labs here is to enhance participants’ sensitivity to the contexts in the open society according to the author’s practices, and the use of DJs to these contexts means to develop the process of evidence-based innovation, i.e., the loop of living humans’ interaction to create dimensions of performance in businesses.

Book ChapterDOI
03 Dec 2019
TL;DR: The Intrusion Detection System (IDS) is the most widely used mechanism for intrusion detection, but with the growth of intrusion detection dataset sizes there is a new challenge: storing those large datasets in cloud infrastructure and analyzing their traffic using big data technology.
Abstract: The Intrusion Detection System (IDS) is the most widely used mechanism for intrusion detection. Traditional IDSs have been used to detect suspicious behaviors in network communication and hosts. However, with the growth of intrusion detection dataset sizes, we face a new challenge: storing those large datasets in cloud infrastructure and analyzing their traffic using big data technology. Furthermore, some cloud providers allow users to deploy and configure an IDS.

Book ChapterDOI
03 Dec 2019
TL;DR: This article extracts a set of acoustic features from DementiaBank conversations of subjects with and without Alzheimer’s disease and trains Machine Learning (ML) algorithms on them to test the detection accuracy.
Abstract: Automatic diagnosis and monitoring of Alzheimer’s Disease (AD) can have a significant impact on society as well as on the well-being of patients. It is known that Alzheimer’s disease (AD) influences the language abilities of affected people. Hence, the diagnosis of Alzheimer’s disease using speech-based features is gaining growing attention. The purpose of this article is to extract a set of acoustic features from DementiaBank conversations of subjects with and without Alzheimer’s disease. The extracted features are then used to train Machine Learning (ML) algorithms and to test the detection accuracy.

Book ChapterDOI
03 Dec 2019
TL;DR: The present study proposes a new technique based on the measure of Semantic Similarity (SS) between the titles of co-cited papers using word-based SS measures and finds the SS measures behave the same as human judgement for the lexical similarity and can be consequently used for the automatic assessment of similarity between co- cited papers.
Abstract: Co-citation analysis can be exploited as a bibliometric technique for mining information on the relationships between scientific papers. Proposed methods rely, however, on co-citation counting techniques that take little account of the semantic aspect. The present study proposes a new technique based on the measurement of Semantic Similarity (SS) between the titles of co-cited papers. Several computational measures rely on knowledge resources, such as the WordNet «is a» taxonomy, to quantify semantic similarity. Our proposal analyzes the SS between the titles of co-cited papers using word-based SS measures. Two major analytical experiments are performed: the first uses the benchmarks designed for testing word-based SS measures; the second exploits the DBLP (Digital Bibliography & Library Project) citation-network dataset. As a result, we found that the SS measures behave the same as human judgement for lexical similarity and can consequently be used for the automatic assessment of similarity between co-cited papers. The analysis of highly repeated co-citations demonstrates that the different SS measures display very similar behaviours, with slight differences due to the distribution of the provided SS values. Furthermore, we note a low percentage of similar referenced papers among the co-citations.
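
A hedged sketch of a word-based semantic-similarity measure over the WordNet "is a" taxonomy (Wu-Palmer), applied word by word to two co-cited titles. The two titles are invented examples, and a real pipeline would add tokenisation, stop-word removal, and a proper word-alignment strategy; the NLTK WordNet corpus must be downloaded beforehand.

```python
from itertools import product
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def word_similarity(w1, w2):
    pairs = product(wn.synsets(w1, pos=wn.NOUN), wn.synsets(w2, pos=wn.NOUN))
    scores = [s1.wup_similarity(s2) for s1, s2 in pairs]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

def title_similarity(title1, title2):
    words1, words2 = title1.lower().split(), title2.lower().split()
    # Average, over title1, of each word's best match in title2.
    return sum(max(word_similarity(a, b) for b in words2) for a in words1) / len(words1)

print(title_similarity("image segmentation networks", "picture partition models"))
```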

Book ChapterDOI
03 Dec 2019
TL;DR: In this article, a hybrid VNS/Tabu Search algorithm is used to determine the correct location of P centers in Puebla State, where 217 municipalities within the state had at least one reported case of childbirth deaths.
Abstract: The p-median problem is an excellent tool for solving facility location problems. There are many applications in which the p-median problem can be used, such as the location of fire stations, police stations, and distribution centers, among others. This work introduces the use of a hybrid VNS/Tabu Search algorithm to determine the correct location of P centers. Experimental results on the reference instances and the application to a real case indicate the potential of the proposed approach, which can produce good solutions with the use of VNS/Tabu Search. In this paper, we show the tests performed and the results for the real problem. The data correspond to the case of Puebla State. To start this research, we considered the 217 municipalities within the state with at least one reported case of childbirth deaths. Using this information, we analyze where to place a series of medical care units.
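
A hedged sketch of the p-median objective and a plain swap-based local search, not the paper's hybrid VNS/Tabu Search: choose p centres so that the total distance from every demand point to its nearest centre is minimised. The random coordinates stand in for the 217 municipalities, and p and the iteration budget are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((217, 2))                 # stand-in municipality coordinates
dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

def cost(centres):
    # Sum over demand points of the distance to the closest chosen centre.
    return dist[:, list(centres)].min(axis=1).sum()

def swap_local_search(p=5, iters=2000):
    centres = set(rng.choice(len(points), p, replace=False))
    best = cost(centres)
    for _ in range(iters):
        out = rng.choice(list(centres))
        inn = int(rng.choice([i for i in range(len(points)) if i not in centres]))
        candidate = (centres - {out}) | {inn}
        c = cost(candidate)
        if c < best:
            centres, best = candidate, c
    return centres, best

print(swap_local_search())
```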

Book ChapterDOI
03 Dec 2019
TL;DR: In this paper, feature extraction from ECG signals was performed using a combination of three new types of characteristics: MFCC, ZCR, and entropy, which made it possible to improve the efficiency of the identification system to reach a performance rate equal to 100% on both databases.
Abstract: Human biometric identification based on the ElectroCardioGram (ECG) is relatively new. This domain aims to recognize individuals, since the ECG has unique characteristics for each individual. These features are robust against forgery. In this study, feature extraction from ECG signals was performed using a combination of three new types of characteristics: MFCC, ZCR, and entropy. We proposed applying the classification methods K-Nearest Neighbors (KNN) and Support Vector Machines (SVM) for human biometric identification. For evaluation, we used two databases, namely the MIT-BIH Arrhythmia and Normal Sinus Rhythm databases obtained from PhysioNet. For the MIT-BIH database, we used 47 individuals, with each recording containing ECG data recorded for 15 s; in the Normal Sinus Rhythm database, we used 18 individuals, with each recording lasting 10 s. The analysis of the results obtained shows that combining all the proposed features improves the efficiency of our identification system, reaching a performance rate equal to 100% for both databases.

Book ChapterDOI
03 Dec 2019
TL;DR: This article analyzed the answers of questionnaires carried out at a public university from 2012 to 2018 to capture students' opinions automatically through a sentiment classifier, and evaluated classifiers such as SVM, Naive Bayes, Logistic Regression, and Neural Networks.
Abstract: School dropout is a major challenge that should be mitigated by universities. Early identification of students’ dissatisfaction allows detecting problems in institutions, being useful for the decision-making process. In this context, opinion mining algorithms can help in the identification of students’ opinions in institutional evaluation surveys. In this paper, we analyzed the answers of questionnaires carried out at a public university from 2012 to 2018 to capture students’ opinions automatically through a sentiment classifier. We evaluated classifiers such as SVM, Naive Bayes, Logistic Regression, and Neural Networks. The results of the best classifier indicate an accuracy of 87% for the classification task.
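
A hedged sketch of the classifier comparison: TF-IDF features computed from the survey answers feed SVM, Naive Bayes, and Logistic Regression, evaluated by cross-validation. The two example answers and their labels are invented; the real study uses the university's own survey corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

# Invented stand-in answers; the real corpus comes from the institutional surveys.
texts = ["great professors and well-equipped labs",
         "the library never has the books we need"] * 20
labels = [1, 0] * 20            # 1 = positive opinion, 0 = negative opinion

for name, clf in [("SVM", LinearSVC()),
                  ("Naive Bayes", MultinomialNB()),
                  ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, texts, labels, cv=5)
    print(f"{name}: {scores.mean():.2f}")
```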

Book ChapterDOI
03 Dec 2019
TL;DR: In this article, a vision system based on the YOLO algorithm was used to identify static objects that could be obstacles in the path of a mobile robot, using a Microsoft Kinect sensor.
Abstract: This work presents a vision system based on the YOLO algorithm to identify static objects that could be obstacles in the path of a mobile robot. In order to identify the objects and their distances, a Microsoft Kinect sensor was used. In addition, an Nvidia Jetson TX2 GPU was used to increase the performance of the image processing algorithm. Our experimental results indicate that the YOLO network detected all the predefined obstacles for which it had been trained with good reliability, and the calculation of the distance using the depth information returned by the Microsoft Kinect had an error below 3.64%.
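
A hedged sketch of the distance step: given a YOLO detection box and the Kinect depth frame (in millimetres), the obstacle distance is read around the box centre, with a median over a small patch to suppress missing depth pixels. The box, frame, and patch size are stand-ins, not the authors' exact procedure.

```python
import numpy as np

def obstacle_distance_m(depth_mm, box, patch=5):
    x1, y1, x2, y2 = box                        # detection box in pixel coordinates
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    region = depth_mm[cy - patch: cy + patch, cx - patch: cx + patch].astype(float)
    region = region[region > 0]                 # a value of 0 means "no depth reading"
    return float(np.median(region)) / 1000.0 if region.size else float("nan")

depth_mm = np.full((480, 640), 1820, dtype=np.uint16)        # stand-in Kinect frame
print(obstacle_distance_m(depth_mm, (300, 200, 380, 320)))   # -> 1.82 m
```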

Book ChapterDOI
03 Dec 2019
TL;DR: In this paper, a hardware accelerator is used to speed up holographic processing by taking advantage of its parallel architecture for real-time 3D holographic display.
Abstract: With the increasing popularity of holographic methods, 3D scenes, and augmented reality, it goes without saying that 3D holography will play a central role in real-time recording and display. This paper demonstrates a setup that shows and records a scene with a real-time 3D appearance. We speed up the holographic processing by using a hardware accelerator to take advantage of its parallel architecture. The results demonstrate the system’s ability to display holographic objects with four cameras running at the same time, with a difference of a fraction of a millisecond attributable to the camera clocks and the VGA update.

Book ChapterDOI
03 Dec 2019
TL;DR: SDN (Software Defined Networking) is a new network paradigm that facilitates network management by separating network control logic from switches and routers and is able to provide flexible QoS control and perform network resources allocation for multimedia and real-time applications.
Abstract: SDN (Software Defined Networking) is a new network paradigm that facilitates network management by separating the network control logic from switches and routers. Obviously, the separation of the network policy from the hardware implementation helps improve the quality of service (QoS) provided by the network. Therefore, SDN is able to provide flexible QoS control and perform network resource allocation for multimedia and real-time applications.

Book ChapterDOI
03 Dec 2019
TL;DR: This paper aims to develop an ontology for classifying change requests as either functional changes (FC) or technical changes (TC); technical changes are further classified into nine categories, including the ISO 25010 quality characteristics and project requirements and constraints.
Abstract: Requirements for software system projects are becoming increasingly exposed to a large number of change requests. Change requests captured in natural language are difficult to analyze and evaluate. This may lead to major problems, such as requirements creep and ambiguity. To provide an appropriate understanding of a change request in a systematic way, this paper aims to develop an ontology for classifying change requests as either functional changes (FC) or technical changes (TC). Technical changes are further classified into nine categories, including the ISO 25010 quality characteristics and project requirements and constraints. To establish a comprehensive representation of change requests, we collected users’ reviews from the PROMISE repository and classified them using the Protégé editor. The feasibility of the proposed approach is illustrated through examples taken from the PROMISE repository.

Book ChapterDOI
03 Dec 2019
TL;DR: The purpose of this research is to survey biomedical information retrieval systems in order to understand their strengths and weaknesses, and to propose an approach that addresses some of the gaps in existing systems and retrieval engines by giving insight into the benefit of image captioning in cross-modality retrieval.
Abstract: Recent works in deep learning using Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models have yielded state-of-the-art results on a variety of image processing tasks. Multimodal representation, and especially image captioning, is gaining popularity due to its primordial role in narrowing the heterogeneity gap among different modalities, which is very helpful in cross-modality analysis tasks. The vast amounts of medical images, as well as medical documents, need to be processed to discover hidden knowledge. The purpose of this research is to survey biomedical information retrieval systems in order to understand their strengths and weaknesses. We then propose an approach that tries to resolve some of these gaps and offers solutions to existing systems and retrieval engines by giving insight into the benefit of image captioning in cross-modality retrieval.

Book ChapterDOI
03 Dec 2019
TL;DR: The possibility of integrating Symbolic and Statistical AI is considered and Quantum Linguistics is discussed; the framework is applied to cognitive bias problems in the Kahneman-Tversky tradition, giving a novel account of them from the standpoints of Symbolic/Statistical/Integrated AI and elucidating the nature of machine biases in them.
Abstract: Statistical AI is cutting-edge technology in the present landscape of AI research whilst Symbolic AI is generally regarded as good old-fashioned AI. Even so, we contend that induction, i.e., learning from empirical data, cannot constitute a full-fledged form of intelligence on its own, and it is necessary to combine it with deduction, i.e., reasoning on theoretical grounds, in order to achieve the ultimate goal of Strong AI or Artificial General Intelligence. We therefore think of the possibility of integrating Symbolic and Statistical AI, and discuss Quantum Linguistics by Bob Coecke et al., which, arguably, may be seen as the categorical integration of Symbolic and Statistical AI, and as a paradigmatic case of Integrated AI in Natural Language Processing. And we apply it to cognitive bias problems in the Kahneman-Tversky tradition, giving a novel account of them from the standpoints of Symbolic/Statistical/Integrated AI, and thus elucidating the nature of machine biases in them.