
Showing papers in "International Journal of Advanced Computer Science and Applications in 2018"


Journal ArticleDOI
TL;DR: The aim of this paper is to elucidate the implications of quantum computing in present cryptography and to introduce the reader to basic post-quantum algorithms.
Abstract: The aim of this paper is to elucidate the implications of quantum computing in present cryptography and to introduce the reader to basic post-quantum algorithms. In particular the reader can delve into the following subjects: present cryptographic schemes (symmetric and asymmetric), differences between quantum and classical computing, challenges in quantum computing, quantum algorithms (Shor’s and Grover’s), public key encryption schemes affected, symmetric schemes affected, the impact on hash functions, and post-quantum cryptography. Specifically, the section on Post-Quantum Cryptography deals with different quantum key distribution methods and mathematical-based solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures and code-based cryptography.
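As a concrete illustration of the quantum key distribution method discussed in the post-quantum section, the basis-sifting step of BB84 can be sketched as a toy simulation (illustrative only: random basis choices, no eavesdropper, no error correction):

```python
import random

def bb84_sift(n_bits, seed=0):
    """Toy BB84 key sifting: Alice prepares qubits in random bases,
    Bob measures in random bases; they keep bits where the bases match."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]
    sifted = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:   # matching basis: Bob reads the bit correctly
            sifted.append(bit)
        # mismatched basis: measurement result is random, so the bit is discarded
    return sifted

key = bb84_sift(1000)
# On average, about half of the transmitted positions survive sifting.
```

In the real protocol, Alice and Bob would then compare a sample of the sifted key to estimate the error rate introduced by an eavesdropper.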

165 citations


Journal ArticleDOI
TL;DR: This paper aims to create a model of a deep auto-encoder and restricted Boltzmann machine that can reconstruct normal transactions to find anomalies from normal patterns, and uses Google's TensorFlow library to implement the AE and RBM, alongside the H2O deep learning framework.
Abstract: Frauds have no constant patterns; fraudsters always change their behavior, so we need to use unsupervised learning. Fraudsters learn about new technology that allows them to execute frauds through online transactions. Fraudsters assume the regular behavior of consumers, and fraud patterns change fast. Fraud detection systems therefore need to detect online transactions using unsupervised learning, because some fraudsters commit frauds once through online mediums and then switch to other techniques. This paper aims to 1) focus on fraud cases that cannot be detected based on previous history or supervised learning, and 2) create a model of a deep auto-encoder and restricted Boltzmann machine (RBM) that can reconstruct normal transactions to find anomalies from normal patterns. The proposed deep learning approach based on an auto-encoder (AE) is an unsupervised learning algorithm that applies backpropagation by setting the inputs equal to the outputs. The RBM has two layers: the input (visible) layer and the hidden layer. In this research, we use Google's TensorFlow library to implement the AE and RBM, alongside the H2O deep learning framework. The results are reported as mean squared error, root mean squared error, and area under the curve.
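The reconstruction-error idea behind the AE/RBM detector can be sketched in a few lines. Here the feature-wise mean of normal transactions stands in for a trained auto-encoder (an assumption for illustration): any transaction the "model" reconstructs poorly is flagged as a potential fraud.

```python
import statistics

def fit_normal_profile(transactions):
    """Stand-in for a trained auto-encoder: 'reconstruct' a transaction
    as the feature-wise mean of the normal training data."""
    n_features = len(transactions[0])
    return [statistics.mean(t[i] for t in transactions) for i in range(n_features)]

def reconstruction_error(profile, x):
    # Mean squared error between the input and its 'reconstruction'
    return sum((a - b) ** 2 for a, b in zip(x, profile)) / len(x)

# Toy normal transactions (two features each); values are illustrative
normal = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
profile = fit_normal_profile(normal)
# Threshold: worst reconstruction error observed on normal data
threshold = max(reconstruction_error(profile, t) for t in normal)

anomaly = [9.0, -3.0]
is_fraud = reconstruction_error(profile, anomaly) > threshold
```

A real auto-encoder replaces the mean with a learned nonlinear encoder/decoder, but the detection rule — thresholding the reconstruction error — is the same.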

162 citations


Journal ArticleDOI
TL;DR: In this approach, the use of 3-Dimensional Convolutional Neural Networks (3D-CNN) is investigated using multi-channel EEG data for emotion recognition, and the proposed method achieves recognition accuracies outperforming state-of-the-art methods.
Abstract: Emotion recognition is a crucial problem in Human-Computer Interaction (HCI). Various techniques have been applied to enhance the robustness of emotion recognition systems using electroencephalogram (EEG) signals, especially for the problem of spatiotemporal feature learning. In this paper, a novel EEG-based emotion recognition approach is proposed, in which the use of 3-Dimensional Convolutional Neural Networks (3D-CNN) is investigated on multi-channel EEG data. A data augmentation phase is developed to enhance the performance of the proposed 3D-CNN approach, and a 3D data representation is formulated from the multi-channel EEG signals, which is used as the data input for the proposed 3D-CNN model. Extensive experiments are conducted using the DEAP (Dataset of Emotion Analysis using EEG, Physiological and Video Signals) data. The proposed method achieves recognition accuracies of 87.44% and 88.49% for the valence and arousal classes respectively, outperforming state-of-the-art methods.

158 citations


Journal ArticleDOI
TL;DR: This is the first paper that attempts to provide a comprehensive IoT attack model based on a building-block reference model, proposing an asset-based attack surface that consists of four main components: 1) physical objects, 2) protocols covering the whole IoT stack, 3) data, and 4) software.
Abstract: Internet of Things (IoT) has not yet reached a distinctive definition. A generic understanding of IoT is that it offers numerous services in many domains, utilizing conventional internet infrastructure by enabling different communication patterns such as human-to-object, object-to-objects, and object-to-object. Integrating IoT objects into the standard Internet, however, has unlocked several security challenges, as most internet technologies and connectivity protocols have been specifically designed for unconstrained objects. Moreover, IoT objects have their own limitations in terms of computation power, memory and bandwidth. The IoT vision, therefore, has suffered from unprecedented attacks targeting not only individuals but also enterprises; examples of the consequences include loss of privacy, organized crime, mental suffering, and the possibility of jeopardizing human lives. Hence, providing a comprehensive classification of IoT attacks and their available countermeasures is an indispensable requirement. In this paper, we propose a novel four-layered IoT reference model based on a building-block strategy, on which we develop a comprehensive IoT attack model composed of four key phases. First, we propose an IoT asset-based attack surface, which consists of four main components: 1) physical objects, 2) protocols covering the whole IoT stack, 3) data, and 4) software. Second, we describe a set of IoT security goals. Third, we identify an IoT attack taxonomy for each asset. Finally, we show the relationship between each attack and its violated security goals, and identify a set of countermeasures to protect each asset as well. To the best of our knowledge, this is the first paper that attempts to provide a comprehensive IoT attack model based on a building-block reference model.

114 citations


Journal ArticleDOI
TL;DR: An architecture to monitor soil moisture, temperature and humidity on small farms is provided, aiming to decrease water consumption while increasing the productivity and precision of small agricultural farms; with this information, the end user can more efficiently schedule farm cultivation, harvesting, irrigation, and fertilization.
Abstract: The Internet of Things is one of the most popular subjects nowadays, where sensors and smart devices facilitate the provision of information and communication. In IoT, one of the main concepts is the wireless sensor network, in which data is collected from all the sensors in a network characterized by low power consumption and a wide communication range. In this study, an architecture to monitor soil moisture, temperature and humidity on small farms is provided. The main motivation for this study is to decrease water consumption while increasing the productivity and precision of small agricultural farms. This motivation is further propelled by the fact that agriculture is the backbone of some towns and most villages in most countries; furthermore, some countries depend on farming as their main source of income. Putting the above-mentioned factors into consideration, the farm is divided into regions; the proposed system monitors soil moisture, humidity and temperature in the respective regions using wireless sensor networks and the Internet of Things, and sends a report to the end user. The report contains, as part of the information, a 10-day weather forecast. We believe that with the above information, the end user (farmer) can more efficiently schedule farm cultivation, harvesting, irrigation, and fertilization.
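The irrigation decision implied by such monitoring reduces to a simple rule over the sensed and forecast values. A minimal sketch — the moisture threshold and rain cutoff below are illustrative assumptions, not values from the paper:

```python
def irrigation_needed(soil_moisture_pct, rain_forecast_mm,
                      moisture_min=30.0, rain_skip_mm=5.0):
    """Irrigate a region when its soil moisture falls below the crop
    threshold AND no significant rain is forecast for the coming period.
    (Both threshold values are hypothetical, for illustration only.)"""
    return soil_moisture_pct < moisture_min and rain_forecast_mm < rain_skip_mm

# Dry soil, dry forecast -> irrigate; rain coming -> hold off
decision_dry  = irrigation_needed(20.0, 0.0)
decision_rain = irrigation_needed(20.0, 12.0)
```

In the described system this rule would be evaluated per region, combining each region's sensor readings with the 10-day forecast in the report.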

93 citations


Journal ArticleDOI
TL;DR: Results show that both Random Forest and AdaBoost outperform all other techniques with almost the same accuracy (96%), and both the Multi-layer perceptron and the Support Vector Machine can be recommended as well, with 94% accuracy.
Abstract: Nowadays, customers have become more interested in the quality of service (QoS) that organizations can provide them. Services provided by different vendors are not highly differentiated, which increases competition between organizations to maintain and increase their QoS. Customer Relationship Management (CRM) systems are used to enable organizations to acquire new customers, establish a continuous relationship with them, and increase customer retention for more profitability. CRM systems use machine-learning models to analyze customers’ personal and behavioral data to give organizations a competitive advantage by increasing the customer retention rate. Those models can predict customers who are expected to churn and the reasons for churn. Predictions are used to design targeted marketing plans and service offers. This paper compares and analyzes the performance of different machine-learning techniques used for the churn prediction problem. Ten analytical techniques that belong to different categories of learning were chosen for this study: Discriminant Analysis, Decision Trees (CART), instance-based learning (k-nearest neighbors), Support Vector Machines, Logistic Regression, ensemble-based learning techniques (Random Forest, AdaBoost trees and Stochastic Gradient Boosting), Naive Bayes, and the Multi-layer perceptron. The models were applied to a telecommunications dataset containing 3,333 records. Results show that both Random Forest and AdaBoost outperform all other techniques with almost the same accuracy (96%). Both the Multi-layer perceptron and the Support Vector Machine can be recommended as well, with 94% accuracy. The Decision Tree achieved 90%, Naive Bayes 88%, and finally Logistic Regression and Linear Discriminant Analysis (LDA) achieved 86.7% accuracy.
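The accuracy measure used to rank the ten techniques is simply the fraction of correctly classified records. A minimal sketch, with the per-model figures taken from the results above:

```python
def accuracy(y_true, y_pred):
    """Fraction of records whose predicted churn label matches the true label."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Accuracies reported in the abstract, used to rank the techniques
results = {
    "Random Forest": 0.96, "AdaBoost": 0.96,
    "Multi-layer perceptron": 0.94, "SVM": 0.94,
    "Decision Tree": 0.90, "Naive Bayes": 0.88,
    "Logistic Regression": 0.867, "LDA": 0.867,
}
ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
```

Note that on an imbalanced churn dataset, accuracy alone can be misleading; the comparison would typically be supplemented with precision/recall on the churn class.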

81 citations


Journal ArticleDOI
TL;DR: This study examines the use of a deep convolutional neural network in the classification of rice plants according to health status, based on images of their leaves, via transfer learning from an AlexNet deep network.
Abstract: This study examines the use of a deep convolutional neural network in the classification of rice plants according to health status, based on images of their leaves. A three-class classifier was implemented, representing normal, unhealthy, and snail-infested plants, via transfer learning from an AlexNet deep network. The network achieved an accuracy of 91.23%, using stochastic gradient descent with a mini-batch size of thirty (30) and an initial learning rate of 0.0001. Six hundred (600) images of rice plants representing the classes were used in the training. The training and testing dataset images were captured from rice fields around the district and validated by technicians in the field of agriculture.
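The optimizer named above — stochastic gradient descent over mini-batches — can be sketched generically. The toy objective and learning rate below are illustrative, not the paper's AlexNet setup:

```python
import random

def sgd_minibatch(params, grad_fn, data, lr=1e-4, batch_size=30, epochs=5, seed=0):
    """Minimal mini-batch SGD loop: shuffle the data each epoch, slice it
    into batches, and step opposite the batch-averaged gradient."""
    rng = random.Random(seed)
    for _ in range(epochs):
        rng.shuffle(data)
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            grads = [grad_fn(params, x) for x in batch]
            avg = [sum(g[i] for g in grads) / len(grads) for i in range(len(params))]
            params = [p - lr * g for p, g in zip(params, avg)]
    return params

# Toy example: minimise sum((p - x)^2) over scalar data; the minimiser
# is the data mean, 2.0 (gradient of each term is 2*(p - x))
data = [[1.0], [2.0], [3.0]]
grad = lambda p, x: [2 * (p[0] - x[0])]
w = sgd_minibatch([0.0], grad, data, lr=0.1, batch_size=30, epochs=200)
```

In the transfer-learning setting, `params` would be the weights of the replaced final layers and `grad_fn` the backpropagated loss gradient.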

73 citations


Journal ArticleDOI
TL;DR: This paper presents a deep learning approach based on a Convolutional Neural Network (CNN) model for multi-class breast cancer classification, which aims to classify breast tumors not just as benign or malignant but also to predict the tumor subclass, such as fibroadenoma or lobular carcinoma.
Abstract: Breast cancer continues to be among the leading causes of death for women, and much effort has been expended in the form of screening programs for prevention. Given the exponential growth in the number of mammograms collected by these programs, computer-assisted diagnosis has become a necessity. Computer-assisted detection techniques developed to date to improve diagnosis without multiple systematic readings have not resulted in a significant improvement in performance measures. In this context, the use of automatic image processing techniques resulting from deep learning represents a promising avenue for assisting in the diagnosis of breast cancer. In this paper, we present a deep learning approach based on a Convolutional Neural Network (CNN) model for multi-class breast cancer classification. The proposed approach aims to classify breast tumors not just as benign or malignant but also to predict the tumor subclass, such as fibroadenoma or lobular carcinoma. Experimental results on histopathological images from the BreakHis dataset show that the DenseNet CNN model achieved high performance, with 95.4% accuracy on the multi-class breast cancer classification task, when compared with state-of-the-art models.

72 citations


Journal ArticleDOI
TL;DR: This paper presents a software bug prediction model based on machine learning (ML) algorithms that shows that the ML approach has a better performance than other approaches.
Abstract: Software Bug Prediction (SBP) is an important issue in software development and maintenance processes, as it bears on the overall success of the software. Predicting software faults in an early phase improves software quality, reliability and efficiency, and reduces software cost. However, developing a robust bug prediction model is a challenging task, and many techniques have been proposed in the literature. This paper presents a software bug prediction model based on machine learning (ML) algorithms. Three supervised ML algorithms have been used to predict future software faults based on historical data. These classifiers are Naive Bayes (NB), Decision Tree (DT) and Artificial Neural Networks (ANNs). The evaluation process showed that the ML algorithms can be used effectively, with a high accuracy rate. Furthermore, a comparison measure is applied to compare the proposed prediction model with other approaches. The collected results showed that the ML approach has better performance.

71 citations


Journal ArticleDOI
TL;DR: There is a great need to continuously train and educate learning organizations’ CEOs about the importance of KM through group work and training programs.
Abstract: In today’s business, knowledge is considered a core asset of any organization, arguably as important as technological capital. It is part of human abilities and thus of human capital. Knowledge management (KM) is becoming increasingly fashionable, so many organizations are trying to apply it in order to enhance their organizational performance. In this paper, the literature is investigated critically in order to show the real influence of knowledge management and some of its practices on organizational performance. It has been found that KM, including knowledge process and infrastructure capabilities, positively affects, to a great extent, all aspects of organizational performance, directly or indirectly. In the same vein, there is a great need to continuously train and educate learning organizations’ CEOs about the importance of KM through group work and training programs.

71 citations


Journal ArticleDOI
TL;DR: Web data mining has become an easy and important platform for retrieving useful information, but with the increasing growth of data over the internet, discovering informative knowledge and patterns is getting difficult and time-consuming.
Abstract: Web data mining has become an easy and important platform for retrieving useful information. Users prefer the World Wide Web for uploading and downloading data. With the increasing growth of data over the internet, discovering informative knowledge and patterns is becoming difficult and time-consuming. Digging knowledgeable, user-queried information out of unstructured and inconsistent data over the web is not an easy task. Different mining techniques are used to fetch relevant information from the web (hyperlinks, contents, web usage logs). Web data mining is a sub-discipline of data mining which mainly deals with the web. It is divided into three types: web structure, web content and web usage mining. These types use different techniques, tools, approaches and algorithms to discover information from the huge bulk of data over the web.

Journal ArticleDOI
TL;DR: The current threats on Cyber-Physical Systems are investigated and a classification and matrix for these threats are proposed, and a simple statistical analysis of the collected data is conducted using a quantitative approach.
Abstract: Cyber-Physical Systems refer to systems in which computers, communication channels and physical devices interact to solve a real-world problem. In the move towards the Industry 4.0 revolution, Cyber-Physical Systems have become one of the main targets of hackers, and any damage to them can lead to high losses for a nation. According to reliable sources, several reported cases have involved security breaches of Cyber-Physical Systems. The fundamental and theoretical concepts of security in the digital world have been discussed worldwide; yet security cases regarding cyber-physical systems remain underexplored. In addition, only limited tools have been introduced to overcome security problems in Cyber-Physical Systems. To improve understanding and introduce more security solutions for cyber-physical systems, study of this matter is in high demand. In this paper, we investigate the current threats to Cyber-Physical Systems, propose a classification and matrix for these threats, and conduct a simple statistical analysis of the collected data using a quantitative approach. We confirm four components (attack type, impact, intention and incident category) as the main contributors to the threat taxonomy of Cyber-Physical Systems.

Journal ArticleDOI
TL;DR: An innovative approach, labeled Enhanced Elite CNN Model Propagation (Enhanced E-CNN-MP), is proposed to automatically learn the optimal structure of a CNN based on Genetic Algorithms (GA), which are well-known for non-deterministic problem resolution.
Abstract: In machine learning for computer-vision-based applications, the Convolutional Neural Network (CNN) is the most widely used technique for image classification. Despite the efficiency of these deep neural networks, choosing their optimal architecture for a given task remains an open problem. In fact, CNN performance depends on many hyper-parameters, namely the CNN depth, the number of convolutional layers, the number of filters and their respective sizes. Many CNN structures have been manually designed by researchers and then evaluated to verify their efficiency. In this paper, our contribution is to propose an innovative approach, labeled Enhanced Elite CNN Model Propagation (Enhanced E-CNN-MP), to automatically learn the optimal structure of a CNN. To traverse the large search space of candidate solutions, our approach is based on Genetic Algorithms (GA); these meta-heuristic algorithms are well-known for non-deterministic problem resolution. Simulations demonstrate the ability of the designed approach to compute optimal CNN hyper-parameters for a given classification task. The classification accuracy of the CNN designed with the Enhanced E-CNN-MP method exceeds that of public CNNs, even with the use of the Transfer Learning technique. Our contribution advances the current state by offering scientists, regardless of their field of research, the ability to design optimal CNNs for any particular classification problem.
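The GA loop underlying such an approach can be sketched with a mock fitness function standing in for actually training and validating each candidate CNN; the hyper-parameter space and the scoring rule below are illustrative assumptions, not the paper's method:

```python
import random

def genetic_search(fitness, space, pop_size=8, generations=20, seed=1):
    """Toy elitist GA over CNN hyper-parameters: selection keeps the best
    half, crossover mixes two elite parents gene-by-gene, and mutation
    occasionally resamples one gene."""
    rng = random.Random(seed)
    sample = lambda: {k: rng.choice(v) for k, v in space.items()}
    pop = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                          # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = {k: rng.choice((a[k], b[k])) for k in space}  # crossover
            if rng.random() < 0.2:                            # mutation
                k = rng.choice(list(space))
                child[k] = rng.choice(space[k])
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

space = {"depth": [2, 4, 6, 8], "filters": [16, 32, 64], "kernel": [3, 5, 7]}
# Mock fitness rewarding deeper nets with more filters (illustrative only;
# in practice this would be validation accuracy of a trained CNN)
mock_fitness = lambda c: c["depth"] * 10 + c["filters"] - c["kernel"]
best = genetic_search(mock_fitness, space)
```

The expensive part in the real approach is the fitness call, which requires training a CNN per candidate; the elitist selection ensures the best configuration found so far is never lost.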

Journal ArticleDOI
TL;DR: Clinical diagnoses of coronary heart disease were reliably and accurately derived from the developed DNN classification and prediction models, which can be used to aid healthcare professionals and patients throughout the world to advance both public health and global health, especially in developing countries and resource-limited areas with fewer cardiac specialists available.
Abstract: According to the World Health Organization, cardiovascular disease (CVD) is the top cause of death worldwide. In 2015, over 30% of global deaths were due to CVD, leading to over 17 million deaths, a global health burden. Of those deaths, over 7 million were caused by heart disease, and greater than 75% of deaths due to CVD occurred in developing countries. In the United States alone, 25% of deaths are attributed to heart disease, which kills over 630,000 Americans annually. Among heart disease conditions, coronary heart disease is the most common, causing over 360,000 American deaths due to heart attacks in 2015. Thus, coronary heart disease is a public health issue. In this research paper, an enhanced deep neural network (DNN) learning model was developed to aid patients and healthcare professionals and to increase the accuracy and reliability of heart disease diagnosis and prognosis in patients. The developed DNN learning model is based on a deeper multilayer perceptron architecture with regularization and dropout. It includes a classification model based on training data and a prediction model for diagnosing new patient cases, using a data set of 303 clinical instances from patients diagnosed with coronary heart disease at the Cleveland Clinic Foundation. The testing results showed that the DNN classification and prediction model achieved the following results: diagnostic accuracy of 83.67%, sensitivity of 93.51%, specificity of 72.86%, precision of 79.12%, F-score of 0.8571, area under the ROC curve of 0.8922, Kolmogorov-Smirnov (K-S) test of 66.62%, diagnostic odds ratio (DOR) of 38.65, and a 95% confidence interval for the DOR test of [38.65, 110.28]. Therefore, clinical diagnoses of coronary heart disease were reliably and accurately derived from the developed DNN classification and prediction models.
Thus, the models can be used to aid healthcare professionals and patients throughout the world to advance both public health and global health, especially in developing countries and resource-limited areas with fewer cardiac specialists available.
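All of the reported diagnostic metrics derive from the four confusion-matrix counts. The counts below are hypothetical, chosen for illustration so that the derived values land near the reported figures:

```python
def binary_metrics(tp, fp, tn, fn):
    """Diagnostic metrics from confusion-matrix counts:
    tp/fn = diseased patients correctly/incorrectly classified,
    tn/fp = healthy patients correctly/incorrectly classified."""
    sensitivity = tp / (tp + fn)          # recall on diseased patients
    specificity = tn / (tn + fp)          # recall on healthy patients
    precision   = tp / (tp + fp)
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    f_score     = 2 * precision * sensitivity / (precision + sensitivity)
    # Diagnostic odds ratio: odds of a positive test in the diseased
    # group divided by the odds of a positive test in the healthy group
    dor = (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy,
            "f_score": f_score, "dor": dor}

# Hypothetical counts (not taken from the paper's data)
m = binary_metrics(tp=72, fp=19, tn=51, fn=5)
```

This makes the trade-off visible: the high sensitivity (few missed cases) comes at the cost of moderate specificity (more false alarms), which is the usual preference in screening.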

Journal ArticleDOI
TL;DR: This study proposes a technique to tune SVM performance by using the grid search method for sentiment analysis; the performance of the proposed technique is evaluated using three information retrieval metrics: precision, recall and f-measure.
Abstract: Exponential growth in mobile technology and mini computing devices has led to a massive increase in social media users, who are continuously posting their views and comments about the products and services they use. These views and comments can be extremely beneficial for companies interested in knowing public opinion regarding their products or services. Such public opinion can otherwise be obtained via questionnaires and surveys, which is no doubt a difficult and complex task. So the valuable information in the form of comments and posts on micro-blogging sites can be used by companies to eliminate flaws and to improve products or services according to customer needs. However, manually extracting a general opinion from a staggering number of users’ comments is not feasible. A solution is to use an automatic method for sentiment mining. The Support Vector Machine (SVM) is one of the most widely used classification techniques for polarity detection in textual data. This study proposes a technique to tune SVM performance by using the grid search method for sentiment analysis. In this paper, three datasets are used for the experiments, and the performance of the proposed technique is evaluated using three information retrieval metrics: precision, recall and f-measure.
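Grid search itself is just an exhaustive sweep over parameter combinations. A minimal sketch, with a mock scoring function standing in for cross-validated SVM training (the parameter names `C` and `gamma` are the usual SVM hyper-parameters; the values and scorer are illustrative):

```python
import itertools

def grid_search(evaluate, param_grid):
    """Exhaustive grid search: evaluate every parameter combination
    and keep the one with the highest score."""
    best_score, best_params = float("-inf"), None
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)        # stand-in for cross-validated training
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1]}
# Mock scorer peaking at C=10, gamma=0.01 (illustrative only)
mock_eval = lambda p: -abs(p["C"] - 10) - 100 * abs(p["gamma"] - 0.01)
params, score = grid_search(mock_eval, grid)
```

The cost grows multiplicatively with each added parameter axis, which is why grids are usually kept coarse and then refined around the best cell.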

Journal ArticleDOI
TL;DR: A practical approach to acquiring temperature, humidity and soil moisture data for plants is presented, and an application is developed for forecasting maximum and minimum temperatures 10 days ahead using a type of recurrent neural network.
Abstract: Nowadays, the Internet of Things (IoT) is receiving great attention due to its potential strength and ability to be integrated into any complex system. The IoT provides data acquired from the environment to the Internet through service providers, which helps users view the data numerically or as plots. In addition, it allows objects located far away to be sensed and controlled remotely through embedded devices, which is important in the agriculture domain. Developing such a system for the IoT is a very complex task due to the diverse variety of devices, link-layer technologies, and services. This paper proposes a practical approach to acquiring temperature, humidity and soil moisture data for plants. To accomplish this, we developed a prototype device and an Android application which acquire physical data and send it to the cloud. Moreover, in the subsequent part of the current research work, we have focused on a temperature forecasting application. Meteorological parameters have a profound influence on crop growth, development and agricultural yields. In response to this fact, an application is developed for forecasting maximum and minimum temperatures 10 days ahead using a type of recurrent neural network.

Journal ArticleDOI
TL;DR: An analysis of the performance of filter feature selection algorithms and classification algorithms on two different student datasets shows a 10% difference in prediction accuracy between the results of datasets with different numbers of features.
Abstract: The main aim of all educational organizations is to improve the quality of education and elevate the academic performance of students. Educational Data Mining (EDM) is a growing research field which helps academic institutions improve the performance of their students. Academic institutions are most often judged by the grades their students achieve in examinations, and EDM offers different practices to predict the academic performance of students. In EDM, Feature Selection (FS) plays a vital role in improving the quality of prediction models for educational datasets. FS algorithms eliminate unrelated data from the educational repositories and hence increase the accuracy of the classifiers used in different EDM practices to support decision making in educational settings. A good-quality educational dataset can produce better results, and hence decisions based on such a dataset can increase the quality of education by predicting the performance of students. In light of this, it is necessary to choose a feature selection algorithm carefully. This paper presents an analysis of the performance of filter feature selection algorithms and classification algorithms on two different student datasets. The results obtained from different FS algorithms and classifiers on two student datasets with different numbers of features will also help researchers find the best combinations of filter feature selection algorithms and classifiers. It is necessary to highlight the relevance of feature selection for student performance prediction, as constructive educational strategies can be derived from the relevant set of features. The results of our study show a 10% difference in prediction accuracy between the results of datasets with different numbers of features.
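A filter FS method scores each feature independently of any classifier. A minimal sketch using absolute Pearson correlation with the target as the filter score (one common choice among many; the toy data is illustrative):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def filter_select(rows, target, k):
    """Filter FS: score each feature column by |correlation with the target|
    and keep the indices of the top-k features."""
    n_features = len(rows[0])
    scores = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        scores.append((abs(pearson(col, target)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Toy data: feature 0 tracks the target perfectly, feature 1 is noise
rows = [[1, 5], [2, 1], [3, 8], [4, 2]]
target = [1, 2, 3, 4]
top = filter_select(rows, target, k=1)
```

Because the score ignores the downstream classifier, filter methods are fast, which is why studies like this one can sweep several of them against several classifiers.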

Journal ArticleDOI
TL;DR: An RFID-based Attendance Management System (AMS) and an information service system for an academic domain are proposed, using RFID technology in addition to a programmable logic circuit (such as an Arduino) and a web-based application.
Abstract: Recently, student attendance has been considered one of the crucial elements reflecting academic achievement and the performance contributed to any university, compared with traditional methods that are time-consuming and inefficient. Diverse automatic identification technologies have come into vogue, such as Radio Frequency Identification (RFID). Extensive research and several applications have been produced to take maximum advantage of this technology, and they bring about some concerns. RFID is a wireless technology used for identifying and tracking an object via radio waves, transferring data from an electronic tag, called an RFID tag or label, to an RFID reader. The current study focuses on proposing an RFID-based Attendance Management System (AMS) and an information service system for an academic domain, using RFID technology in addition to a programmable logic circuit (such as an Arduino) and a web-based application. The proposed system aims to manage the recording of student attendance and provides the capability of tracking student absentees as well. Supporting information services include student grades, the daily timetable, lecture times and classroom numbers, and other student-related instructions provided by faculty department staff. Based on the results, the proposed attendance and information system is time-effective, reduces documentation effort, and has negligible power consumption. Besides, previously proposed RFID-based student attendance systems are analyzed and criticized with respect to system functionality and main findings, and directions for future research are identified.
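The attendance-recording core of such a system reduces to mapping scanned tag IDs to registered students and deriving the absentee list. A minimal sketch (class, tag and student names are hypothetical, not from the paper; the microcontroller and web layers are omitted):

```python
import datetime

class AttendanceLog:
    """Minimal sketch of the RFID attendance flow: each tag scan received
    from the reader is recorded against the student registered to that tag."""
    def __init__(self, roster):
        self.roster = roster          # tag id -> student name
        self.records = []             # (timestamp, student) tuples

    def scan(self, tag_id, now=None):
        student = self.roster.get(tag_id)
        if student is None:
            return False              # unknown tag: reject the scan
        self.records.append((now or datetime.datetime.now(), student))
        return True

    def absentees(self):
        present = {s for _, s in self.records}
        return sorted(set(self.roster.values()) - present)

roster = {"TAG-01": "Alice", "TAG-02": "Bob"}
log = AttendanceLog(roster)
log.scan("TAG-01")
```

In the described system, the Arduino-attached reader would supply the `tag_id` values and the web application would present `records` and `absentees()` to staff.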

Journal ArticleDOI
TL;DR: The aim of this research is to provide a comprehensive literature review of face recognition along with its applications; some of the major findings are given in the conclusion.
Abstract: With the rapid growth in multimedia content, face recognition has received much attention, especially in the past few years. A face consists of distinct features for detection; therefore, it remains one of the most challenging research areas for scholars in the fields of computer vision and image processing. In this survey paper, we have tried to address the most demanding facial conditions, such as pose variance, aging, illumination and partial occlusion, which are considered indispensable factors in a face recognition system operating on facial images. This paper also studies state-of-the-art face detection techniques and approaches, viz. Eigenfaces, Artificial Neural Networks (ANN), Support Vector Machines (SVM), Principal Component Analysis (PCA), Independent Component Analysis (ICA), Gabor Wavelets, Elastic Bunch Graph Matching, 3D Morphable Models and Hidden Markov Models. In addition to the aforementioned works, we mention the different face databases used for testing, which include AT&T (ORL), AR, FERET, LFW, YTF, and Yale, for results analysis. The aim of this research is to provide a comprehensive literature review of face recognition along with its applications, and after an in-depth discussion, some of the major findings are given in the conclusion.

Journal ArticleDOI
TL;DR: In this paper, the authors use the onboard sensors of a smartphone to detect vehicular accidents, report them to the nearest available emergency responder, and provide real-time location tracking for responders and emergency victims, which will drastically increase the chances of survival for emergency victims and also help save emergency services' time and resources.
Abstract: A large number of deaths are caused by traffic accidents worldwide. The global road safety crisis can be seen by observing the significant number of deaths and injuries caused by road traffic accidents. In many situations the family members or emergency services are not informed in time. This results in delayed emergency service response, which can lead to an individual’s death or severe injury. The purpose of this work is to reduce the response time of emergency services in situations like traffic accidents or other emergencies such as fire, theft/robberies and medical emergencies. Utilizing the onboard sensors of a smartphone to detect vehicular accidents, report them to the nearest available emergency responder, and provide real-time location tracking for responders and victims will drastically increase the chances of survival for emergency victims, and also help save emergency services’ time and resources.
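A simple accident-detection heuristic over smartphone accelerometer readings is a G-force threshold test on the acceleration magnitude. The threshold below is an illustrative assumption, not the paper's calibrated detector:

```python
def crash_detected(accel_g, threshold_g=4.0):
    """Flag a possible crash when the magnitude of the 3-axis accelerometer
    reading (in units of g) exceeds a threshold. Real detectors combine
    this with speed, sound and duration cues to cut false positives."""
    magnitude = sum(a * a for a in accel_g) ** 0.5
    return magnitude > threshold_g

# Hard impact vs. normal driving vibration (axis values in g, illustrative)
impact  = crash_detected([5.0, 0.0, 0.0])
driving = crash_detected([1.0, 1.0, 0.5])
```

On detection, the app described above would attach the phone's GPS fix and dispatch the report to the nearest responder.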

Journal ArticleDOI
TL;DR: This study proposes the fixed-point (16-bit) implementation of CNN-based object detection model: Tiny-Yolo-v2 on Cyclone V PCIe Development Kit FPGA board using High-Level-Synthesis (HLS) tool: OpenCL and achieves a peak performance of 21 GOPs under 100 MHz working frequency.
Abstract: The deep Convolutional Neural Network (CNN) algorithm has recently gained popularity in many applications such as image classification, video analytics and object detection. Being compute-intensive and memory-expensive, CNN-based algorithms are hard to implement on embedded devices. Although recent studies have explored hardware implementations of CNN-based object classification models such as AlexNet and VGG, implementations of CNN-based object detection models on Field Programmable Gate Arrays (FPGA) remain rare. Consequently, this study proposes a fixed-point (16-bit) implementation of a CNN-based object detection model, Tiny-Yolo-v2, on a Cyclone V PCIe Development Kit FPGA board using the High-Level Synthesis (HLS) tool OpenCL. Considering FPGA resource constraints in terms of computational resources, memory bandwidth, and on-chip memory, a data pre-processing approach is proposed to merge batch normalization into the convolution layer. To the best of our knowledge, this is the first implementation of the Tiny-Yolo-v2 object detection algorithm on an FPGA using the Intel FPGA Software Development Kit (SDK) for OpenCL. The proposed implementation achieves a peak performance of 21 GOPs at a 100 MHz working frequency.
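The batch-normalization folding mentioned above is a standard algebraic rewrite: each output channel's convolution weights are scaled by gamma/sqrt(var + eps) and the bias is adjusted accordingly, so inference needs only one fused layer. A minimal numpy sketch, with a toy matrix product standing in for the real convolution:

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer (gamma, beta, running mean/var) into the
    preceding convolution's weights and bias, per output channel."""
    scale = gamma / np.sqrt(var + eps)
    w_folded = w * scale[:, None]          # scale each output channel's weights
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded

rng = np.random.default_rng(1)
w = rng.standard_normal((2, 3))  # 2 output channels, 3 weights each (toy conv)
b = rng.standard_normal(2)
gamma, beta = np.array([1.5, 0.5]), np.array([0.1, -0.2])
mean, var = np.array([0.3, -0.1]), np.array([0.8, 1.2])

x = rng.standard_normal(3)
conv = w @ x + b                                          # conv output per channel
bn = gamma * (conv - mean) / np.sqrt(var + 1e-5) + beta   # then batch norm
wf, bf = fold_batchnorm(w, b, gamma, beta, mean, var)
folded = wf @ x + bf                                      # single fused layer
print(np.allclose(bn, folded))  # True
```

The same per-channel scale-and-shift applies to real 4-D convolution kernels; folding removes the separate normalization pass, which is exactly what saves bandwidth and on-chip memory on an FPGA.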

Journal ArticleDOI
TL;DR: Fuzzification is shown to be a capable mathematical approach for modelling traffic and transportation processes and an analysis of the results achieved using Mamdani Fuzzy Inference System to model complex traffic processes is presented.
Abstract: More than half of the world’s population is estimated to live in cities (UN forecasts, 2014), so cities are vital. Cities face complex challenges: for smart cities, the outdated traditional approaches to transportation planning, environmental contamination, financial management and security monitoring are no longer adequate. The developing smart-city framework requires sound infrastructure and the adoption of current technology. Modern cities face pressures associated with urbanization and globalization to improve the quality of life of their citizens. A framework model is proposed that enables the integration of cloud data, social network (SN) services and smart sensors in the context of smart cities. This service-oriented framework enables the retrieval and analysis of big data sets stemming from social networking sites and from integrated smart sensors collecting data streams for smart cities. Since the smart city is a broad concept, this article focuses on the transportation sector. Fuzzification is shown to be a capable mathematical approach for modelling traffic and transportation processes, and a detailed analysis of fuzzy logic systems is developed to solve various traffic and transportation problems. This paper presents an analysis of the results achieved using a Mamdani Fuzzy Inference System to model complex traffic processes; the results are verified using MATLAB simulation.
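A Mamdani fuzzy inference system of the kind used above can be sketched without a toolbox: fuzzify the input, clip each rule's output set (min implication), aggregate with max, and defuzzify by centroid. The rule base, membership shapes and universes below are invented for illustration, not taken from the paper:

```python
import numpy as np

def falling(x, a, b):
    """Membership that is 1 below a and falls linearly to 0 at b."""
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def rising(x, a, b):
    """Membership that is 0 below a and rises linearly to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def green_time(density):
    """Toy Mamdani controller: traffic density (0-100 %) -> green time (s)."""
    out = np.linspace(0.0, 60.0, 601)          # output universe: 0-60 seconds
    # Rule 1: IF density is LOW  THEN green time is SHORT
    # Rule 2: IF density is HIGH THEN green time is LONG
    short = np.minimum(falling(density, 0, 60), falling(out, 0, 30))
    long_ = np.minimum(rising(density, 40, 100), rising(out, 30, 60))
    agg = np.maximum(short, long_)             # max aggregation of clipped sets
    return float((agg * out).sum() / agg.sum())  # centroid defuzzification

print(green_time(20))   # light traffic -> short green phase
print(green_time(90))   # heavy traffic -> long green phase
```

MATLAB's Fuzzy Logic Toolbox performs the same four steps; real traffic controllers would use several inputs (queue length, waiting time) and a larger rule base.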

Journal ArticleDOI
TL;DR: This paper aims to automatically recognize Arabic Sign Language (ArSL) alphabets using an image-based methodology and shows that Histograms of Oriented Gradients (HOG) descriptor outperforms the other considered descriptors.
Abstract: Throughout history, humans have used many ways of communicating, such as gesturing, sounds, drawing, writing, and speaking. However, deaf and speech-impaired people cannot use speech to communicate with others, which may give them a sense of isolation within their societies. For those individuals, sign language is the principal way to communicate, yet most hearing people do not know sign language. In this paper, we aim to automatically recognize Arabic Sign Language (ArSL) alphabets using an image-based methodology. More specifically, various visual descriptors are investigated to build an accurate ArSL alphabet recognizer. The extracted visual descriptors are conveyed to a One-Versus-All Support Vector Machine (SVM). The analysis of the results shows that the Histogram of Oriented Gradients (HOG) descriptor outperforms the other considered descriptors. Thus, the ArSL gesture models learned by the One-Versus-All SVM using HOG descriptors are deployed in the proposed system.
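The core of a HOG descriptor is a per-cell histogram of gradient orientations weighted by gradient magnitude. A simplified sketch (no block normalisation, and with invented image and cell sizes) conveys the idea:

```python
import numpy as np

def hog_like(img, n_bins=9, cell=8):
    """Simplified HOG-style descriptor: per-cell histograms of gradient
    orientations, weighted by gradient magnitude (no block normalisation)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)               # L2-normalised vector

img = np.zeros((16, 16))
img[:, 8:] = 1.0                 # a vertical edge -> horizontal gradients
desc = hog_like(img)
print(desc.shape)  # (36,): 2x2 cells, 9 orientation bins each
```

In a full pipeline like the one described above, such descriptors from hand-gesture images would be fed to a One-Versus-All linear SVM, one binary classifier per ArSL letter.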

Journal ArticleDOI
TL;DR: A database of moderate size is designed, which encompasses a total of 4488 images, stemming from 102 distinguishing samples for each of the 44 letters in Pashto, and the recognition framework extracts zoning features followed by K-Nearest Neighbour (KNN) and Neural Network (NN) for classifying individual letters.
Abstract: This paper presents an intelligent recognition system for handwritten Pashto letters. Handwritten character recognition is challenging due to variations in shape and style; in addition, these characters naturally vary among individuals. Identification becomes even more daunting due to the lack of standard datasets of inscribed Pashto letters. In this work, we have designed a database of moderate size, which encompasses a total of 4488 images, stemming from 102 distinct samples for each of the 44 letters in Pashto. Furthermore, the recognition framework extracts zoning features, followed by K-Nearest Neighbour (KNN) and Neural Network (NN) classifiers for individual letters. Based on the evaluation, the proposed system achieves an overall classification accuracy of approximately 70.05% using KNN, and an accuracy of 72% through NN at the cost of increased computation time.
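Zoning features are straightforward: the character image is divided into a grid of zones and the ink density of each zone becomes one feature, which a KNN classifier then compares by distance. A minimal sketch with invented toy "letters" (the grid size and data are for illustration only):

```python
import numpy as np

def zoning_features(img, grid=4):
    """Zoning: split a binary character image into grid x grid zones and use
    the ink density (mean pixel value) of each zone as a feature."""
    h, w = img.shape
    zh, zw = h // grid, w // grid
    return np.array([img[r * zh:(r + 1) * zh, c * zw:(c + 1) * zw].mean()
                     for r in range(grid) for c in range(grid)])

def knn_predict(x, train_x, train_y, k=1):
    """Plain k-nearest-neighbour classification with Euclidean distance."""
    dists = np.linalg.norm(train_x - x, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

rng = np.random.default_rng(0)
# Two toy "letter" classes: ink in the top half vs ink in the bottom half.
letters = [np.zeros((16, 16)) for _ in range(4)]
for i, L in enumerate(letters):
    half = slice(0, 8) if i % 2 == 0 else slice(8, 16)
    L[half, :] = (rng.random((8, 16)) > 0.3).astype(float)
X = np.stack([zoning_features(L) for L in letters])
y = np.array([0, 1, 0, 1])
print(knn_predict(X[0], X[1:], y[1:], k=1))  # 0 (matches the other top-half letter)
```

The paper's system works the same way in spirit, but over 44 Pashto letter classes and real scanned images, with an NN classifier as the alternative back end.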

Journal ArticleDOI
TL;DR: This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons.
Abstract: The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, organizations must exploit the advantages and opportunities it provides. With the advent of Web 2.0, and considering the scalability and unbounded reach it offers, it is detrimental for an organization not to adopt the new techniques in the competitive stakes that this emerging virtual world has set. Transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. This type of analysis enables organizations to assess what consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study presents a critical analysis of the literature from 2012 to 2017 on sentiment analysis using the support vector machine (SVM), one of the most widely used supervised machine learning techniques for text classification. This systematic review will serve scholars and researchers in analyzing the latest work on sentiment analysis with SVM, and will provide them a baseline for future trends and comparisons.

Journal ArticleDOI
TL;DR: A genetic algorithm is proposed as the alternative solution because it can find near-optimal hyperparameters in reasonable time; the implementation shows that the genetic algorithm obtains hyperparameters with almost the same result as grid search but with faster computational time.
Abstract: Online news is a medium for people to get new information. There are many online news outlets, and most people will only read news that interests them. Such news tends to be popular and brings profit to the media owner. It is therefore useful to predict whether a news article will be popular, and machine learning is one of the popular prediction methods for this. To achieve higher prediction accuracy, the best hyperparameters of the machine learning methods need to be determined. Determining the hyperparameters can be time-consuming with grid search, because grid search tries every possible combination of hyperparameters; this is a problem when a quick prediction of online news popularity is needed. Hence, a genetic algorithm is proposed as the alternative solution, because it can find near-optimal hyperparameters in reasonable time. The implementation shows that the genetic algorithm obtains hyperparameters with almost the same result as grid search but with faster computational time. The reduction in computational time is as follows: Support Vector Machine 425.06%, Random Forest 17%, Adaptive Boosting 651.06%, and K-Nearest Neighbour 396.72%.
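A genetic hyperparameter search can be sketched in a few lines: individuals are hyperparameter assignments, and selection, crossover and mutation evolve the population toward better cross-validation scores. The search space and the stand-in fitness function below are hypothetical (a real run would score each candidate model on the news dataset):

```python
import random

random.seed(42)

# Hypothetical discrete hyperparameter space (e.g. for an SVM).
SPACE = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}

def fitness(ind):
    """Stand-in for cross-validated accuracy; peaks at C=10, gamma=0.01."""
    return -abs(ind["C"] - 10) / 100 - abs(ind["gamma"] - 0.01)

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    """Uniform crossover: each gene comes from one of the two parents."""
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    """Resample each gene with probability `rate`."""
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in ind.items()}

def genetic_search(pop_size=8, generations=15):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
print(best)
```

The search typically converges toward the fitness peak while evaluating far fewer candidates than the 16-point exhaustive grid would at larger scales, which is the source of the speed-ups reported above.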

Journal ArticleDOI
TL;DR: An Arabic version that is a part of the Flickr and MS COCO caption dataset is built and a generative merge model for Arabic image captioning based on a deep RNN-LSTM and CNN model is developed.
Abstract: The automatic generation of syntactically and semantically correct image captions is an essential problem in Artificial Intelligence. The existence of large image caption corpora such as Flickr and MS COCO has contributed to the advance of image captioning in English. However, Arabic still lags behind, given the scarcity of image caption corpora for the Arabic language. In this work, an Arabic version of part of the Flickr and MS COCO caption datasets is built. Moreover, a generative merge model for Arabic image captioning based on a deep RNN-LSTM and CNN model is developed. The experimental results are promising and suggest that the merge model can achieve excellent results for Arabic image captioning if a larger corpus is used.

Journal ArticleDOI
TL;DR: Fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) multi-criteria decision theory is incorporated in order to help decision makers improve the efficiency of their decision processes, so that they arrive at the best solution in due course.
Abstract: X-rays are ionizing radiation of very high energy, used in medical imaging to produce images of diagnostic importance. X-ray-based imaging devices send ionizing radiation through the patient’s body to obtain an image that can be used to diagnose the patient effectively. These devices serve the same purpose; some are merely advanced forms of the others and are used for specialized radiological exams. Each device has image quality parameters which need to be assessed in order to characterize its efficiency, potential and drawbacks. The parameters include sensitivity and specificity, the radiation dose delivered to the patient, and the cost of treatment and of the machine; they are important because they affect the patient, the hospital management and the radiation worker. Therefore, this paper incorporates these parameters into fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) multi-criteria decision theory in order to help decision makers improve the efficiency of their decision processes, so that they arrive at the best solution in due course.

Journal ArticleDOI
TL;DR: An approach for breast cancer detection and classification in histopathological images relies on a deep convolutional neural network (CNN), which is pretrained on an auxiliary domain with a very large set of labelled images and coupled with an additional network composed of fully connected layers.
Abstract: Computer-based analysis is one of the suggested means that can assist oncologists in the detection and diagnosis of breast cancer. Deep learning, meanwhile, has very recently been promoted as one of the hottest research directions in the general imaging literature, thanks to its high capability in detection and recognition tasks; yet it has not been adequately applied to the problem of breast cancer so far. In this context, I propose in this paper an approach for breast cancer detection and classification in histopathological images. The approach relies on a deep convolutional neural network (CNN), which is pretrained on an auxiliary domain with a very large set of labelled images and coupled with an additional network composed of fully connected layers. The network is trained separately for each image magnification (40x, 100x, 200x and 400x). The results at the patient level achieved promising scores compared to state-of-the-art methods.

Journal ArticleDOI
TL;DR: This review will serve the researchers to analyze the latest work on rainfall prediction with the focus on data mining techniques and also will provide a baseline for future directions and comparisons.
Abstract: Rainfall prediction is one of the challenging tasks in weather forecasting. Accurate and timely rainfall prediction can be very helpful for taking effective security measures in advance regarding ongoing construction projects, transportation activities, agricultural tasks, flight operations, flood situations, etc. Data mining techniques can effectively predict rainfall by extracting the hidden patterns among the available features of past weather data. This research contributes a critical analysis and review of the latest data mining techniques used for rainfall prediction, considering papers published from 2013 to 2017 in renowned online search libraries. This review will serve researchers in analyzing the latest work on rainfall prediction with a focus on data mining techniques, and will also provide a baseline for future directions and comparisons.