
Showing papers in "International Journal of Advanced Computer Science and Applications in 2015"


Journal ArticleDOI
TL;DR: This paper presents a survey of the techniques used to design Chatbots, comparing the design techniques of nine carefully selected papers according to the main methods adopted.
Abstract: Human-computer speech is gaining momentum as a technique of computer interaction. There has been a recent upsurge in speech-based search engines and assistants such as Siri, Google Chrome and Cortana. Natural Language Processing (NLP) techniques such as NLTK for Python can be applied to analyse speech, and intelligent responses can be found by designing an engine that provides appropriate human-like responses. This type of programme is called a Chatbot, which is the focus of this study. This paper presents a survey of the techniques used to design Chatbots, and a comparison is made between the design techniques of nine carefully selected papers according to the main methods adopted. These papers are representative of the significant improvements in Chatbots in the last decade. The paper discusses the similarities and differences between the techniques and examines in particular the Loebner prize-winning Chatbots.

329 citations
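The rule-based pattern matching at the heart of many surveyed Chatbot designs (e.g. AIML-style systems) can be sketched in a few lines of Python; the rules and responses below are invented for illustration and are not taken from any of the surveyed systems:

```python
import re

# A minimal rule-based Chatbot: each rule pairs a regex pattern with a
# response template, in the spirit of AIML-style pattern matching.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bhow are you\b", re.I), "I am fine, thank you."),
    (re.compile(r"\bbye\b", re.I), "Goodbye!"),
]
DEFAULT = "Tell me more."

def respond(utterance: str) -> str:
    """Return the response of the first matching rule, else a fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

A real Chatbot engine layers thousands of such rules, or replaces them with retrieval or learning components, but the match-then-respond loop is the same.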


Journal ArticleDOI
TL;DR: Different models, such as topic over time (TOT), dynamic topic models (DTM), multiscale topic tomography, dynamic topic correlation detection, detecting topic evolution in scientific literature, etc. are discussed.
Abstract: Topic models provide a convenient way to analyze large collections of unclassified text. A topic contains a cluster of words that frequently occur together. Topic modeling can connect words with similar meanings and distinguish between uses of words with multiple meanings. This paper covers two categories within the field of topic modeling. The first concerns methods of topic modeling, four of which are considered here: Latent semantic analysis (LSA), Probabilistic latent semantic analysis (PLSA), Latent Dirichlet allocation (LDA), and the Correlated topic model (CTM). The second category is called topic evolution models, which model topics by considering an important factor: time. In the second category, different models are discussed, such as topic over time (TOT), dynamic topic models (DTM), multiscale topic tomography, dynamic topic correlation detection, detecting topic evolution in scientific literature, etc.

243 citations
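As a minimal illustration of the first category, latent semantic analysis factors a term-document count matrix with a truncated SVD so that documents with related vocabulary land close together in a low-dimensional topic space; the toy corpus below is invented:

```python
import numpy as np

# Latent semantic analysis (LSA) in miniature: build a term-document
# count matrix, truncate its SVD, and embed each document in a
# low-dimensional "topic" space.
docs = ["cat dog pet", "dog cat animal", "stock market trade",
        "market trade price price"]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for d in docs] for w in vocab],
             dtype=float)                      # terms x documents

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                          # keep two latent topics
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T       # documents in topic space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In the topic space the two pet documents are far more similar to each other than to either finance document, even though no single word is shared by all of them.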


Journal ArticleDOI
TL;DR: An architecture is proposed that integrates the Internet of Things with agent technology in a single platform, where the agent technology handles effective communication and interfaces among the large number of heterogeneous, highly distributed, and decentralized devices within the IoT.
Abstract: In recent years, the growing popularity of private cars has made urban traffic more and more crowded. As a result, traffic is becoming one of the most important problems in big cities all over the world. Among the traffic concerns are congestion and accidents, which cause a huge waste of time, property damage and environmental pollution. This research paper presents a novel intelligent traffic administration system based on the Internet of Things, featuring low cost, high scalability, high compatibility and easy upgrading, to replace traditional traffic management systems; the proposed system can improve road traffic tremendously. The Internet of Things builds on the Internet and on wireless sensing and detection technologies to realize intelligent recognition of tagged traffic objects, with tracking, monitoring, managing and processing handled automatically. The paper proposes an architecture that integrates the Internet of Things with agent technology in a single platform, where the agent technology handles effective communication and interfaces among the large number of heterogeneous, highly distributed and decentralized devices within the IoT. The architecture introduces the use of active radio-frequency identification (RFID), wireless sensor technologies, object ad-hoc networking, and Internet-based information systems in which tagged traffic objects can be automatically represented, tracked, and queried over a network. This research presents an overview of a distributed traffic simulation framework within NetLogo, an agent-based environment, for an IoT traffic monitoring system using mobile agent technology.

122 citations


Journal ArticleDOI
TL;DR: A new CNN architecture is proposed which achieves state-of-the-art classification results on several challenge benchmarks, outperforming the most contemporary approaches.
Abstract: Image recognition has recently become a vital task approached with several methods, one of the most interesting being the Convolutional Neural Network (CNN), which is widely used for this purpose. However, some tasks depend on small features that are an essential part of the task, and classification with a CNN is then not efficient because most of those features diminish before reaching the final classification stage. In this work, we analyze and explore the essential parameters that can influence model performance. Furthermore, several elegant contemporary models are drawn on to introduce a new, improved model. Finally, a new CNN architecture is proposed which achieves state-of-the-art classification results on several challenge benchmarks. The experiments are conducted on the MNIST, CIFAR-10, and CIFAR-100 datasets. Experimental results show that the proposed architecture outperforms the most contemporary approaches.

85 citations
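The building block whose stacking makes small features diminish is the convolution itself: each "valid" convolution shrinks the feature map by the kernel size minus one. A minimal sketch of that operation, with no claim to match the proposed architecture:

```python
import numpy as np

# A single "valid" 2-D convolution, the basic building block of a CNN.
# A 28x28 MNIST-sized input convolved with a 5x5 kernel yields a 24x24
# feature map; stacking layers shrinks the map further, which is why
# very small features can vanish before the classification stage.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # dot product of the kernel with one image patch
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out
```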


Journal ArticleDOI
TL;DR: The research proposes a high-level architecture for the smart city based on a hierarchical model of data storage and defines how different stakeholders will communicate and offer services to citizens.
Abstract: The concept of the smart city was born to provide an improved quality of life to citizens. The key idea is to integrate the information system services of each domain of the city, such as health, education, transportation and the power grid, to provide public services to citizens efficiently and ubiquitously. These expectations induce massive challenges and requirements. This research aims to highlight the key ICT (Information and Communication Technology) challenges related to the adoption of the smart city. Realizing the significance of effective data collection, storage and retrieval, and of efficient network resource provisioning, the research proposes a high-level architecture for the smart city. The proposed framework is based on a hierarchical model of data storage and defines how different stakeholders will communicate and offer services to citizens. The architecture facilitates step-by-step implementation towards a smart city, integrating services as they are developed in a timely manner.

78 citations


Journal ArticleDOI
TL;DR: This paper surveys the top security concerns related to cloud computing, describing for each threat how it can be used to exploit cloud components, its effect on cloud entities such as providers and users, and the security solutions that must be taken to prevent it.
Abstract: Cloud computing enables the sharing of resources such as storage, network, applications and software through the internet. Cloud users can lease multiple resources according to their requirements, and pay only for the services they use. However, despite all the cloud's benefits, there are many security concerns related to hardware, virtualization, network, data and service providers that act as a significant barrier to the adoption of the cloud in the IT industry. In this paper, we survey the top security concerns related to cloud computing. For each of these security threats we describe, i) how it can be used to exploit cloud components and its effect on cloud entities such as providers and users, and ii) the security solutions that must be taken to prevent these threats. These solutions include security techniques from the existing literature as well as the best security practices that must be followed by cloud administrators.

59 citations


Journal ArticleDOI
TL;DR: This study is based on an example of an online application for practical foreign-language speaking-skills training between random users, who select the role of teacher or student on their own, and assesses the system's effectiveness and the proposed teaching methodology in general.
Abstract: Online distance e-learning systems allow introducing innovative methods in pedagogy, along with studying their effectiveness. Assessing the system's effectiveness is based on analyzing the log files to track studying time, the number of connections, and earned game bonus points. This study is based on an example of an online application for practical foreign-language speaking-skills training between random users, who select the role of teacher or student on their own. The main features of the developed system include pre-defined synchronized teaching and learning materials displayed to both participants, along with user motivation by means of gamification. The actual percentage of successful connections between specifically unmotivated users unfamiliar with each other was measured. The obtained result can be used for gauging the developed system's success and the proposed teaching methodology in general.

55 citations


Journal ArticleDOI
TL;DR: An overview of methods and approaches to feature extraction and selection in handwriting character recognition is given, together with a review of the metaheuristic harmony search algorithm (HSA).
Abstract: The development of handwriting character recognition (HCR) is an interesting area in pattern recognition. An HCR system consists of a number of stages: preprocessing, feature extraction, classification and, finally, the actual recognition. It is generally agreed that one of the main factors influencing performance in HCR is the selection of an appropriate set of features for representing input samples. This paper provides a review of these advances. In HCR, the choice of the feature set is a central issue, since the procedure must choose the relevant features that yield the minimum classification error. To overcome this issue and maximize classification performance, many techniques have been proposed for reducing the dimensionality of the feature space in which data have to be processed. These techniques, generally denoted as feature reduction, may be divided into two main categories, called feature extraction and feature selection. A large number of research papers and reports have already been published on this topic. In this paper we provide an overview of some of the methods and approaches to feature extraction and selection, and we investigate and analyze these approaches in order to identify the current trend. A review of the metaheuristic harmony search algorithm (HSA) is also provided.

50 citations


Journal ArticleDOI
TL;DR: The Scrum method is a part of the Agile method that is expected to increase speed and flexibility in software development project management.
Abstract: To maximize performance, companies pursue a variety of ways to increase business profit. Work management differs from one company to another, and these differences in management may lead the software to have different business processes. Software development can be defined as creating new software or fixing existing software. Technology developments have led to an increasing demand for software, so Information Technology (IT) companies should be able to maintain projects well. The methodology used in software development is chosen in accordance with the company's needs, based on the SDLC (Software Development Life Cycle). The Scrum method is a part of the Agile method that is expected to increase speed and flexibility in software development project management.

50 citations


Journal ArticleDOI
TL;DR: A state-of-the-art survey of multi-biometrics benefits, limitations, integration strategies, and fusion levels is presented in this paper.
Abstract: Multi-biometrics is an exciting and interesting research topic. It is used to recognize individuals for security purposes and to increase security levels. Recent research trends toward the next generation of biometrics in real-time applications. Also, the integration of biometrics overcomes some limitations of unimodal systems. However, the design and evaluation of such systems raise many issues and trade-offs. A state-of-the-art survey of multi-biometrics benefits, limitations, integration strategies, and fusion levels is presented in this paper. Finally, upon reviewing multi-biometrics approaches and techniques, some open points are suggested for consideration as future research directions.

47 citations


Journal ArticleDOI
TL;DR: The results demonstrated that the proposed model exhibited higher accuracy rates than those of other works on this topic, and the proposed approach achieved classification accuracy, sensitivity, and specificity rates of 99.63%, 99.29% and 99.89%, respectively.
Abstract: Cardiac arrhythmia is one of the most important indicators of heart disease. Premature ventricular contractions (PVCs) are a common form of cardiac arrhythmia caused by ectopic heartbeats. The detection of PVCs by means of ECG (electrocardiogram) signals is important for the prediction of possible heart failure. This study focuses on the classification of PVC heartbeats from ECG signals and, in particular, on the performance evaluation of time series approaches to the classification of PVC abnormality. Moreover, the performance effects of several dimension reduction approaches were also tested. Experiments were carried out using well-known machine learning methods, including neural networks, k-nearest neighbour, decision trees, and support vector machines. Findings were expressed in terms of accuracy, sensitivity, specificity, and running time for the MIT-BIH Arrhythmia Database. Among the different classification algorithms, the k-NN algorithm achieved the best classification rate. The results demonstrated that the proposed model exhibited higher accuracy rates than those of other works on this topic. According to the experimental results, the proposed approach achieved classification accuracy, sensitivity, and specificity rates of 99.63%, 99.29% and 99.89%, respectively.
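Of the methods compared, k-nearest neighbour performed best; its core idea fits in a few lines. The 2-D feature vectors below are invented stand-ins for the dimension-reduced ECG beat features, not the study's data:

```python
import math
from collections import Counter

# k-nearest-neighbour classification: a query beat takes the majority
# label of the k training beats closest to it in feature space.
def knn_predict(train, labels, query, k=3):
    """Majority vote among the k training points closest to `query`."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```

k-NN needs no training phase, which helps explain its strong accuracy here at the cost of running time that grows with the size of the training set.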

Journal ArticleDOI
TL;DR: A comprehensive analysis of the important research works in the field of Arabic sentiment analysis, using smoothness analysis to evaluate the percentage error of the performance scores reported in the studies from their linearly-projected values (smoothness).
Abstract: Most social media commentary in the Arabic language space is made using unstructured, non-grammatical slang Arabic, presenting complex challenges for sentiment analysis and opinion extraction of online commentary and microblogging data in this important domain. This paper provides a comprehensive analysis of the important research works in the field of Arabic sentiment analysis. An in-depth qualitative analysis of the various features of the research works is carried out and a summary of objective findings is presented. We used smoothness analysis to evaluate the percentage error of the performance scores reported in the studies from their linearly-projected values (smoothness), which is an estimate of the influence of the different approaches used by the authors on the performance scores obtained. To solve a bounding issue with the data as reported, we modified an existing logarithmic smoothing technique and applied it to pre-process the performance scores before the analysis. Our results from the analysis are reported and interpreted for the various performance parameters: accuracy, precision, recall and F-score.

Journal ArticleDOI
TL;DR: This article presents a novel distributed intrusion detection system (DIDS) designed for a vehicular ad hoc network (VANET) that can be used in both urban and highway environments for real time anomaly detection with good accuracy and response time.
Abstract: In the new interconnected world, we need to secure vehicular cyber-physical systems (VCPS) using sophisticated intrusion detection systems. In this article, we present a novel distributed intrusion detection system (DIDS) designed for a vehicular ad hoc network (VANET). By combining static and dynamic detection agents that can be mounted on central vehicles with a control center where alarms about possible attacks on the system are communicated, the proposed DIDS can be used in both urban and highway environments for real-time anomaly detection with good accuracy and response time.

Journal ArticleDOI
TL;DR: In this article, the authors introduce advanced cloud security technologies and practices as a series of concepts and technology architectures from an industry-centric point of view, followed by a classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand, identify and mitigate potential DDoS attacks on business networks.
Abstract: Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how, and to what extent, it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing, as its influence is likely to spread across the complete IT landscape. Security is one of the major concerns of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks have become more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by a classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand, identify and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation, with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not necessarily be cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems.

Journal ArticleDOI
TL;DR: It is shown that ontology usage in case representation needs improvements to achieve semantic representation and semantic retrieval in CBR systems.
Abstract: Case Based Reasoning (CBR) is an important technique in artificial intelligence which has been applied to various kinds of problems in a wide range of domains. Selecting a case representation formalism is critical for the proper operation of the overall CBR system. In this paper, we survey and evaluate the existing case representation methodologies. Moreover, case retrieval and future challenges for effective CBR are explained. Case representation methods are grouped into knowledge-intensive approaches and traditional approaches, with the first group outweighing the second. The knowledge-intensive methods depend on ontology and enhance all CBR processes, including case representation, retrieval, storage, and adaptation. Using a proposed set of qualitative metrics, the existing ontology-based methods for case representation are studied and evaluated in detail. All these systems have limitations; no approach exceeds 53% of the specified metrics. The results of the survey explain the current limitations of CBR systems and show that ontology usage in case representation needs improvements to achieve semantic representation and semantic retrieval in CBR systems.

Journal ArticleDOI
TL;DR: This article presents mobile energy disseminators (MEDs), a new concept that can enable EVs to extend their range in a typical urban scenario, by exploiting Inter-Vehicle Communications (IVC) to eco-route electric vehicles taking advantage of the existence of MEDs.
Abstract: Dynamic wireless charging of electric vehicles (EVs) is becoming a preferred method since it enables power exchange between the vehicle and the grid while the vehicle is moving. In this article, we present mobile energy disseminators (MEDs), a new concept that can enable EVs to extend their range in a typical urban scenario. Our proposed method exploits Inter-Vehicle Communications (IVC) in order to eco-route electric vehicles, taking advantage of the existence of MEDs. By combining modern communications between vehicles with state-of-the-art energy transfer technologies, vehicles can extend their travel time without the need for large batteries or extremely costly infrastructure. Furthermore, by applying intelligent decision mechanisms we can further improve the performance of the method.

Journal ArticleDOI
TL;DR: The main objective of this paper is to introduce a high-quality image stitching system with the least computation time; the ORB algorithm is found to be the fastest and most accurate, with the highest performance.
Abstract: The construction of a high-resolution panoramic image from a sequence of overlapping input images of the same scene is called image stitching or mosaicing. It is considered an important, challenging topic in computer vision, multimedia, and computer graphics. The quality of the mosaic image and the time cost are the two primary parameters for measuring stitching performance. Therefore, the main objective of this paper is to introduce a high-quality image stitching system with the least computation time. First, we compare many different feature detectors: we test the Harris corner detector, SIFT, SURF, FAST, GoodFeaturesToTrack, MSER, and ORB techniques to measure the detection rate of correct keypoints and the processing time. Second, we examine the implementation of the common categories of image blending methods to increase the quality of the stitching process. From the experimental results, we conclude that the ORB algorithm is the fastest and most accurate, with the highest performance. In addition, Exposure Compensation is the blending method with the highest stitching quality. Finally, we have built an image stitching system based on ORB using the Exposure Compensation blending method.
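Whichever detector is chosen, stitching aligns overlapping images through a planar homography estimated from matched keypoints (e.g. ORB matches); applying the homography uses homogeneous coordinates. A sketch with an invented, translation-only homography:

```python
import numpy as np

# Stitching maps pixels of one image into the panorama's coordinate
# frame with a 3x3 homography H, applied in homogeneous coordinates.
def apply_homography(H, point):
    x, y = point
    p = H @ np.array([x, y, 1.0])      # lift to homogeneous coordinates
    return (p[0] / p[2], p[1] / p[2])  # divide out the scale factor

# A pure translation by (100, 20): pixels of the second image shift
# right and down into the panorama. Real homographies from matched
# keypoints also encode rotation, scale, and perspective.
H = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])
```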

Journal ArticleDOI
TL;DR: An image retrieval system that uses local feature descriptors and the BoVW model to retrieve similar images efficiently and accurately from standard databases, using K-Means as a clustering algorithm to build a visual vocabulary from the descriptors obtained by the local descriptor techniques.
Abstract: Image retrieval is still an active research topic in the computer vision field. Several techniques exist to retrieve visual data from large databases. Bag-of-Visual-Words (BoVW) is a visual feature descriptor that can be used successfully in Content-Based Image Retrieval (CBIR) applications. In this paper, we present an image retrieval system that uses local feature descriptors and the BoVW model to retrieve similar images efficiently and accurately from standard databases. The proposed system uses the SIFT and SURF techniques as local descriptors to produce image signatures that are invariant to rotation and scale. It also uses K-Means as a clustering algorithm to build a visual vocabulary from the feature descriptors obtained by the local descriptor techniques. To efficiently retrieve more images relevant to the query, the SVM algorithm is used. The performance of the proposed system is evaluated by calculating both precision and recall. The experimental results reveal that the system performs well on two different standard datasets.
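The BoVW signature itself is simple once the vocabulary exists: assign each local descriptor to its nearest visual word and histogram the assignments. In the sketch below the 2-D "descriptors" and three-word vocabulary are invented (real SIFT and SURF descriptors are 128- and 64-dimensional), and the fixed centroids stand in for K-Means output:

```python
import numpy as np

# Bag-of-Visual-Words signature: quantize every local descriptor to its
# nearest vocabulary word (a K-Means centroid), then represent the
# image by the normalized histogram of word counts.
def bovw_signature(descriptors, vocabulary):
    d = np.asarray(descriptors, dtype=float)
    v = np.asarray(vocabulary, dtype=float)
    # distance from every descriptor to every visual word
    dists = np.linalg.norm(d[:, None, :] - v[None, :, :], axis=2)
    words = dists.argmin(axis=1)                  # nearest-word index
    hist = np.bincount(words, minlength=len(v)).astype(float)
    return hist / hist.sum()                      # normalized signature
```

These fixed-length signatures are what the SVM then classifies, and what precision/recall are computed over.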

Journal ArticleDOI
TL;DR: A computer-aided system is proposed for automatic classification of Ultrasound Kidney diseases and a correct classification rate of 97% has been obtained using the multi-scale wavelet-based features.
Abstract: In this paper, a computer-aided system is proposed for automatic classification of Ultrasound Kidney diseases. Images of five classes: Normal, Cyst, Stone, Tumor and Failure were considered. A set of statistical features and another set of multi-scale wavelet-based features were extracted from the region of interest (ROI) of each image and the principal component analysis was performed to reduce the number of features. The selected features were utilized in the design and training of a neural network classifier. A correct classification rate of 97% has been obtained using the multi-scale wavelet-based features.
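The PCA step of the pipeline can be sketched directly: centre the feature matrix, take its SVD, and project onto the leading components. The toy data below are invented, not the ultrasound features:

```python
import numpy as np

# Principal component analysis for feature reduction: centre the data,
# factor it with the SVD, and keep only the leading components, which
# capture the directions of greatest variance.
def pca_reduce(X, n_components):
    Xc = X - X.mean(axis=0)                     # centre each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # project onto top components
```

For data that lies exactly on a line, a single component captures everything and the second carries (numerically) zero variance, which is the sense in which the discarded features are redundant.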

Journal ArticleDOI
TL;DR: The purpose of this paper is to present a comparative study between relational and non-relational database models in a web-based application, by executing various operations on both relational andNon-Relational databases thus highlighting the results obtained during performance comparison tests.
Abstract: The purpose of this paper is to present a comparative study between relational and non-relational database models in a web-based application, by executing various operations on both relational and non-relational databases and highlighting the results obtained during performance comparison tests. The study was based on the implementation of a web-based application for population records. For the non-relational database we used MongoDB, and for the relational database we used MSSQL 2014. We also present the advantages of using a non-relational database compared to a relational database integrated in a web-based application which needs to manipulate a large amount of data.

Journal ArticleDOI
TL;DR: A comparative evaluation of the best-performing detection techniques in IDSs for WSNs; the analyses and comparisons of the approaches are presented technically, and several recommendations with future directions are provided for this research.
Abstract: A wireless sensor network (WSN) consists of sensor nodes. Deployed in open areas and characterized by constrained resources, WSNs suffer from several attacks, intrusions and security vulnerabilities. An intrusion detection system (IDS) is one of the essential security mechanisms against attacks in a WSN. In this paper we present a comparative evaluation of the best-performing detection techniques in IDSs for WSNs, with the analyses and comparisons of the approaches presented technically. Attacks in WSNs are also presented and classified according to several criteria. To implement and measure the performance of the detection techniques, we prepared our dataset, based on KDD'99, in five steps: after normalizing the dataset, we determined the normal class and four types of attacks, and used the most relevant attributes for the classification process. We propose applying CfsSubsetEval with the BestFirst approach as an attribute selection algorithm for removing redundant attributes. The experimental results show that random forest methods provide a high detection rate and reduce the false alarm rate. Finally, a set of principles is concluded which has to be satisfied in future research for implementing IDSs in WSNs. To help researchers in the selection of an IDS for WSNs, several recommendations are provided along with future directions for this research.
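In the same spirit as CfsSubsetEval's redundancy criterion, though much cruder, a correlation filter can drop attributes that are nearly collinear with attributes already kept; the toy dataset and the 0.95 threshold below are invented for illustration:

```python
import numpy as np

# Greedy redundancy filter: keep a feature only if its absolute Pearson
# correlation with every already-kept feature stays below a threshold.
# This mimics the redundancy half of correlation-based feature
# selection; CfsSubsetEval additionally weighs feature-class merit.
def drop_redundant(X, threshold=0.95):
    corr = np.corrcoef(X, rowvar=False)   # feature-feature correlations
    kept = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return kept
```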

Journal ArticleDOI
TL;DR: The solution, coined ‘AdviseMe’, an intelligent web-based application, provides a reliable, user-friendly interface for the handling of general advisory cases in special degree programmes offered by the Faculty of Science and Technology (FST) at the University of the West Indies (UWI), St. Augustine campus.
Abstract: The traditional academic advising process in many tertiary-level institutions today possesses significant inefficiencies, which often account for high levels of student dissatisfaction. Common issues include high student-advisor loads, long waiting periods at advisory offices and the need for advisors to handle a significant number of redundant cases, among others. Utilizing semantic web expert system technologies, a solution was proposed that would complement the traditional advising process, alleviating its issues and inefficiencies where possible. The solution, coined ‘AdviseMe’, an intelligent web-based application, provides a reliable, user-friendly interface for the handling of general advisory cases in special degree programmes offered by the Faculty of Science and Technology (FST) at the University of the West Indies (UWI), St. Augustine campus. In addition to providing information on handling basic student issues, the system’s core features include course advising, as well as information on graduation status and oral exam qualifications. This paper provides an overview of the solution, with special attention paid to its inference system exposed via its RESTful Java Web Server (JWS). The system was able to provide sufficiently accurate advice for the sample set presented and showed high levels of acceptability by both students and advisors. Furthermore, its successful implementation demonstrated its ability to enhance the advisory process of any tertiary-level institution with programmes similar to those of FST.

Journal ArticleDOI
TL;DR: The aim of this paper is to find an approach for analyzing Arabic text and then providing statistical information that might be helpful to people in this research area, and to lay out a framework that will be used by researchers in the field of Arabic natural language processing.
Abstract: The Holy Quran is the reference book for more than 1.6 billion Muslims all around the world. Extracting information and knowledge from the Holy Quran is of high benefit for people specialized in Islamic studies and non-specialists alike. This paper initiates a series of research studies that aim to serve the Holy Quran and provide helpful and accurate information and knowledge to all human beings. The planned research studies also aim to lay out a framework that will be used by researchers in the field of Arabic natural language processing, by providing a ”Golden Dataset” along with useful techniques and information that will advance this field further. The aim of this paper is to find an approach for analyzing Arabic text and then providing statistical information that might be helpful to people in this research area. In this paper the Holy Quran text is preprocessed, and different text mining operations are then applied to it to reveal simple facts about the terms of the Holy Quran. The results show a variety of characteristics of the Holy Quran, such as its most important words, its word cloud and the chapters with high term frequencies. All these results are based on term frequencies calculated using both the Term Frequency (TF) and Term Frequency-Inverse Document Frequency (TF-IDF) methods.
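The two weighting schemes differ in that TF-IDF discounts terms appearing in every document; a minimal sketch on an invented toy corpus rather than the Quran text:

```python
import math
from collections import Counter

# Term Frequency-Inverse Document Frequency: a term scores highly when
# it is frequent within a document but rare across the corpus. A term
# present in every document gets idf = log(1) = 0, so it is discounted
# entirely, however often it occurs.
def tf_idf(corpus):
    """Return one {term: weight} dict per document."""
    n = len(corpus)
    docs = [d.split() for d in corpus]
    df = Counter(t for d in docs for t in set(d))   # document frequency
    scores = []
    for d in docs:
        tf = Counter(d)
        scores.append({t: (tf[t] / len(d)) * math.log(n / df[t])
                       for t in tf})
    return scores
```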

Journal ArticleDOI
TL;DR: A new hybrid approach for improving the MRS is prepared, consisting of Content-Based Filtering, Collaborative Filtering (CF), an emotion detection algorithm and the authors' matrix-based algorithm, which together provide much better recommendations to users.
Abstract: Recommender Systems (RSs) are garnering significant importance with the advent of e-commerce and e-business on the web. This paper focuses on a Movie Recommender System (MRS) based on human emotions. The problem is that the MRS needs to capture exactly the customer’s profile and the features of movies; since movies are a complex domain and emotion is a domain of human interaction, the two are difficult to combine in a new Recommender System (RS). In this paper, we prepare a new hybrid approach for improving the MRS, consisting of Content-Based Filtering (CBF), Collaborative Filtering (CF), an emotion detection algorithm and our own algorithm, which is represented by a matrix. The resulting system provides much better recommendations to users because it enables them to understand the relation between their emotional states and the recommended movies.

Journal ArticleDOI
TL;DR: The results show that classification by means of the NBTree technique had the highest accuracy, and that it could be applied to determine Felder-Silverman learning styles while taking students' preferences into consideration.
Abstract: Due to the growing popularity of E-Learning, personalization has emerged as an important need. Differences in learners' abilities and learning styles significantly affect learning outcomes. Meanwhile, with the development of E-Learning technologies, learners can be provided with a more effective learning environment to optimize their performance. The purpose of this study is to determine the impact of learning styles on learners' performance in an e-learning environment, and to use this learning style data to make recommendations for learners, instructors, and the contents of online courses. The data analyzed in this research consist of user performance records gathered from an E-Learning platform (Blackboard), where user performance is represented by the actions performed by the platform's users. A 10-fold cross validation was used to create and test the model, and the data were analyzed with the WEKA software. Classification accuracy, MAE, and the ROC area were observed. The results show that classification by means of the NBTree technique had the highest accuracy, at 69.697%, and that it could be applied to determine a learner's Felder-Silverman learning style while taking students' preferences into consideration. Moreover, students' performance increased by more than 12%.
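The study evaluates its classifiers with 10-fold cross validation inside WEKA. The splitting procedure itself is standard and can be sketched in plain Python, independent of any particular classifier:

```python
def k_fold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation.
    Every sample appears in exactly one test fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

# Ten folds over 32 samples: the first two folds get the 2 leftover items.
folds = list(k_fold_indices(32, k=10))
```

Each fold trains on roughly 90% of the data and tests on the held-out 10%, and the reported accuracy, MAE, and ROC area are averaged over the ten test folds.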

Journal ArticleDOI
TL;DR: This work proposes an empirical study of an automated glaucoma diagnosis and classification system based on the Grid Color Moment method as a feature vector and a back propagation neural network (BPNN) classifier.
Abstract: Automated diagnosis of glaucoma is focused on the analysis of retinal images to localize, perceive and evaluate the optic disc. A clinical decision support system (CDSS) is used for glaucoma classification in human eyes. This process depends mainly on the feature type, which can be morphological or non-morphological, and originates in retinal image analysis techniques that use color, texture, structural, or contextual features. This work proposes an empirical study of a novel automated glaucoma diagnosis and classification system based on the Grid Color Moment method as a feature vector to extract color (non-morphological) features and a neural network classifier. These features are fed to a back propagation neural network (BPNN) classifier for automated diagnosis. The proposed system was tested using the open RIM-ONE database with accurate gold standards of the optic nerve head. This work classifies both normal retinas and abnormal retinas affected by glaucoma. The experimental results achieved an accuracy of 87.47%. Thus, the proposed system can detect the early glaucoma stage with good accuracy.
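Grid Color Moment features divide the image into a grid of cells and compute per-channel statistical moments in each cell. The paper's exact grid size and moment set are not given in the abstract, so this sketch assumes a 2x2 grid and only the first two moments (mean and standard deviation); implementations often add skewness as a third moment:

```python
import statistics

def grid_color_moments(image, grid=2):
    """image: H x W grid of (R, G, B) values.
    Returns per-cell, per-channel mean and standard deviation,
    concatenated into one feature vector for the classifier."""
    h, w = len(image), len(image[0])
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            for ch in range(3):
                vals = [image[y][x][ch]
                        for y in range(gy * h // grid, (gy + 1) * h // grid)
                        for x in range(gx * w // grid, (gx + 1) * w // grid)]
                feats.append(statistics.mean(vals))
                feats.append(statistics.pstdev(vals))
    return feats

# A constant-color 4x4 image: every cell mean equals the color,
# and every standard deviation is zero.
img = [[(10, 20, 30)] * 4 for _ in range(4)]
feats = grid_color_moments(img)
```

The resulting fixed-length vector (here 2 cells x 2 cells x 3 channels x 2 moments = 24 values) is what gets fed to the BPNN classifier.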

Journal ArticleDOI
TL;DR: The Random Forests model was more accurate than the logistic regression and decision tree models, and it is necessary to build a monitoring system that can diagnose Mild Cognitive Impairment at an early stage.
Abstract: Dementia is a geriatric disease that has emerged as a serious social and economic problem in an aging society, and early diagnosis is very important. In particular, early diagnosis of and early intervention in Mild Cognitive Impairment (MCI), the preliminary stage of dementia, can reduce the onset rate of dementia. This study developed an MCI prediction model for elderly Koreans in local communities and provides basic material for the prevention of cognitive impairment. The subjects of this study were 3,240 elderly people (1,502 males, 1,738 females) in local communities over the age of 65 who participated in the Korean Longitudinal Survey of Aging (KLoSA) conducted in 2012. The outcome was defined as having MCI, and the explanatory variables were gender, age, level of education, level of income, marital status, smoking, drinking habits, regular exercise more than once a week, monthly average hours of participation in social activities, subjective health, diabetes, and high blood pressure. The Random Forests algorithm was used to develop the prediction model, and the result was compared with a logistic regression model and a decision tree model. Significant predictors of MCI were age, gender, level of education, level of income, subjective health, marital status, smoking, drinking, regular exercise, and high blood pressure. In addition, the Random Forests model was more accurate than the logistic regression and decision tree models. Based on these results, it is necessary to build a monitoring system that can diagnose MCI at an early stage.

Journal ArticleDOI
TL;DR: The results proved that the proposed ACO-tuned PID controller provides superior control performance compared to the PI controller.
Abstract: In this article, Load Frequency Control (LFC) of three unequal interconnected areas with thermal, wind and hydro power generating units has been developed with a Proportional-Integral (PI) controller in the MATLAB/SIMULINK environment. The PI controller gains are optimized using the trial and error method with two different objective functions, namely the Integral Time Square Error (ITSE) and the Integral Time Absolute Error (ITAE). The performance of the ITAE-based PI controller is compared with that of the ITSE-optimized PI controller. The analysis reveals that the ITSE-optimized controller gives superior performance compared to the ITAE-based controller during a one percent Step Load Perturbation (1% SLP) in area 1 (the thermal area). In addition, a Proportional-Integral-Derivative (PID) controller is employed to further improve the power system performance, with its gains optimized using an Artificial Intelligence technique, the Ant Colony Optimization (ACO) algorithm. The simulations compare the ACO-PID controller with the conventional PI controller. The results prove that the proposed ACO-PID controller provides superior control performance, as the system with the ACO-PID controller yields minimal overshoot, undershoot and settling time compared to the conventional PI-controlled system.
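The ACO algorithm in the paper searches for PID gains that minimise an error integral such as ITSE. The three-area plant models are not reproduced here, but the PID control law and the ITSE criterion it is scored against can be sketched; the first-order integrator plant and the gain values below are stand-ins, not the paper's system:

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def itse(errors, dt):
    """Integral Time Square Error: sum of t * e(t)^2 * dt.
    The time weight penalises errors that persist late in the response."""
    return sum((k * dt) * e * e * dt for k, e in enumerate(errors))

# Drive a toy integrator plant (x' = u) to a unit setpoint.
dt, x, setpoint = 0.1, 0.0, 1.0
ctrl = PID(kp=1.5, ki=0.4, kd=0.05, dt=dt)
errors = []
for _ in range(300):
    e = setpoint - x
    errors.append(e)
    x += ctrl.step(e) * dt
```

In an ACO tuning loop, each candidate (Kp, Ki, Kd) triple would be simulated like this and ranked by its ITSE value, with pheromone updates steering the colony toward low-ITSE gains.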

Journal ArticleDOI
TL;DR: A modern design of a dynamic learning environment that follows the most recent trends in e-Learning is proposed, and the results illustrate the overall superiority of a support vector machine model in evaluating knowledge levels.
Abstract: Electronic Learning has been one of the foremost trends in education so far. Such importance draws attention to an important shift in the educational paradigm. Due to the complexity of the evolving paradigm, the prospective dynamics of learning require an evolution of knowledge delivery and evaluation. This research work puts forward a futuristic design of an autonomous and intelligent e-Learning system, in which machine learning and user activity analysis play the role of an automatic evaluator of the knowledge level. It is important to assess the knowledge level in order to adapt content presentation and to have a more realistic evaluation of online learners. Several classification algorithms are applied to predict the knowledge level of the learners, and the corresponding results are reported. Furthermore, this research proposes a modern design of a dynamic learning environment that follows the most recent trends in e-Learning. The experimental results illustrate the overall superiority of a support vector machine model in evaluating the knowledge levels, with 98.6% of instances correctly classified and a mean absolute error of 0.0069.
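The two figures reported, classification accuracy and mean absolute error, are standard evaluation metrics. A sketch of how they are computed follows, assuming knowledge levels encoded as ordinal integers (an assumption for illustration; tools like WEKA compute MAE over per-class probability estimates):

```python
def accuracy(y_true, y_pred):
    """Fraction of instances classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_absolute_error(y_true, y_pred):
    """Mean |true - predicted| over ordinal level encodings."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Four knowledge levels 0..3; one off-by-one mistake out of five.
y_true = [0, 1, 2, 3, 2]
y_pred = [0, 1, 2, 3, 1]
```

A low MAE alongside high accuracy indicates that even the misclassified learners tend to be placed in a nearby knowledge level rather than a distant one.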

Journal ArticleDOI
TL;DR: The Re-UCP (revised use case point) method of effort estimation for software projects is presented; it has significantly outperformed the existing UCP and e-UCP effort estimation techniques.
Abstract: At present, the most challenging issue that the software development industry encounters is the inefficient management of software development budget projections. This problem has put modern software development companies in a situation where they are dealing with improper requirements engineering, ambiguous resource elicitation, and uncertain cost and effort estimation. An indispensable and inevitable task for any software development company is to form a counter-mechanism to deal with the problems that lead to such chaos. An emphatic way to combat this problem is to subject the whole development process to a proper and efficient estimation process, wherein all the resources are estimated well in advance in order to check whether the conceived project is feasible within the resources available. The basic building blocks of any object-oriented design are Use Case diagrams, which are prepared in the early stages of design once the requirements are clearly understood, and which are considered useful for approximating estimates for a software development project. This research work gives a detailed overview of the Re-UCP (revised use case point) method of effort estimation for software projects. The Re-UCP method is a modified approach based on the UCP method of effort estimation. In this research study, the efforts of 14 projects were estimated using the Re-UCP method, and the results were compared with the UCP and e-UCP models. The comparison over the 14 projects shows that Re-UCP significantly outperforms the existing UCP and e-UCP effort estimation techniques.
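The abstract does not reproduce the weight revisions that distinguish Re-UCP, but the classic UCP computation it modifies can be sketched. The inputs (unadjusted use case weight, actor weight, technical and environmental factor totals) and the 20 hours-per-UCP productivity factor are the conventional defaults of Karner's method, used here as assumptions:

```python
def use_case_points(uucw, uaw, tf, ef):
    """Classic UCP: (use case weight + actor weight) adjusted by
    technical and environmental complexity. Re-UCP revises the
    weight classes feeding uucw, which the abstract does not detail."""
    tcf = 0.6 + 0.01 * tf   # technical complexity factor
    ecf = 1.4 - 0.03 * ef   # environmental complexity factor
    return (uucw + uaw) * tcf * ecf

def effort_hours(ucp, hours_per_ucp=20):
    """Convert adjusted use case points to person-hours."""
    return ucp * hours_per_ucp

# Example: 50 unadjusted use case points, 6 actor points,
# technical factor total 40, environmental factor total 20.
ucp = use_case_points(uucw=50, uaw=6, tf=40, ef=20)
```

Comparisons like the paper's 14-project study then measure how closely these estimated hours track the actual recorded effort under each variant's weighting scheme.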