
Showing papers presented at "Intelligent Systems Design and Applications in 2016"


Book Chapter
14 Dec 2016
TL;DR: This work focuses on a navigation strategy with obstacle avoidance and victim localization based on swarm theory, and shows how swarm theory meets the requirements of navigation and search operations for cooperative robots.
Abstract: An important application of cooperative robotics is the search and rescue of victims in disaster zones. Cooperation between robots requires multiple factors to be taken into consideration, such as communication between agents, distributed control, power autonomy, cooperation, navigation strategy, and locomotion, among others. This work focuses on a navigation strategy with obstacle avoidance and victim localization. The navigation strategy is based on swarm theory, where each robot is a swarm agent. The attraction and repulsion forces used to keep the swarm compact also serve to avoid obstacles and draw the swarm toward victim zones. Additionally, an agent separation behavior is added so that the swarm can leave behind agents that have found victims; these agents then support the victims by transmitting their location to a rescue team. Several experiments were performed to test navigation, obstacle avoidance and victim search. The results show how swarm theory meets the requirements of navigation and search operations of cooperative robots.

25 citations
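The attraction–repulsion scheme summarized in the abstract can be sketched in a few lines. This is a generic illustration of swarm forces, not the authors' controller; the force law, the gains `k_att`/`k_rep`, and the repulsion radius `r_rep` are assumptions made for the example.

```python
import numpy as np

def swarm_force(agent, others, victims, r_rep=1.0, k_att=0.5, k_rep=2.0):
    """Net steering force on one agent: repulsion from close neighbours
    (collision/obstacle avoidance), attraction to the swarm centroid
    (cohesion) and attraction toward known victim zones."""
    force = np.zeros(2)
    for other in others:
        diff = agent - other
        dist = np.linalg.norm(diff)
        if 0 < dist < r_rep:                  # too close: push away
            force += k_rep * diff / dist**2
    if len(others):
        force += k_att * (np.mean(others, axis=0) - agent)  # cohesion
    for v in victims:
        force += k_att * (v - agent)          # pull toward victim zones
    return force
```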


Book Chapter
14 Dec 2016
TL;DR: The proposed feature selection approach is a promising alternative for activity recognition on smartphones, and MLM-NN speeds up the original MLM while maintaining equivalent accuracy.
Abstract: Recognition of human activities serves a wide diversity of applications. However, identifying complex activities remains a challenging and active research area. In this work, we assess a new feature selection approach for human activity recognition. For this task, we also compare state-of-the-art classifiers: a Bayes classifier, kNN, MLP, SVM, MLM and MLM-NN. Based on the experiments, MLM-NN is able to speed up the original MLM while maintaining equivalent accuracy. MLM and SVM achieved accuracies of more than 99.2% on the original data set and 98.1% using the new feature selection method. The results show that the proposed feature selection approach is a promising alternative for activity recognition on smartphones.

21 citations


Book Chapter
14 Dec 2016
TL;DR: Different techniques for outlier detection in data streams are reviewed, and different approaches based on these techniques are described in order to establish a comparative study based on different criteria.
Abstract: Generally, extracting only expected knowledge from data is not sufficient, since unexpected knowledge can hide useful information about the data's behavior. This information can be further used to optimize the current state, which has led to outlier detection: the data mining task that aims to find abnormal points or sequences of data hidden in a dataset. Due to the emergence of new technologies, applications often generate and consume data in the form of streams. Stream data differs from static data, so traditional techniques cannot be used; instead, techniques suited to the nature of data streams must be applied. In this paper, we review different techniques for outlier detection in data streams. In addition, we describe different approaches based on these techniques in order to establish a comparative study based on different criteria. This study aims to help users and facilitate the choice of the appropriate algorithm for a given context.

20 citations
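As a concrete instance of the statistics-based family of techniques such a survey covers, the sketch below flags points that deviate strongly from the mean of a sliding window. The window size and the k-sigma threshold are illustrative choices, not taken from the paper.

```python
from collections import deque
import math

class StreamOutlierDetector:
    """Flag a point as an outlier if it lies more than `k` standard
    deviations from the mean of a sliding window of recent values --
    one of the simplest statistics-based schemes for streaming data."""
    def __init__(self, window=100, k=3.0):
        self.buf = deque(maxlen=window)
        self.k = k

    def is_outlier(self, x):
        if len(self.buf) >= 10:               # need a minimal history first
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = math.sqrt(var)
            outlier = std > 0 and abs(x - mean) > self.k * std
        else:
            outlier = False
        self.buf.append(x)                    # window slides as data arrives
        return outlier
```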


Book Chapter
14 Dec 2016
TL;DR: The Hausdorff distance is proposed for computing the differences between the results of hyper-parameter tuning on two samples of different size, and some interesting relations between sample sizes and the results of hyper-parameter tuning are revealed.
Abstract: Hyper-parameter tuning is one of the crucial steps in the successful application of machine learning algorithms to real data. In general, the tuning process is modeled as an optimization problem for which several methods have been proposed. For complex algorithms, evaluating a hyper-parameter configuration is expensive, so the runtime is sped up through data sampling. In this paper, the effect of sample size on the results of the hyper-parameter tuning process is investigated. Hyper-parameters of Support Vector Machines are tuned on samples of different sizes generated from a dataset, and the Hausdorff distance is proposed for computing the differences between the results of hyper-parameter tuning on two samples of different size. 100 real-world datasets and two tuning methods (Random Search and Particle Swarm Optimization) are used in the experiments, revealing some interesting relations between sample sizes and tuning results that open promising directions for future investigation.

14 citations
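The symmetric Hausdorff distance between two result sets can be computed directly from its definition. A minimal sketch, assuming tuning results are represented as arrays of (C, γ) configurations; the example points are made up.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets, e.g.
    the SVM configurations found when tuning on two samples of
    different size. A and B are (n, d) arrays."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(),   # directed distance A -> B
               d.min(axis=0).max())   # directed distance B -> A

# configurations found on a small and a large sample (log-scale, assumed)
small = np.array([[1.0, -3.0], [2.0, -2.5]])
large = np.array([[1.2, -3.1], [5.0, -1.0]])
print(hausdorff(small, large))
```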


Book Chapter
14 Dec 2016
TL;DR: This paper proposes an approach to learn an OWL ontology from data in a MongoDB database and describes a tool implementing the transformation rules.
Abstract: Ontologies provide shared and reusable pieces of knowledge about a specific domain. Building an ontology by hand is a hard and error-prone task, and ontology learning from existing resources provides a good solution to this issue. Databases are widely used to store data and have often been considered the most reliable sources for knowledge extraction. NoSQL databases are increasingly used to store data, and MongoDB, which belongs to the document-oriented variant, is emerging as the fastest-growing NoSQL database in the world. This paper proposes an approach to learn an OWL ontology from data in a MongoDB database and describes a tool implementing the transformation rules.

13 citations
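One plausible reading of such transformation rules, mapping a collection to an owl:Class and its fields to datatype or object properties, is sketched below. The rule set, the Turtle-style output, and the database and collection names are assumptions for illustration, not the paper's exact rules.

```python
from pymongo import MongoClient

# Hypothetical rule set, sketching the kind of mapping described:
#   collection        -> owl:Class
#   atomic field      -> owl:DatatypeProperty
#   embedded document -> owl:ObjectProperty
def collection_to_owl(db, name):
    doc = db[name].find_one() or {}       # sample one document's schema
    lines = [f":{name} rdf:type owl:Class ."]
    for field, value in doc.items():
        if field == "_id":
            continue
        kind = "ObjectProperty" if isinstance(value, dict) else "DatatypeProperty"
        lines.append(f":{field} rdf:type owl:{kind} ; rdfs:domain :{name} .")
    return "\n".join(lines)

# db = MongoClient()["shop"]             # assumed database/collection names
# print(collection_to_owl(db, "orders"))
```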


Book Chapter
14 Dec 2016
TL;DR: The performance of two adaptive methods, Fuzzy c-means (FCM) and Self-organizing maps (SOM), is explored to simplify the interpretation of gait data, provided by a secondary dataset of 90 subjects subdivided into six groups.
Abstract: Human gait is the physiological mode of locomotion, and it can be affected by several injuries. Gait analysis therefore plays an important role in observing the kinematic and kinetic parameters of the joints involved in this movement pattern. Due to the complexity of such analysis, this paper explores the performance of two adaptive methods, Fuzzy c-means (FCM) and Self-organizing maps (SOM), in simplifying the interpretation of gait data, provided by a secondary dataset of 90 subjects subdivided into six groups. Based on inertial measurement unit (IMU) data, two kinematic features, average cycle time and cadence, were used as inputs to the adaptive algorithms. Considering the similarities among the subjects in the database, our experiments show that FCM performed better than SOM. Despite the misplacement of some subjects into unexpected clusters, this outcome implies that FCM is rather sensitive to slight differences in gait. Nonetheless, further trials with these methods are necessary, since more gait parameters and a larger sample could reveal hidden variation within the normal walking pattern.

13 citations
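For reference, plain fuzzy c-means is short enough to write out. A minimal sketch, assuming the input matrix holds the two kinematic features per subject and using c = 6 to match the six groups; the fuzzifier m and the iteration count are conventional defaults, not the paper's settings.

```python
import numpy as np

def fuzzy_cmeans(X, c=6, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means. X is (n_samples, n_features), e.g. columns
    [average cycle time, cadence]; returns cluster centers and the
    fuzzy membership matrix U of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)             # rows sum to 1
    for _ in range(iters):
        W = U ** m                                 # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=-1) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))             # closer => higher weight
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```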


Book Chapter
14 Dec 2016
TL;DR: An improved version of CARMEN, a tomographic reconstructor based on machine learning, is presented, and the execution times of two dedicated neural network frameworks, Torch and Theano, are compared, with significant improvements in the training and execution times of the neural networks due to calculations on GPU.
Abstract: Correction of atmospheric turbulence, using guide stars as reference, is one of the most relevant issues in adaptive optics (AO). It is addressed with tomographic techniques such as Multi-Object Adaptive Optics (MOAO). The next generation of extremely large telescopes will require improvements in the computational capabilities of real-time control systems. An improved version of CARMEN, a tomographic reconstructor based on machine learning, is presented here. The execution times of two dedicated neural network frameworks, Torch and Theano, are compared, with significant improvements in the training and execution times of the neural networks due to calculations on GPU. The differences between the two frameworks are also discussed.

12 citations


Book Chapter
14 Dec 2016
TL;DR: This paper presents an ongoing Ambient Intelligence decision support system development that aims to provide assistance in the creation of standard work procedures that assure production quantity and efficiency, by means of ambient intelligence, optimization heuristics and machine learning, in the context of a large organization.
Abstract: Scheduling production instructions in a manufacturing facility is key to an efficient process that ensures the desired product quantities are produced in time, with quality, and with the right resources. Efficient production avoids downstream delays and early completions, both of which can be detrimental when storage space is limited and contracted quantities matter. The planning and control of production therefore becomes increasingly difficult as the number of product families increases. This paper presents an ongoing Ambient Intelligence decision support system development that aims to provide assistance in the creation of standard work procedures that assure production quantity and efficiency, by means of ambient intelligence, optimization heuristics and machine learning, in the context of a large organization.

11 citations


Book Chapter
14 Dec 2016
TL;DR: The architecture of an intelligent congestion prediction system based on an ANN and data fusion is presented, which takes into account not only historical GPS data but also real-time unpredictable events, such as accidents, that have an impact on traffic jams.
Abstract: Road traffic congestion is one of the major transportation problems in big cities around the globe. The main purpose of our article is to provide drivers with an intelligent system that predicts the congestion state of the roads. In this paper, we present the architecture of our intelligent congestion prediction system based on an ANN and data fusion. Our system takes into account not only historical GPS data but also real-time unpredictable events, such as accidents, that have an impact on traffic jams. The ANN has demonstrated its efficiency in forecasting traffic congestion, and fusing the predicted congestion state, real-time GPS information and anomalous events with a decision tree further improved the results. A real-time mobile application is provided to drivers to help them discover the traffic state at their destination. The model has been evaluated and validated using large GPS datasets gathered from vehicles circulating in a very crowded urban city in Tunisia.

11 citations
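The fusion step described above, feeding an ANN's congestion prediction together with real-time signals into a decision tree, might look roughly like this. The feature layout and the random stand-in data are assumptions purely to show shapes; only the two-stage predict-then-fuse structure comes from the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# hypothetical historical GPS features per road segment (random stand-ins)
X_hist = np.random.rand(500, 6)
y = np.random.randint(0, 2, 500)            # 1 = congested

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X_hist, y)
p_cong = ann.predict_proba(X_hist)[:, 1]    # stage 1: ANN congestion score

# stage 2: fuse the ANN score with real-time signals in a decision tree
events = np.random.randint(0, 2, 500)       # 1 = accident reported nearby
live_speed = np.random.rand(500)            # normalised live GPS speed
X_fused = np.column_stack([p_cong, events, live_speed])
fusion = DecisionTreeClassifier(max_depth=4).fit(X_fused, y)
```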


Book Chapter
14 Dec 2016
TL;DR: This survey focuses first on data warehouse architecture, then details the changes in the Extract-Transform-Load process needed to deal with real-time data warehousing, and finally sketches data integration in the real-time data warehouse.
Abstract: Traditional data warehouses did not contain data at today's scale, which makes such data difficult to retrieve and process. Furthermore, their content is not continuously updated, which may lead to bad decisions. Data are typically loaded from conventional operational systems, yet today's business decisions are made more and more in real time, so the systems supporting those decisions must keep pace. In this paper, we give a survey on data warehousing, from the traditional data warehouse to the real-time data warehouse. The survey focuses first on data warehouse architecture. Second, it details the changes in the Extract-Transform-Load process needed to deal with real-time data warehousing. Third, it sketches data integration in the real-time data warehouse. Finally, a comparative study of real-time data warehouse approaches is also presented.

11 citations


Book Chapter
14 Dec 2016
TL;DR: This paper proposes a semantically rich conceptualization for describing an SBP, organized in a new Business Process Meta-model for Knowledge Identification (BPM4KI), in order to develop a rich and expressive graphical representation of SBPs that identifies and localizes crucial knowledge.
Abstract: Knowledge development in organizations relies on Sensitive Business Processes (SBPs), which are characterized by high complexity and dynamism in their execution, a high number of critical activities with intensive acquisition, sharing, storage and (re)use of very specific crucial knowledge, diversity of knowledge sources, and a high degree of collaboration among experts. In this paper, we propose a semantically rich conceptualization for describing an SBP, organized in a new Business Process Meta-model for Knowledge Identification (BPM4KI), in order to develop a rich and expressive graphical representation of SBPs for identifying and localizing crucial knowledge. BPM4KI covers all aspects of business process modeling: the functional, organizational, behavioral, informational, intentional and knowledge perspectives. We focus more specifically on the knowledge perspective, which has not yet been integrated into BP models; this perspective is semantically rich and well founded on «core» domain ontologies. Besides, we evaluate the relevance of some of the proposed concepts through a real SBP scenario from the medical domain, in the context of the organization for the protection of motor-disabled people of Sfax, Tunisia.

Book Chapter
14 Dec 2016
TL;DR: This research proposes an Elman Recurrent Neural Network (ERNN) to forecast the Mackey-Glass time series, and experimental results show that this scheme outperforms other state-of-the-art studies.
Abstract: Forecasting is an important data analysis technique that aims to study historical data in order to explore and predict its future values. Different forecasting methods have been tested and applied, from regression to neural network models. In this research, we propose an Elman Recurrent Neural Network (ERNN) to forecast the elements of the Mackey-Glass time series. Experimental results show that our scheme outperforms other state-of-the-art studies.
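The benchmark itself is easy to reproduce. The sketch below generates the Mackey-Glass series by Euler discretization, using the customary parameters τ = 17, β = 0.2, γ = 0.1, and shows the defining step of an Elman unit, whose hidden state is fed back through a context layer. The training loop is omitted and the layer sizes are illustrative, not the paper's configuration.

```python
import numpy as np

def mackey_glass(n=1000, tau=17, beta=0.2, gamma=0.1, dt=1.0, x0=1.2):
    """Euler-discretised Mackey-Glass series, the usual chaotic benchmark:
    dx/dt = beta*x(t-tau) / (1 + x(t-tau)^10) - gamma*x(t)."""
    x = np.full(n + tau, x0)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** 10)
                                - gamma * x[t])
    return x[tau:]

class ElmanCell:
    """One forward step of an Elman unit: the hidden state is fed back
    as a context input at the next time step."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (n_hidden, n_in))
        self.Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.Wo = rng.normal(0, 0.1, (1, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)  # context feedback
        return (self.Wo @ self.h)[0]                      # one-step forecast
```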

Book Chapter
14 Dec 2016
TL;DR: The FLC algorithm shows good results, with an average accuracy of 98% and an average harmonic mean of 98%.
Abstract: This paper proposes a new method of lung segmentation in chest Computerized Tomography (CT) images called Follower of Lung Contour (FLC). The method works as follows: first, the image pixels are classified as pulmonary or not by an Artificial Neural Network (ANN), a Multilayer Perceptron (MLP), based on pulmonary radiologic densities. After this, lung detection is achieved through the Border Following Algorithm together with predetermined rules that consider the detected objects' area and position in the image. The proposed method is validated against a Gold Standard: a manual segmentation performed by a pulmonologist at the Walter Cantidio Hospital of the Federal University of Ceara. Thirty chest CT images were used, of which 10 are from patients diagnosed with fibrosis, 10 from patients with Chronic Obstructive Pulmonary Disease (COPD) and 10 from healthy patients. The FLC results are compared against six other segmentation methods using the Gold Standard as reference. The FLC algorithm shows good results, with an average accuracy of 98% and an average harmonic mean of 98%. It can be concluded that this method may form part of a system to aid medical diagnosis in pulmonology.

Book Chapter
14 Dec 2016
TL;DR: This paper adapts BBO and GWO for test suite prioritization and minimization and evaluates their performance against other nature-inspired meta-heuristics.
Abstract: The real world is filled with hard and complex problems, among them optimization problems. Optimization has been an active area of research for several decades; optimal solutions are hard to find, and no deterministic algorithms can find exact solutions in polynomial time. Within the large domain of applications of intelligence techniques, we are interested in exploring the application of the Biogeography-Based Optimization (BBO) and Grey Wolf Optimizer (GWO) meta-heuristic algorithms to the domain of software testing. The GWO mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. In this paper we adapt BBO and GWO for test suite prioritization and minimization and evaluate their performance against other nature-inspired meta-heuristics.
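The canonical continuous GWO update, in which the pack moves toward the three best wolves (alpha, beta and delta), is sketched below. The paper adapts this scheme to discrete test-suite prioritization; that adaptation is not shown here, and the population size and iteration budget are illustrative.

```python
import numpy as np

def gwo(f, dim, wolves=20, iters=200, seed=0):
    """Canonical continuous Grey Wolf Optimizer: each wolf is pulled
    toward the three current best solutions (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (wolves, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2 - 2 * t / iters                 # encircling coefficient decays 2 -> 0
        for i in range(wolves):
            Xnew = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                Xnew += leader - A * np.abs(C * leader - X[i])
            X[i] = Xnew / 3                   # average of the three pulls
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], fit.min()
```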

Book Chapter
14 Dec 2016
TL;DR: The results show that combining K-means clustering with supervised classification via this methodology does not always improve classification performance; in the cases where classifier performance is improved by clustering, accuracy increases only slightly.
Abstract: Nowadays, email is an easy and fast tool of communication among people. As a result, filtering unsolicited (spam) emails has become a very important challenge. Recently, there has been research in text mining that combines text clustering with classification to improve classification performance. In this paper, we investigate the effect of combining text clustering using the K-means algorithm with various supervised classification mechanisms on the performance of classifying emails as spam or non-spam. Clustering and classification are combined by adding extra features from the clustering step to the feature space used for classification. Our results show that combining K-means clustering with supervised classification in this way does not always improve classification performance. Moreover, in the cases where classifier performance is improved by clustering, accuracy increases only slightly, by an amount that does not justify the extra time taken to build a learning model combining both mechanisms. Our experiments use the Enron-Spam datasets.
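The combination methodology, appending cluster membership as extra features, can be sketched as follows. The toy corpus, the TF-IDF representation, the two clusters and the naive Bayes classifier are all assumptions for illustration; the paper evaluates several classifiers on the Enron-Spam datasets.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# toy stand-in corpus; the paper uses the Enron-Spam datasets
emails = ["win money now", "meeting at noon", "cheap pills now", "project status update"]
labels = [1, 0, 1, 0]                       # 1 = spam

X = TfidfVectorizer().fit_transform(emails)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
extra = np.eye(2)[clusters]                 # one-hot cluster-id features
X_aug = hstack([X, extra]).tocsr()          # augmented feature space
clf = MultinomialNB().fit(X_aug, labels)    # classify on text + cluster features
```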

Book Chapter
14 Dec 2016
TL;DR: A new system for online Arabic handwriting recognition based on a beta-elliptic model, which segments the trajectory into strokes by inspecting the extrema of the velocity profile and extracts their dynamic and geometric profiles to train a Time Delay Neural Network (TDNN).
Abstract: Handwriting recognition is an interesting part of the pattern recognition field. In the last decade, many approaches have focused on online handwriting recognition because of the rapid growth of new data entry technologies. In this paper, we propose a new system for online Arabic handwriting recognition based on a beta-elliptic model, which segments the trajectory into segments called strokes by inspecting the extrema of the velocity profile, and extracts their dynamic and geometric profiles. These strokes are used to train a Time Delay Neural Network (TDNN), which is able to represent the sequential aspect of the input data. To evaluate our method, we used a total of 25000 Arabic letters from the LMCA database. Our experimental results demonstrate the effectiveness of the proposed method and show recognition rates exceeding 95%.
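The segmentation cue, cutting the trajectory at extrema of the velocity profile, can be sketched as below. This is a simplified stand-in for the beta-elliptic model: it only locates velocity minima and returns stroke boundaries, without fitting the beta or elliptic profiles themselves.

```python
import numpy as np

def segment_strokes(x, y, t):
    """Split a pen trajectory into strokes at local minima of the
    velocity profile (a simplified segmentation cue). x, y, t are
    equal-length 1-D arrays of pen coordinates and timestamps."""
    vx, vy = np.gradient(x, t), np.gradient(y, t)
    v = np.hypot(vx, vy)                       # speed along the trajectory
    minima = [i for i in range(1, len(v) - 1)
              if v[i] <= v[i - 1] and v[i] <= v[i + 1]]
    cuts = [0] + minima + [len(v) - 1]
    return [(cuts[k], cuts[k + 1]) for k in range(len(cuts) - 1)]
```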

Book Chapter
14 Dec 2016
TL;DR: Two new mechanisms for the Fish School Search algorithm are introduced, aiming to increase the reliability of the weight parameters and to include elitist behavior, in order to improve the search ability of its original and niching versions.
Abstract: In this work we introduce two new mechanisms for the Fish School Search algorithm in order to improve the search ability of its original and niching versions. Two modifications of the usual operators are proposed, aiming to increase the reliability of the weight parameters and to include elitist behavior. Five benchmark optimization problems were employed to evaluate the effectiveness of the proposed modifications. We analyze the convergence curves as well as the minimum mean fitness obtained by each version. The results show that the proposed mechanisms improved the convergence of the niching version of the Fish School Search algorithm.

Book Chapter
14 Dec 2016
TL;DR: The Differential Evolution algorithm, employing two simple diversification strategies known as generation gap and Gaussian perturbation, is applied to solve the protein structure prediction problem in the backbone and side-chain model and achieves competitive results.
Abstract: Protein structure prediction is considered one of the most important open problems in biology and bioinformatics, due to the huge number of plausible shapes a protein can assume. The objective of this paper is to apply the Differential Evolution (DE) algorithm, employing two simple diversification strategies known as generation gap and Gaussian perturbation, to solve the protein structure prediction problem in the backbone and side-chain model. To test our approaches, the 1PLW, 1ZDD and 1CRN proteins were used, and the standard DE algorithm was compared with DE using the diversification approaches and with some state-of-the-art algorithms. Genotypic diversity was also analyzed during the algorithm's run, showing the impact of the diversification mechanisms. Despite their simplicity, the proposed approaches achieved competitive results.
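A sketch of DE/rand/1/bin with a periodic Gaussian diversification step follows. Re-perturbing the worst half of the population every `gap` generations is an assumed simplification of the generation-gap and Gaussian-perturbation strategies, not the authors' exact operators, and the torsion-angle encoding is only indicative.

```python
import numpy as np

def de_diversified(f, dim, pop=30, gens=200, F=0.8, CR=0.9,
                   sigma=0.1, gap=20, seed=0):
    """DE/rand/1/bin plus a periodic diversification step: every `gap`
    generations the worst half of the population is re-perturbed with
    Gaussian noise of scale `sigma`."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-np.pi, np.pi, (pop, dim))    # e.g. torsion-angle encoding
    fit = np.array([f(x) for x in X])
    for g in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            mask = rng.random(dim) < CR            # binomial crossover
            trial = np.where(mask, a + F * (b - c), X[i])
            ft = f(trial)
            if ft < fit[i]:                        # greedy selection
                X[i], fit[i] = trial, ft
        if (g + 1) % gap == 0:                     # diversification step
            worst = np.argsort(fit)[pop // 2:]
            X[worst] += rng.normal(0, sigma, (len(worst), dim))
            fit[worst] = [f(x) for x in X[worst]]
    best = np.argmin(fit)
    return X[best], fit[best]
```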

Book Chapter
14 Dec 2016
TL;DR: Experimental results prove that attributes extracted from finger veins can determine gender and age class; the proposed age and gender classification process gives a recognition rate of 98% for gender classification.
Abstract: The main goal of this paper is to build a system able to recognize the age range and gender of individuals from the characteristics of their venous network. Accordingly, we develop an algorithm able to detect changes related to aging. The proposed age and gender recognition system is composed of four key steps: image acquisition, image preprocessing, feature extraction and age/gender classification. Image preprocessing consists of ROI extraction and image enhancement: ROI extraction separates the informative region of the finger vein image, and for image enhancement we use the Guided Filter based Single Scale Retinex (GFSSR) method. In the feature extraction step, we apply the LBP descriptor to characterize the venous texture of the finger veins. Our study is based on the MMCBNU_6000 finger vein database. Experimental results prove that the attributes extracted from finger veins can determine gender and age class. The proposed classification process gives a recognition rate of 98% for gender classification, and recognition rates of 99.67%, 99.78% and 97.33% for age classification with 2, 3 and 4 classes, respectively.
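The basic 3×3 LBP descriptor mentioned above reduces each pixel's neighbourhood to an 8-bit code and histograms the codes over the image. A minimal sketch, assuming a 2-D uint8 grayscale ROI; the paper may use a different LBP variant or neighbourhood.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 3x3 LBP: threshold the 8 neighbours of each pixel against
    the centre pixel and histogram the resulting 8-bit codes, yielding
    a texture descriptor for the ROI. img is a 2-D uint8 array."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]     # clockwise ring
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()                          # normalised histogram
```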

Book Chapter
14 Dec 2016
TL;DR: This paper proposes a new approach called BigDimETL (Big Dimensional ETL) that deals with the ETL (Extract-Transform-Load) development process, taking into account the MultiDimensional Structure (MDS) through the MapReduce paradigm.
Abstract: With the broad range of data available on the World Wide Web and the increasing use of social media such as Facebook, Twitter and YouTube, the notion of "Big Data" has emerged. Big Data has become an important aspect of today's business, since it is full of knowledge that is crucial for effective decision making. However, this kind of data brings new problems and challenges for the Decision Support System (DSS) that must be addressed. In this paper, we propose a new approach called BigDimETL (Big Dimensional ETL) that deals with the ETL (Extract-Transform-Load) development process. Our approach focuses on integrating Big Data while taking into account the MultiDimensional Structure (MDS) through the MapReduce paradigm.

Book Chapter
14 Dec 2016
TL;DR: A user ontology profiling approach for social networks is proposed, using a framework containing a Facebook application written in PHP, which accumulates the corresponding shared data and stores it in the NoSQL database Cassandra for further analysis.
Abstract: In a relatively short period of time, we have observed the explosion of social network platforms, which have acquired a prominent role in people's daily lives. In this context, each Facebook user can easily access and control the data shared on his or her profile. However, despite the large amount of information contained in a Facebook user profile, this information is unstructured and unorganized. In this paper, we propose a user ontology profiling approach for social networks, using a framework containing a Facebook application written in PHP, which accumulates the corresponding shared data and stores it in the NoSQL database Cassandra for further analysis.

Book Chapter
14 Dec 2016
TL;DR: Mathematically, it is shown that the Dove Real Beauty Sketches campaign was a viral epidemic and that it can be leveraged and optimized by epidemiological and mathematical modeling, which offers important guidelines for maximizing the impact of a viral message and minimizing the uncertainty related to the conception and outcome of new marketing campaigns.
Abstract: Nowadays, analyzing and studying the behavior of uncontrollable phenomena related to the impact of marketing campaigns is of prime importance for preventing chaotic dynamics. In this paper we assess the contribution of dynamical systems theory and mathematical epidemiology to a real viral marketing campaign, Dove Real Beauty Sketches, based on an SIR epidemiological model. Motivated by the overwhelming success of this campaign, we study the mathematical properties and dynamics of the campaign's real data, from parameter estimation and its sensitivity to the stability of the mathematical model, simulated in Matlab. Mathematically, we show not only that the campaign was a viral epidemic, but also that it can be leveraged and optimized by epidemiological and mathematical modeling, which offers important guidelines for maximizing the impact of a viral message and minimizing the uncertainty related to the conception and outcome of new marketing campaigns.
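The underlying SIR dynamics are standard and easy to simulate (the paper works in Matlab; the sketch below uses Python). The parameter values here are illustrative only, since the paper estimates them from the campaign's real data; virality corresponds to the basic reproduction number R₀ = β/γ exceeding 1.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic SIR dynamics, reinterpreted for a viral video:
    S = unaware viewers, I = actively sharing, R = no longer sharing."""
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N,
            beta * S * I / N - gamma * I,
            gamma * I]

beta, gamma = 0.5, 0.2          # illustrative: R0 = beta/gamma = 2.5 > 1 (viral)
t = np.linspace(0, 60, 600)
S, I, R = odeint(sir, [0.999, 0.001, 0.0], t, args=(beta, gamma)).T
```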

Book Chapter
14 Dec 2016
TL;DR: This paper presents a framework of steps for improving the quality of business processes, for the purpose of quality management in the higher education domain, and illustrates these ideas with a real business process model related to the tracking of curriculum offers, more commonly named the "habilitation process".
Abstract: In recent years, modeling and improving the quality of business processes have attracted increasing interest in several domains, especially in higher education. Since the quality of business process models is open to various interpretations, it must be treated as objectively as possible, by using measures. Consequently, this paper presents a framework in which we apply measurement steps for the purpose of quality management in the higher education domain. We then illustrate these ideas with a real business process model related to the tracking of curriculum offers, more commonly named the "habilitation process".

Book Chapter
14 Dec 2016
TL;DR: This paper analyzes the two types of neighbor-selection metrics used in the recommendation literature; each type is defined, and the different proposed metrics are reviewed.
Abstract: Recommender systems suggest the most appropriate items to users in order to help customers find the most relevant items and to facilitate sales. Collaborative filtering is the most successful recommendation technique. Given that collaborative filtering systems depend on neighbors as their source of information, the recommendation quality of this approach depends on neighbor selection. Neighbors can be selected using either similarity or trust metrics. In this paper, we analyze these two types of neighbor-selection metrics used in the recommendation literature: for each type, we first define it and then review the different proposed metrics.

Book Chapter
14 Dec 2016
TL;DR: The obtained results show that the proposed model and solution method give better results than a bi-objective model that considers only utility and stability measures, and than the classical makespan model.
Abstract: This paper proposes a multi-objective optimisation model and a particle swarm optimisation solution method for the robust dynamic scheduling of permutation flow shops in the presence of uncertainties. The proposed optimisation model for robust scheduling considers utility, stability and robustness measures to generate robust schedules that minimise the effect of different real-time events on the planned schedule. The proposed solution method is based on a predictive-reactive approach that uses particle swarm optimisation to generate robust schedules in the presence of real-time events. Both the optimisation model and the solution method are evaluated considering different types of disruptions, including machine breakdowns and new job arrivals. The obtained results show that the proposed model and solution method give better results than a bi-objective model that considers only utility and stability measures [1] and than the classical makespan model.

Book Chapter
14 Dec 2016
TL;DR: In this paper, DABC is compared with SA on 30 academic benchmark instances of the weighted tardiness problem, and it is demonstrated that DABC is more prone to find near-optimum solutions.
Abstract: Meta-heuristics (MHs) are the most used optimization techniques for approaching complex combinatorial optimization problems (COPs). Their ability to move beyond local optima makes them an especially attractive choice for solving complex computational problems, such as most scheduling problems. However, knowledge of which meta-heuristics perform better on which problems is based on experiments. Classic MHs such as Simulated Annealing (SA) have been deeply studied, but newer MHs such as the Discrete Artificial Bee Colony (DABC) still need to be examined in more detail. In this paper DABC is compared with SA on 30 academic benchmark instances of the single-machine weighted tardiness problem (1||ΣwⱼTⱼ). The parameters of both MHs were fine-tuned with Taguchi experiments. In the computational study DABC performed better, and the subsequent statistical study demonstrated that DABC is more prone to find near-optimum solutions; on the other hand, SA appeared to be more efficient.
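The objective being minimized and a plain SA baseline with a swap neighbourhood can be sketched as follows; the cooling schedule and iteration budget are assumptions, since the paper tunes both meta-heuristics with Taguchi experiments.

```python
import math, random

def total_weighted_tardiness(seq, p, w, d):
    """Objective for 1||Sum(wj*Tj): sum of w_j * max(0, C_j - d_j),
    where C_j is the completion time of job j under sequence `seq`."""
    t, cost = 0, 0
    for j in seq:
        t += p[j]
        cost += w[j] * max(0, t - d[j])
    return cost

def sa(p, w, d, iters=20000, T0=100.0, alpha=0.9995, seed=0):
    """Plain simulated annealing over job sequences with a swap move;
    cooling parameters are illustrative, not the tuned values."""
    rng = random.Random(seed)
    seq = list(range(len(p)))
    best = cur = total_weighted_tardiness(seq, p, w, d)
    best_seq, T = seq[:], T0
    for _ in range(iters):
        i, k = rng.sample(range(len(seq)), 2)
        seq[i], seq[k] = seq[k], seq[i]       # swap two jobs
        cand = total_weighted_tardiness(seq, p, w, d)
        if cand <= cur or rng.random() < math.exp((cur - cand) / T):
            cur = cand
            if cur < best:
                best, best_seq = cur, seq[:]
        else:
            seq[i], seq[k] = seq[k], seq[i]   # undo rejected move
        T *= alpha                            # geometric cooling
    return best_seq, best
```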

Book Chapter
14 Dec 2016
TL;DR: A new hybrid approach is proposed, integrating two soft computing techniques, genetic algorithm (GA) and particle swarm optimization (PSO), into FastSLAM to overcome the particle depletion problem by improving FastSLAM's accuracy in estimating the positions of the robot and the landmark set.
Abstract: FastSLAM is one of the Simultaneous Localization and Mapping (SLAM) algorithms introduced for autonomous mobile robots. It decomposes the SLAM problem into one distinct localization problem and a collection of landmark estimation problems. FastSLAM has been found to suffer from a particle depletion problem, which causes its accuracy to degenerate over time. In this work, a new hybrid approach is proposed, integrating two soft computing techniques, genetic algorithm (GA) and particle swarm optimization (PSO), into FastSLAM. It is developed to overcome the particle depletion problem by improving FastSLAM's accuracy in estimating the positions of the robot and the landmark set. The experiment is conducted in simulation and the results are evaluated using root mean square error (RMSE) analysis. The results show that the proposed hybrid approach is able to mitigate the FastSLAM problem by reducing the error (RMSE) in the estimated robot and landmark positions.

Book Chapter
14 Dec 2016
TL;DR: A prototype called Medical Intuitionistic Fuzzy Expert Decision Support System (MIFEDSS), based on IFL and the Modified Early Warning Score (MEWS) standard, is proposed, and the experimental results show the efficiency of the proposed system.
Abstract: Intensive Care Unit (ICU) medical processes can be so complex and unpredictable that physicians sometimes must make decisions based on perception. Both decision support systems and Intuitionistic Fuzzy Logic (IFL) techniques can assist doctors in handling this complexity in a safe, harmless and efficient manner. To this end, we propose a prototype called Medical Intuitionistic Fuzzy Expert Decision Support System (MIFEDSS), based on IFL and the Modified Early Warning Score (MEWS) standard. The experimental results show the efficiency of the proposed system.
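The intuitionistic fuzzy machinery underlying IFL can be illustrated briefly. The sketch below shows Atanassov-style intuitionistic fuzzy values, which carry an explicit hesitation margin π = 1 − μ − ν, and a standard min/max intersection. The MEWS threshold bands and the system's actual rule base are not reproduced here, and the example values are made up.

```python
from dataclasses import dataclass

@dataclass
class IFV:
    """Intuitionistic fuzzy value (Atanassov): membership mu,
    non-membership nu, with hesitation pi = 1 - mu - nu."""
    mu: float
    nu: float

    @property
    def pi(self):
        return 1.0 - self.mu - self.nu    # residual uncertainty

def ifv_and(a, b):
    """Standard min/max intersection of intuitionistic fuzzy values."""
    return IFV(min(a.mu, b.mu), max(a.nu, b.nu))

# e.g. two vital-sign assessments made with some hesitation:
hr = IFV(mu=0.6, nu=0.25)                 # pi = 0.15
rr = IFV(mu=0.4, nu=0.45)
print(ifv_and(hr, rr))                    # degree both are abnormal
```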

Book Chapter
14 Dec 2016
TL;DR: This position paper describes the perspective of ambient intelligence in creativity support tools, especially in the use of creative writing environments, and outlines the writing activity goals and the interaction design goals.
Abstract: The fundamental challenge in developing and evaluating creativity support tools is that we are not able to detect when a person is being creative. In this position paper we describe our perspective on ambient intelligence in creativity support tools, especially in the use of creative writing environments. Starting from activity theory, we describe a simple analysis of writing sessions involving 100 students from a higher school, recorded using the keystroke-logging program Inputlog supported by the program iTALC. Specifically, we outline the writing activity goals and the interaction design goals.

Book Chapter
14 Dec 2016
TL;DR: The paper also focuses on the opportunities and challenges of using NoSQL graph databases for storing and querying Big Social Data, and on how graph theory can help mine information from these data warehouses.
Abstract: Big Data generated from social networking sites is the crude oil of this century. Warehousing and analysing social actions and interactions can help corporations capture opinions, suggest friends, recommend products and services, and make intelligent decisions that improve customer loyalty. However, traditional data warehouses built on relational databases are unable to handle this massive amount of data. As an alternative, NoSQL (Not only Structured Query Language) databases are gaining popularity for building Big Data warehouses. The current state of the art of proposed NoSQL data warehouses is captured and discussed in this paper. The paper also focuses on the opportunities and challenges of using NoSQL graph databases for storing and querying Big Social Data, and on how graph theory can help mine information from these data warehouses.