
Showing papers presented at the "International Conference on Innovations in Information Technology" in 2015


Proceedings ArticleDOI
01 Nov 2015
TL;DR: A wearable integrated health-monitoring system based on a multisensor fusion approach is presented; it reduces risk by taking into consideration assessments based both on the individual and on overall potentially harmful situations.
Abstract: A wearable integrated health-monitoring system is presented in this paper. The system is based on a multisensor fusion approach. It consists of a chest-worn device that embeds a controller board, an electrocardiogram (ECG) sensor, a temperature sensor, an accelerometer, a vibration motor, a colour-changing light-emitting diode (LED) and a push-button. This multi-sensor device supports biometric and medical monitoring applications. Distinctive haptic feedback patterns can be actuated by means of the embedded vibration motor according to the user's health state. The embedded colour-changing LED provides the wearer with additional intuitive visual feedback on the current health state. The push-button can be pushed by the user to report a potential emergency condition. The collected biometric information can be used to monitor the health state of the person involved in real time or to gather sensitive data to be subsequently analysed for medical diagnosis. In this preliminary work, the system architecture is presented. As a possible application scenario, the health-monitoring of offshore operators is considered. Related initial simulations and experiments are carried out to validate the efficiency of the proposed technology. In particular, the system reduces risk by taking into consideration assessments based both on the individual and on overall potentially harmful situations.

42 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: The factors that may impact the success of implementing a Big Data system are identified through a content review and analysis of industry and academic publications, and grouped into eight main categories.
Abstract: Big Data systems have significantly changed the possibilities and ways of data processing and analysis. Although many practitioners and scholars have written about the benefits of Big Data systems, little research has been conducted on how to succeed with the use of Big Data. The implementation of Big Data systems is a complex undertaking that requires a new technological and organizational approach. This paper seeks to identify the factors that may impact the success of implementing a Big Data system. The factors are identified based on the content review and analysis of industry and academic publications. We identified 21 implementation factors grouped into eight main categories.

17 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: This paper presents a new approach to clustering geodata for online maps, such as Google Maps, OpenStreetMap and others, that does not need the entire data set to start clustering and works in real time.
Abstract: Nowadays, a lot of data is produced by social media services, and increasingly these data contain location information, which opens a wide range of possibilities for analysis: we can be interested not only in the content, but also in the location where this content was produced. To analyze geo-spatial data well, we need to find the best approaches for geo clustering, where the best approach means real-time clustering of massive geodata with high accuracy. In this paper, we present a new approach to clustering geodata for online maps, such as Google Maps, OpenStreetMap and others. Clustering geodata based on their location improves their visual analysis and improves situational awareness. Our approach is a server-side online algorithm that does not need the entire data set to start clustering. It also works in real time and could be used for clustering massive geodata for online maps in reasonable time. We implemented the proposed approach as a proof of concept, and conducted experiments to evaluate it.

14 citations
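The abstract does not spell out the clustering algorithm itself; a common server-side baseline with the same online property (no need for the entire data set up front) is grid-based clustering in Web-Mercator pixel space, sketched here under that assumption. The cell size, zoom handling, and function names are illustrative, not taken from the paper.

```python
import math

def grid_cluster(points, zoom, grid_px=64, tile_px=256):
    """Online grid-based clustering of (lat, lon) points for one map zoom level.
    Each point is assigned to a fixed grid cell in Web-Mercator pixel space,
    so new points can be merged into clusters without reprocessing old data."""
    clusters = {}  # cell -> [count, sum_lat, sum_lon]
    scale = tile_px * (2 ** zoom)
    for lat, lon in points:
        # Web-Mercator projection to pixel coordinates at this zoom level
        x = (lon + 180.0) / 360.0 * scale
        s = math.sin(math.radians(lat))
        y = (0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)) * scale
        cell = (int(x // grid_px), int(y // grid_px))
        c = clusters.setdefault(cell, [0, 0.0, 0.0])
        c[0] += 1
        c[1] += lat
        c[2] += lon
    # emit one marker per cell, placed at the centroid of its points
    return [(n, slat / n, slon / n) for n, slat, slon in clusters.values()]
```

At a low zoom level, nearby points collapse into a single cluster marker while distant points stay separate; zooming in shrinks the geographic extent of each cell, so clusters split apart naturally.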


Proceedings ArticleDOI
01 Nov 2015
TL;DR: A convolutional neural network architecture is presented for emotion identification in Twitter messages related to sporting events at the 2014 FIFA World Cup; the network leverages pre-trained word embeddings obtained by unsupervised learning on large text corpora.
Abstract: Twitter has gained increasing popularity over recent years, with users generating an enormous amount of data on a variety of topics every day. Many of these posts contain real-time updates and opinions on ongoing sports games. In this paper, we present a convolutional neural network architecture for emotion identification in Twitter messages related to sporting events. The network leverages pre-trained word embeddings obtained by unsupervised learning on large text corpora. The network is trained on tweets automatically annotated with 7 emotions, where messages are labeled based on the presence of emotion-related hashtags; on this task our approach achieves 55.77% accuracy. The model is then applied to Twitter messages for emotion identification during sports events at the 2014 FIFA World Cup. We also present the results of our analysis on three games that had a significant impact on Twitter users.

13 citations
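The distant-supervision labeling step described above (tweets labeled by the emotion hashtags they contain) can be sketched as follows. The hashtag lexicon and emotion names here are hypothetical placeholders, not the paper's actual 7-emotion list.

```python
# Hypothetical emotion-to-hashtag lexicon; the paper's actual hashtag
# lists and its 7 emotion categories may differ.
EMOTION_HASHTAGS = {
    "joy": {"#happy", "#joy", "#excited"},
    "anger": {"#angry", "#furious"},
    "fear": {"#scared", "#afraid"},
}

def auto_label(tweet):
    """Label a tweet by the emotion whose hashtag it contains, stripping
    the hashtag so a classifier cannot simply memorize the label cue."""
    tokens = tweet.lower().split()
    for emotion, tags in EMOTION_HASHTAGS.items():
        if any(t in tags for t in tokens):
            text = " ".join(t for t in tokens if t not in tags)
            return emotion, text
    return None  # unlabeled: no emotion hashtag present
```

Running this over a tweet stream yields (label, text) training pairs for the network without any manual annotation, which is what makes training on large automatically annotated corpora feasible.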


Proceedings ArticleDOI
01 Nov 2015
TL;DR: A novel sentiment analysis method is proposed that adds new user metrics to the classical Naive Bayes-based sentiment analysis method and applies it to the finance field, computing a moderate positive correlation between stock market behavior and the sentiment polarity of the financial community.
Abstract: Sentiment analysis is a popular research area in computer science. It aims to determine the attitude of a person with respect to some topic, such as their mood or opinion, from textual documents generated by that person. With the proliferation of social micro-blogging sites, opinion text has become available in digital form, enabling research on sentiment analysis to both deepen and broaden in different sociological fields, particularly in finance. In this paper, we propose a novel sentiment analysis method that adds new user metrics to the classical Naive Bayes-based sentiment analysis method, and apply it to the finance field. We also analyze the correlation between the mood of the financial community and the behavior of the stock exchange of Turkey, namely the BIST 100, using Spearman's rank correlation coefficient (SRCC). Our empirical studies show that the proposed sentiment analysis method yields a moderate positive correlation (SRCC value of 0.5634) between stock market behavior and the sentiment polarity of the financial community.

13 citations
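Spearman's rank correlation coefficient used in the correlation study is simply the Pearson correlation computed on ranks. A minimal self-contained implementation (with average ranks for ties) looks like this; the series passed in would be, say, daily sentiment polarity versus daily BIST 100 movements, though the data here is hypothetical.

```python
def _ranks(xs):
    """Average 1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it works on ranks, SRCC captures any monotone relationship between sentiment and market movement, not just a linear one, which suits noisy financial series.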


Proceedings ArticleDOI
01 Nov 2015
TL;DR: Experimental results indicate that MECOAT presents better results in terms of fitness value, peak signal to noise ratio (PSNR) and robustness in most cases.
Abstract: Image thresholding is a popular method for image segmentation, and many approaches to it have been proposed. Maximum entropy thresholding has been widely applied in the literature. This paper proposes a multilevel maximum entropy image thresholding method based on the cuckoo optimization algorithm (MECOAT). The cuckoo optimization algorithm (COA) is a new nature-based optimization algorithm inspired by the cuckoo bird; it is based on the unusual egg-laying and breeding behaviour of cuckoos. MECOAT tries to maximize the entropy criterion. MECOAT is compared with three other algorithms: particle swarm optimization, the genetic algorithm, and the bat algorithm. Experimental results indicate that MECOAT presents better results in terms of fitness value, peak signal-to-noise ratio (PSNR) and robustness in most cases.

12 citations
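The maximum entropy criterion in this family of methods is commonly Kapur's: the sum of the entropies of the class distributions that a threshold induces on the histogram. A single-threshold sketch with exhaustive search is shown below; COA comes in precisely because exhaustive search becomes impractical for multilevel thresholding. This is a generic sketch, not the paper's exact formulation.

```python
import math

def kapur_entropy(hist, t):
    """Kapur's criterion for one threshold t: sum of the entropies of the
    background (bins < t) and foreground (bins >= t) distributions."""
    total = sum(hist)
    def part_entropy(bins):
        w = sum(bins) / total          # class probability mass
        if w == 0:
            return 0.0
        h = 0.0
        for c in bins:
            if c:
                p = c / total / w      # probability within the class
                h -= p * math.log(p)
        return h
    return part_entropy(hist[:t]) + part_entropy(hist[t:])

def best_threshold(hist):
    """Exhaustive single-threshold search; a metaheuristic such as COA
    replaces this when several thresholds make enumeration too expensive."""
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))
```

For k thresholds over a 256-bin histogram the search space grows combinatorially, which is why population-based optimizers like COA, PSO, GA and the bat algorithm are used to maximize this criterion instead.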


Proceedings ArticleDOI
01 Nov 2015
TL;DR: A model-driven techno-economic approach is introduced in this paper targeting the estimation of economic parameters of cloud service deployment, which is able to assist decision support procedures for cloud users, cloud providers and cloud brokers.
Abstract: Cloud computing has succeeded in transforming the ICT industry, making software and hardware services even more accessible to businesses and establishing an environment for rapid innovation. Since cloud computing is an innovative business model whose deployment is accompanied by huge investments, a thorough, multilevel cost analysis of provided services is vital. Such an analysis should focus, among others, on demand forecasting for computational resources and financial assessment of cloud computing investments, estimating crucial economic parameters such as Net Present Value (NPV), Return on Investment (ROI) and Total Cost of Ownership (TCO). In this context, a model-driven techno-economic approach is introduced in this paper targeting the estimation of economic parameters of cloud service deployment, which is able to assist decision support procedures for cloud users, cloud providers and cloud brokers. SysML is adopted as a modeling language for describing cloud architectures as systems-of-systems (SoS), emphasizing cost properties. As an example, the TCO for cloud infrastructure and services is explored. TCO properties are incorporated into SysML cloud models, while cloud providers are facilitated in computing TCO.

12 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: This paper provides a single unified survey that dissects all IEEE 802.15.4 MAC layer attacks known to date and addresses the interrelationships and differences between the attacks following their classification.
Abstract: IEEE 802.15.4 Wireless Sensor Networks (WSNs) possess additional vulnerabilities in comparison with traditional wired and wireless networks, such as the broadcast nature of the wireless medium, dynamic network topology, resource-constrained nodes, lack of physical safeguards in nodes, and immense network scale. These inherent vulnerabilities present opportunities for attackers to launch novel and more complicated attacks against such networks. For this reason, a thorough investigation of the attacks which can be launched against WSNs is required. This paper provides a single unified survey that dissects all IEEE 802.15.4 MAC layer attacks known to date. While the majority of existing references investigate the motive and behavior of each attack separately, this survey addresses the interrelationships and differences between the attacks following their classification. The survey defines two main classifications for the attacks by combining and refining existing classifications of the attacks obtained from external references. The defined classifications are further extended by including additional attacks, which have been left out by other references, within the classifications. The authors' opinions and comments regarding the placement of the attacks within the defined classifications are also provided. A comparative analysis between the classified attacks is then performed with respect to a set of evaluation criteria defined within the paper.

11 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: After devising a blueprint for Big Data enhanced SIM based on the latest research, the system architecture and the resulting implementation are presented and can be adopted in organizations or Smart City contexts.
Abstract: Network services are confronted with a growing amount and diversity of attacks. The detection of such intrusion attempts, however, is getting more complex. This is mainly a result of more sophisticated attacks and a consequence of the more ubiquitous and overall more complex IT ecosystem. The resulting rapidly increasing network traffic makes it extremely hard to detect and prevent attacks in traditional ways. This paper proposes Security Information Management (SIM) enhancements considering Big Data Analysis principles. In the context of Cyber-Security, the blueprint and implementation presented can be adopted in organizations or Smart City contexts. After devising a blueprint for Big Data enhanced SIM based on the latest research, the system architecture and the resulting implementation are presented. The blueprint and implementation have been field-tested in a real-world large-scale SIM environment and evaluated with real network security logs. Our research is timely, since the application of Big Data principles to SIM environments has been rarely investigated so far, and there exists the need for a general concept of enhancement possibilities.

11 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: A blind digital multi-watermarking algorithm for the copyright protection and authentication of medical images is proposed; results show that the robust watermark survived many intentional and non-intentional attacks, while the fragile watermarks are sensitive to any slight tampering with the medical images.
Abstract: In this paper, we propose a blind digital multi-watermarking algorithm for the copyright protection and authentication of medical images. When medical images are transmitted and stored in hospitals, strict security, confidentiality and integrity are required to protect the medical information from illegal distortion and reproduction. Medical image watermarking requires extreme care when embedding watermark information in the medical images, because the additional information must not affect the image quality, as this may cause a wrong diagnosis. The proposed algorithm contains one robust watermark for the ownership of the image and two fragile watermarks for checking the authenticity. The first two watermarks are embedded in the wavelet domain, while the third watermark is embedded in the spatial domain. In the proposed algorithm, the medical image is divided into two regions, called the Region of Interest and the Region of Non-Interest, and all three watermarks are embedded in the Region of Non-Interest. The new medical watermarking technique offers high peak signal-to-noise ratio and structural similarity index measure values. The technique was successfully tested on a variety of medical images, and the experimental results show that the robust watermark survived many intentional and non-intentional attacks, while the fragile watermarks are sensitive to any slight tampering with the medical images.

11 citations


Proceedings ArticleDOI
01 Nov 2015
TL;DR: This paper introduces and discusses the modeling of a multi-objective problem, with consideration of the aspects that affect these models, aiming to maximize energy efficiency and packet throughput.
Abstract: Health monitoring systems have been an important application in the last decade. There are many types of health sensors that make such systems practical, such as wearable sensors, bed sensors and ECG sensors. Primarily, these sensors operate on the license-free 2.4-GHz industrial, scientific, and medical (ISM) band. This feature makes such systems not only easily applicable, but also potentially vulnerable to intrusion and interference from other appliances that work on this band, such as Wi-Fi devices and microwave ovens. In this paper, we introduce and discuss the modeling of a multi-objective problem, with consideration of the aspects that affect these models. We try to maximize energy efficiency and packet throughput. The work has been tested using three evolutionary algorithms: SPEA-II, NSGA-II and OMOPSO.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: The proposed technique is based on the use of Time Difference of Arrival (TDOA) along with Particle Filter (PF) method which is demonstrated to be capable of accurate detection since each particle in the PF represents a state which translates into the moving node location in the case of TDOA localization.
Abstract: This paper presents a technique for enhanced localization of moving nodes in Wireless Sensor Networks (WSNs). The proposed technique is based on the use of Time Difference of Arrival (TDOA) along with Particle Filter (PF) method which is demonstrated to be capable of accurate detection since each particle in the PF represents a state which translates into the moving node location in the case of TDOA localization. The proposed technique outperforms other filtering methods, such as the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), which are restricted to Gaussian distribution of the moving nodes. Numerical simulations verify the accuracy and robustness of the proposed TDOA-PF localization technique.
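A minimal sketch of the core particle-filter update for TDOA localization: each particle is a candidate node position, weighted by the Gaussian likelihood of its predicted TDOAs against the measured ones, then resampled. The anchor layout, the noise level `sigma`, and the acoustic propagation speed are illustrative assumptions, not the paper's setup.

```python
import math
import random

C = 343.0  # propagation speed (m/s); acoustic value, for illustration only

def tdoa(p, anchors):
    """Time differences of arrival at point p, relative to the first anchor."""
    d = [math.dist(p, a) for a in anchors]
    return [(di - d[0]) / C for di in d[1:]]

def pf_step(particles, anchors, measured, sigma=1e-4):
    """One particle-filter update: weight each particle by the Gaussian
    likelihood of its predicted TDOAs, then resample by those weights."""
    weights = []
    for p in particles:
        err = sum((m - t) ** 2 for m, t in zip(measured, tdoa(p, anchors)))
        weights.append(math.exp(-err / (2 * sigma ** 2)))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resampling concentrates particles near the likely node location
    return random.choices(particles, weights=weights, k=len(particles))
```

Because the particle set can represent any posterior shape, this update handles the non-Gaussian position distributions that limit EKF and UKF, at the cost of evaluating the likelihood once per particle.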

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This paper surveys the existing research and focuses on characterizing and modeling the in vivo wireless channel and contrasting this channel with the other familiar channels, as well as addressing current challenges and future research areas for in vivo communications.
Abstract: The emerging in vivo communication and networking system is a prospective component in advancing health care delivery and empowering the development of new applications and services. In vivo communications construct wirelessly networked cyber-physical systems of embedded devices to allow rapid, correct and cost-effective responses under various conditions. This paper surveys the existing research investigating the state of the art of in vivo communication. It also focuses on characterizing and modeling the in vivo wireless channel and contrasting this channel with other familiar channels. MIMO in vivo is also considered in this overview, since it significantly enhances the performance gain and data rates. Furthermore, this paper addresses current challenges and future research areas for in vivo communications.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This study, unlike previous studies in the literature, focuses on the performance of a Hammerstein block model in which a Second Order Volterra (SOV) model is preferred over a Memoryless Polynomial Nonlinear (MPN) model as the nonlinear part.
Abstract: An attempt has been made in this paper to present a performance analysis of a Hammerstein model for the system identification area. The Hammerstein model block structure is formed by a cascade of linear and nonlinear parts. Unlike previous studies in the literature, this study focuses on the performance of a Hammerstein block model in which a Second Order Volterra (SOV) model is preferred over a Memoryless Polynomial Nonlinear (MPN) model as the nonlinear part. In simulations, different systems are identified by the proposed Hammerstein model, which is optimized with classical and heuristic algorithms. Its performance is also compared with that of different models.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: A remote assistance collaboration tool combining wearable technologies and augmented reality is presented to allow a specialist from an engineering services company to remotely assist in real time a maintenance operator located at an operation site.
Abstract: Nowadays companies are rapidly moving towards transforming their business into digital businesses. Being digital often means exploiting emerging technologies to enable new ways of serving customers. This paper presents two digital tools that have been designed to drive innovation in maintenance operations. A remote assistance collaboration tool combining wearable technologies and augmented reality is presented to allow a specialist from an engineering services company to remotely assist in real time a maintenance operator located at an operation site. A visible and open collaborative business model built to model the B2B process between the engineering services and the operation companies is presented to monitor compliance with the service level agreement contract and provide a basis for improving customer-facing interactions. This paper provides an example of digital innovation in maintenance operations that integrates new technologies with business process models.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: Simulation results show that the quantized quad trees and entropy coding improve the compression ratio and quality over fractal image compression with the range-block and iterations technique.
Abstract: Fractal compression is a lossy compression method for digital images based on fractals rather than pixels, and is best suited to textures and natural images. It exploits the self-similarity of various parts of an image, relying on the fact that parts of an image often resemble other parts of the same image. However, it takes a long encoding time and affects image quality. This paper introduces an improved model integrating quantized quad trees and entropy coding for fractal image compression. The quantized quad-tree method divides the quantized original gray-level image into various blocks depending on a threshold value as well as the properties of the features present in the image. Entropy coding is applied to improve the compression quality. Simulation results show that the quantized quad trees and entropy coding improve the compression ratio and quality over fractal image compression with the range-block and iterations technique. Different quantitative measures can be obtained by passing images of different formats and dimensions.
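A quad-tree split of the kind the model builds on can be sketched as follows: a square block is subdivided into four quadrants while its intensity spread exceeds a threshold, so smooth regions stay as large blocks and detailed regions get small ones. The quantization and entropy-coding stages are omitted, and the intensity-range split test is an assumption, since the paper's exact criterion is not given here.

```python
def quadtree(img, x, y, size, threshold, min_size=2):
    """Split a square region of a 2D gray-level image into four quadrants
    while its intensity range exceeds the threshold; returns the list of
    leaf blocks as (x, y, size) tuples."""
    vals = [img[r][c] for r in range(y, y + size) for c in range(x, x + size)]
    if size <= min_size or max(vals) - min(vals) <= threshold:
        return [(x, y, size)]  # homogeneous enough: keep as one block
    h = size // 2
    blocks = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        blocks += quadtree(img, x + dx, y + dy, h, threshold, min_size)
    return blocks
```

Fractal encoding then searches for a matching domain block per leaf; fewer, larger leaves in smooth areas is what cuts the encoding time.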

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This work describes the development of a web recommender system implementing both collaborative filtering and content-based filtering; it supports two different working modes, either sponsored or related, depending on whether websites are to be recommended based on a list of ongoing ad campaigns or on the user's preferences.
Abstract: This work describes the development of a web recommender system implementing both collaborative filtering and content-based filtering. Moreover, it supports two different working modes, either sponsored or related, depending on whether websites are to be recommended based on a list of ongoing ad campaigns or on the user's preferences. Novel recommendation algorithms are proposed and implemented, which fully rely on set operations such as union and intersection in order to compute the set of recommendations to be provided to end users. The recommender system is deployed over a real-time big data architecture designed to work with the Apache Hadoop ecosystem, thus supporting horizontal scalability, and is able to provide recommendations as a service by means of a RESTful API. The performance of the recommender is measured, resulting in the system being able to provide dozens of recommendations in a few milliseconds in a single-node cluster setup.
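The set-operation flavor of such recommendation algorithms can be illustrated with plain Python sets: candidates are scored by the intersection of their tags with the user's interests, and already-visited sites are excluded in the style of a set difference. The tag model and function signature are illustrative; the paper's actual algorithms and its sponsored-mode inputs are not reproduced here.

```python
def recommend(user_tags, sites, visited, k=3):
    """Rank unvisited sites by tag overlap (set intersection) with the
    user's interests; `sites` maps a site name to its set of tags.
    In a sponsored mode, the same scoring would run over the tag sets
    of ongoing ad campaigns instead of generic related sites."""
    scored = [
        (len(user_tags & tags), name)   # intersection size = score
        for name, tags in sites.items()
        if name not in visited          # set-difference style filter
    ]
    scored.sort(key=lambda t: (-t[0], t[1]))  # best overlap first, then name
    return [name for score, name in scored[:k] if score > 0]
```

Because intersection and union are cheap and trivially parallelizable over shards, this style of scoring maps well onto a horizontally scalable backend.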

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This study is focused on developing models for identifying epilepsy convulsions in order to enhance the anamnesis of the patient by learning a Fuzzy Rule Based System using Ant Colony Optimization.
Abstract: This study is focused on developing models for identifying epilepsy convulsions in order to enhance the anamnesis of the patient. A wearable device with a built-in 3D accelerometer is placed on the dominant wrist to gather data from participants. Based on the data gathered from the sensor, a Fuzzy Rule Based System is learned. On the one hand, statistical data from a set of patients is used to set up the partition data base; on the other hand, the fuzzy rule base is learned using Ant Colony Optimization. Results show this approach is faster and easier to learn than previous approaches. Introducing minor changes in the fuzzy reasoning produces even more robust models. The presented study shows a valid research path for the identification of epilepsy convulsions.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This work presents a practical and effective device-centric WiFi/cellular link aggregation mechanism that can boost the download bit rate and minimize the download delay and does extensive testing using an experimental implementation over Android smartphones to demonstrate the practical feasibility and quantify its performance gains under different operational conditions.
Abstract: One key enhancement in recent 3GPP cellular standard releases is WiFi/cellular network inter-operation and integration with protocols and techniques that can optimize target performance metrics. 3GPP specifications focus on network-centric solutions that require the involvement of the cellular operator in the WiFi/cellular connection and mobility management procedures. Even though this provides notable gains due to the centralized control, its implementation requires network upgrades and is relatively complex which limits the possibility of quick market penetration. In this work, we present a practical and effective device-centric WiFi/cellular link aggregation mechanism that can boost the download bit rate and minimize the download delay. The proposed mechanism is self-adaptive and achieves optimized aggregated download bit rate without the need for link quality estimation information, changes to the WiFi or cellular standards, or proxy server configuration. Moreover, it can be customized to support different service classes as shown for file downloading and video streaming applications. We do extensive testing using an experimental implementation over Android smartphones in order to demonstrate the practical feasibility of the proposed mechanism and to quantify its performance gains under different operational conditions.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: It is shown that internal knowledge spillovers were the most important determinant of the chemical firms' innovation activity during the monitored period, and that R&D intensity, collaboration on innovation and firm size were also important determinants.
Abstract: A number of studies are concerned with predicting innovation activity, because companies' innovation activity is one of the fundamental determinants of their competitiveness. However, most studies use a linear (logistic) regression model for their analysis, which is not able to take into account all the recursive terms concerning a company's innovation activity. Therefore, in this report we demonstrate the use of ensembles of decision trees to model the intrinsic nonlinear characteristics of the innovation process. We apply this method for predicting innovation activity to chemical companies. We show that internal knowledge spillovers were the most important determinant of the chemical firms' innovation activity during the monitored period. Furthermore, R&D intensity, collaboration on innovation and firm size were also important determinants.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: The capabilities of keyloggers to capture keystrokes from an on-screen (virtual) keyboard are investigated, and it is demonstrated that different keyloggers respond very differently to on-screen keyboard input.
Abstract: Software keyloggers have been used for decades to spy on computer users, tracking activity or gathering sensitive information. Their primary focus has been to capture keystroke data from physical keyboards. However, since the release of Microsoft Windows 8 in 2012, touchscreen personal computers have become much more prevalent, introducing the use of on-screen keyboards which afford users an alternative keystroke input method. Smart cities are designed to enhance and improve the quality of life of city populations while reducing cost and resource consumption. As new technology is developed to create safe, renewable, and sustainable environments, we introduce additional risk that mission-critical data and access credentials may be stolen via malicious keyloggers. In turn, cyber-attacks targeting critical infrastructure using this data could result in widespread catastrophic systems failure. In order to protect society in the age of smart cities, it is vital that security implications are considered as this technology is implemented. In this paper, we investigate the capabilities of keyloggers to capture keystrokes from an on-screen (virtual) keyboard and demonstrate that different keyloggers respond very differently to on-screen keyboard input. We suggest a number of future studies that could be performed to further understand the security implications presented by on-screen keyboards to smart cities as they relate to keyloggers.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This work proposes a semantic multimodal architecture for adapting multimedia documents based on a distributed implementation of W3C's Multimodal Architecture and Interfaces applied to ubiquitous computing to offer a flexible way to users to satisfy their preferences and to make content more accessible and interactive.
Abstract: Multimedia documents can be accessed at any time and anywhere on a wide variety of devices, such as laptops, tablets and smartphones. The heterogeneity of devices and user preferences has raised a serious issue for multimedia content adaptation. Our research focuses on multimedia document adaptation with a strong focus on interaction with users and exploration of multimodality. We propose a semantic multimodal architecture for adapting multimedia documents, based on a distributed implementation of W3C's Multimodal Architecture and Interfaces applied to ubiquitous computing. The proposed architecture has the great advantage of offering users a flexible way to satisfy their preferences and of making content more accessible and interactive.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: This paper presents novel cloud-based access control criteria, including a list of properties and factors that can be utilized for assessing and evaluating access control systems in cloud computing environments.
Abstract: An access control system is one of the fundamental security requirements of cloud computing, needed to prevent unauthorized access to systems and the infiltration of organizational assets. Although various access control models and policies, such as Mandatory Access Control (MAC) and Role Based Access Control (RBAC), have been developed for different environments, these models may not fulfil the cloud's access control requirements. This is because cloud computing has a diverse set of users with different sets of security requirements. It also has unique security challenges, such as multi-tenant hosting and heterogeneity of security policies, rules and domains. This paper illustrates the basic concepts of access control models and cloud computing. It reviews access control system primitives and their methodologies. It presents novel cloud-based access control criteria, with a list of properties and factors that can be utilized for assessing and evaluating access control systems in cloud computing environments.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: The main objective of this system is to significantly reduce the temperature inside a parked vehicle without starting the engine or using the vehicle's own A/C system; the cooling effect can be significantly boosted by increased power and improved heat exchange.
Abstract: This paper presents the design and implementation of a novel, low-cost solar-powered car cooling system for idle parked cars. This system is targeted towards Gulf countries such as the United Arab Emirates, where high ambient temperatures may cause interior damage to the car and pose a significant burn threat for young, disabled or elderly passengers. The main objective of this system is to significantly reduce the temperature inside the parked vehicle without starting the engine or using the vehicle's own A/C system. The proposed cooling prototype utilizes a thermoelectric element to cool the air inside the car and a micro-fan system to accelerate the heat exchange. The system is powered by an external battery and recharged by top-mounted solar panels. The system's connectivity is provided by Wi-Fi technology operated via mobile phones, enabling remote monitoring and control of the temperature inside the car. The initial tests carried out on the car model indicate modest cooling effects in the range of 4 °C, but these can be significantly boosted by increased power and improved heat exchange.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: A novel assessment index based on the entropy method is proposed that considers error rate and computation time together to evaluate the DCT-HMM system comprehensively; the overlap between consecutive blocks can then be optimized by selecting the value that yields the best assessment index.
Abstract: The Hidden Markov Model trained with Discrete Cosine Transform features (DCT-HMM) is a well-established method for face recognition. However, the traditional criteria for judging whether a model is good are usually one-sided: researchers typically aim either (1) to reduce the error rate or (2) to save computation time, but not both. This paper proposes a novel assessment index based on the entropy method that considers these two indicators together to evaluate the DCT-HMM system comprehensively. Moreover, since block sampling is an important step in the DCT-HMM pipeline, the overlap between consecutive blocks can be optimized by selecting the value that yields the best assessment index.
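The entropy-weight method the abstract alludes to can be sketched as follows. This is a generic formulation, not necessarily the paper's exact index, and the candidate overlap settings with their (error rate, time) measurements are fabricated for illustration:

```python
# Entropy-weight sketch for combining error rate and computation time
# into one assessment score. Candidate overlaps and measurements are
# illustrative; the paper's exact formulation may differ.
import math

def column_shares(matrix):
    """Normalize each criterion column to shares summing to 1."""
    cols = list(zip(*matrix))
    return [[v / sum(col) for v in col] for col in cols]  # [criterion][alt]

def entropy_weights(matrix):
    """Weight each criterion by its information divergence (1 - entropy)."""
    n = len(matrix)
    shares = column_shares(matrix)
    div = [1 - (-sum(p * math.log(p) for p in col if p > 0) / math.log(n))
           for col in shares]
    total = sum(div)
    return [d / total for d in div]

# overlap (pixels) -> (error_rate, computation_time_s), made-up numbers:
candidates = {0: (0.12, 1.0), 4: (0.08, 1.6), 8: (0.07, 2.9)}
keys = list(candidates)
rows = [candidates[k] for k in keys]
w = entropy_weights(rows)
shares = column_shares(rows)
# Both criteria are costs (lower is better), so a lower weighted share wins.
scores = {k: sum(w[j] * shares[j][i] for j in range(len(w)))
          for i, k in enumerate(keys)}
best = min(scores, key=scores.get)
print("weights:", w, "best overlap:", best)
```

Criteria whose values spread more across the candidates receive larger entropy weights, which is how the method balances error rate against computation time without hand-tuned coefficients.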

Proceedings ArticleDOI
01 Nov 2015
TL;DR: It is observed how smart devices coupled with big data analytics enable insightful actions, which are urgently needed to reduce operational redundancy and promote automation in the envisioned smart environments.
Abstract: An exponential growth in the availability of data from various sources has enabled large-scale adoption of data-driven decision making. Much of today's data was generated in recent years, and this has been complemented by a substantial reduction in data-storage costs; the analysis of collected data can therefore assist decision making in our future smart environments. Here, sustainability refers to large-scale, robust data infrastructures for future organizations and cities that provide the basis for insightful actions with respect to defined goals. New-age databases can handle both structured and unstructured data with ease; pooling this with statistical analytical tools promises to be an effective recipe for catering to the needs of decision making and promoting sustained self-sufficiency in smart environments. The actionable knowledge thus obtained can lead to an improved understanding of human interaction with the systems driving smart environments. Our research analyses some of the technologies with which meaningful, actionable insights can be obtained from data captured by the various underlying systems in a smart environment. A combination of SAP HANA, Hadoop, and R was deployed in our test bed. We observe how smart devices coupled with big data analytics enable insightful actions that help reduce operational redundancy and promote automation in the envisioned smart environments.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: Based on the in-depth performance evaluation, it is concluded that PiMail is fully capable of providing email services to individuals and SMEs (small and medium-sized enterprises).
Abstract: Third-party email services, whether free webmail or a corporate service hosted somewhere on the Internet, require sacrificing control and flexibility over personal communication. This makes emails vulnerable to privacy leaks through unauthorized access and inspection. In this paper, we propose PiMail, an affordable, lightweight and energy-efficient private email infrastructure for individuals and small enterprises, based on the Raspberry Pi. We also deployed a testbed implementation of PiMail using Raspbian OS, the Postfix mail transfer agent (MTA), ClamAV antivirus and SpamAssassin. Based on our in-depth performance evaluation, we conclude that PiMail is fully capable of providing email services to individuals and SMEs (small and medium-sized enterprises). To the best of our knowledge, this is the first extensive study to evaluate and benchmark the efficacy of the Raspberry Pi as a low-cost, portable mail server.
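One concrete way to run the kind of performance evaluation described is to mine per-message delivery delays out of the Postfix log. The sample log lines below are fabricated for illustration; only the `delay=` field format follows Postfix's standard delivery log output:

```python
# Hypothetical helper for a PiMail-style evaluation: extract Postfix
# per-message delivery delays for successfully sent mail from log lines.
# Sample lines are fabricated; the "delay=" field follows Postfix's format.
import re

DELAY_RE = re.compile(r"delay=([\d.]+).*status=sent")

def sent_delays(log_lines):
    """Return the delay (seconds) of every successfully sent message."""
    return [float(m.group(1)) for line in log_lines
            if (m := DELAY_RE.search(line))]

sample = [
    "Nov  1 12:00:01 pi postfix/smtp[314]: 9C1A2: to=<a@example.com>, "
    "relay=mx.example.com, delay=1.2, status=sent (250 2.0.0 OK)",
    "Nov  1 12:00:05 pi postfix/smtp[314]: 9C1B3: to=<b@example.com>, "
    "relay=mx.example.com, delay=0.8, status=sent (250 2.0.0 OK)",
    "Nov  1 12:00:09 pi postfix/smtp[314]: 9C1C4: to=<c@example.com>, "
    "relay=mx.example.com, delay=4.5, status=deferred (timed out)",
]
delays = sent_delays(sample)
print("sent:", len(delays), "mean delay:", sum(delays) / len(delays))
```

Deferred deliveries are excluded on purpose, so the mean reflects completed deliveries only; on a resource-constrained board like the Pi, that distinction matters for benchmarking.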

Proceedings ArticleDOI
01 Nov 2015
TL;DR: A comprehensive smart unified solution for people with disabilities at work place is developed that provides multiple input and output methods, so that various kinds of impaired persons can be assisted with a single solution.
Abstract: Advancements in human-computer interaction (HCI) and information and communication technology (ICT) have resulted in the development of various assistive technologies, especially for People with Disabilities (PWD). Many assistive technologies and applications are available these days; however, most focus on a particular type of impairment, and very few address the workplace environment. This research therefore develops a comprehensive, smart, unified solution for people with disabilities in the workplace. The developed solution provides multiple input and output methods, so that persons with various kinds of impairment can be assisted by a single solution. The system offers essential features for obtaining help information from the environment, and it helps impaired people communicate with others through a flexible interface that adapts to their requirements. The solution supports a wide spectrum of PWD groups with various combinations of disabilities. Users can manage the system interface through multi-modal interaction and commanding: a speech recognition engine, text-to-speech, a microphone, and mouse/keyboard.

Proceedings ArticleDOI
01 Nov 2015
TL;DR: By combining two categorization models, HFACS and ITSM processes, in a two-dimensional matrix, it was found that most of the error pathways were related to change planning; such analysis is needed to prevent the recurrence of incidents.
Abstract: Incident trend analysis is a practice in which historical incident records and reports are investigated to identify patterns or trends in incident root causes. It is needed to prevent the recurrence of incidents and thereby improve the quality of service in ICT infrastructures. The research problem of this study is how quality of service could be improved by analysing incidents whose root causes relate to human error. Human-error-related root causes of IT service incidents are classified with the Human Factors Analysis and Classification System (HFACS) and also according to IT service management (ITSM) processes. Eighteen incident reports containing enough information to track the contributing conditions behind the harmful human act were investigated. As a result, three major types of event sequences were identified; these sequences describe error pathways leading to the active failure. By combining the two categorization models, HFACS and ITSM processes, in a two-dimensional matrix, we found that most of the error pathways were related to change planning.
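The two-dimensional categorization described can be sketched as a simple cross-tabulation. The category names and toy incident records below are illustrative assumptions, not the paper's 18 reports:

```python
# Sketch of a two-dimensional (HFACS level x ITSM process) incident matrix.
# Category labels and incident data are invented for illustration.
from collections import Counter

incidents = [
    ("skill-based error", "Change Management"),
    ("decision error",    "Change Management"),
    ("decision error",    "Incident Management"),
    ("perceptual error",  "Change Management"),
]

matrix = Counter(incidents)  # (hfacs_category, itsm_process) -> count

# Column totals reveal which ITSM process most error pathways relate to.
by_process = Counter(proc for _, proc in incidents)
top_process, top_count = by_process.most_common(1)[0]
print(matrix)
print("most error pathways relate to:", top_process, f"({top_count})")
```

Collapsing the matrix along either axis gives the per-model trend, while individual cells expose combinations such as decision errors during change planning.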

Proceedings ArticleDOI
01 Nov 2015
TL;DR: An aqua agent-based model is built that simulates contagious disease transmission as a result of interactions between fish, pathogens and their environment in a time-space context; the model can be applied to different applications thanks to its ability to show disease progression based on individuals' interactions.
Abstract: Disease in fish populations is a dynamic phenomenon; oscillations in occurrence and impact depend on the interactions among fish (host), pathogen, and environment. While most previous models of disease dynamics assume homogeneous populations, we build an aqua agent-based model that simulates contagious disease transmission as a result of interactions between fish, pathogens and their environment in a time-space context. This heterogeneous model gives a realistic representation of the system and naturally tackles the stochastic nature of the infectious process. The model combines the most important factors in the fish disease process: environmental factors, fish swimming behaviour and infection-process parameters. Simulation experiments are designed to explore the impact of sea currents and swimming behaviour on disease dynamics. The results show that the attack rate increases when the sea current speed decreases and when the fish swim in a regular pattern (circularly or in schools). The model can be applied to different problems thanks to its ability to show disease progression based on individuals' interactions, which can help in understanding disease-spread dynamics and lead to better steps towards preventing and controlling a disease outbreak.
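A toy version of such an agent-based model can be written in a few lines: fish agents drift with a sea current plus random swimming, and infection spreads by proximity. All parameters below are illustrative assumptions, not the paper's calibrated values:

```python
# Toy agent-based fish disease model: drift (sea current) + random motion,
# with proximity-based transmission. All parameters are illustrative.
import random

random.seed(42)
N, STEPS = 60, 50
INFECT_RADIUS, P_INFECT = 2.0, 0.3
CURRENT = (0.5, 0.0)  # sea current drift per time step

fish = [{"x": random.uniform(0, 20), "y": random.uniform(0, 20),
         "infected": i == 0} for i in range(N)]  # one index case

for _ in range(STEPS):
    for f in fish:  # movement: current drift plus random swimming
        f["x"] += CURRENT[0] + random.uniform(-1, 1)
        f["y"] += CURRENT[1] + random.uniform(-1, 1)
    carriers = [f for f in fish if f["infected"]]
    for f in fish:  # transmission: chance of infection per nearby carrier
        if not f["infected"] and any(
                (f["x"] - c["x"])**2 + (f["y"] - c["y"])**2
                <= INFECT_RADIUS**2 and random.random() < P_INFECT
                for c in carriers):
            f["infected"] = True

attack_rate = sum(f["infected"] for f in fish) / N
print("attack rate:", attack_rate)
```

Sweeping `CURRENT` or replacing the random motion with a schooling rule is how an experiment like the paper's (current speed and swimming pattern versus attack rate) would be set up on top of this skeleton.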