
Showing papers in "Human-centric Computing and Information Sciences in 2018"


Journal ArticleDOI
TL;DR: A systematic and detailed survey of malware detection mechanisms using data mining techniques, which classifies malware detection approaches into two main categories: signature-based methods and behavior-based detection.
Abstract: Data mining techniques have received considerable attention for malware detection over the past decade. The battle between security analysts and malware authors is everlasting as technology evolves. Existing methodologies are inadequate because the evolving, complex nature of malware changes quickly, making it ever harder to detect. This paper presents a systematic and detailed survey of malware detection mechanisms that use data mining techniques. In addition, it classifies malware detection approaches into two main categories: signature-based methods and behavior-based detection. The main contributions of this paper are: (1) providing a summary of the current challenges related to malware detection approaches in data mining, (2) presenting a systematic and categorized overview of the current approaches to machine learning mechanisms, (3) exploring the structure of the significant methods in the malware detection approach, and (4) discussing the important factors for classifying malware approaches in data mining. The detection approaches are compared with each other according to these factors, and their advantages and disadvantages are discussed in terms of data mining models, evaluation methods, and proficiency. This survey helps researchers gain a general comprehension of the malware detection field and helps specialists conduct follow-up investigations.

272 citations
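The two categories the survey distinguishes can be illustrated with a toy sketch (the hashes, API names, and threshold below are hypothetical, not taken from the paper): signature-based detection matches a sample against a database of known fingerprints, while behavior-based detection scores observed runtime activity.

```python
import hashlib

# Hypothetical signature database: hashes of known-bad binaries.
SIGNATURES = {hashlib.sha256(b"evil-payload").hexdigest()}

def signature_detect(binary):
    """Signature-based: flag a sample whose hash matches a known signature."""
    return hashlib.sha256(binary).hexdigest() in SIGNATURES

# Hypothetical list of API calls often abused by malware.
SUSPICIOUS_CALLS = {"CreateRemoteThread", "WriteProcessMemory", "SetWindowsHookEx"}

def behavior_detect(api_trace, threshold=2):
    """Behavior-based: flag a sample whose runtime trace contains
    at least `threshold` distinct suspicious calls."""
    return len(SUSPICIOUS_CALLS.intersection(api_trace)) >= threshold

print(signature_detect(b"evil-payload"))
print(behavior_detect(["WriteProcessMemory", "CreateRemoteThread", "ReadFile"]))
```

Real systems replace the hash set with large signature databases and the call counter with learned classifiers over behavioral features, which is where the data mining techniques surveyed in the paper come in.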


Journal ArticleDOI
TL;DR: An energy-efficient sleep scheduling mechanism with similarity measure for wireless sensor networks (ESSM) is proposed, which will schedule the sensors into the active or sleep mode to reduce energy consumption effectively.
Abstract: In wireless sensor networks, a high density of node distribution results in transmission collisions and energy dissipation from redundant data. To resolve these problems, an energy-efficient sleep scheduling mechanism with a similarity measure for wireless sensor networks (ESSM) is proposed, which schedules the sensors into active or sleep mode to reduce energy consumption effectively. Firstly, the optimal competition radius is estimated to organize all sensor nodes into several clusters to balance energy consumption. Secondly, according to the data collected by member nodes, a fuzzy matrix is obtained to measure the degree of similarity, and a correlation function based on fuzzy theory is defined to divide the sensor nodes into different categories. Next, redundant nodes are selected and put into the sleep state in the next round, under the premise of ensuring the data integrity of the whole network. Simulations show that our method achieves better performance both in the proper distribution of clusters and in improving the energy efficiency of the network, while guaranteeing data accuracy.

95 citations
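The redundancy idea behind ESSM can be sketched in a few lines (a simplified stand-in for the paper's fuzzy-matrix similarity measure; the node IDs, readings, and tolerance are made up): nodes whose readings are close enough to an already-active node are put to sleep for the next round.

```python
# Hypothetical readings from clustered sensor nodes (node_id -> last sample).
readings = {"n1": 20.1, "n2": 20.2, "n3": 25.0, "n4": 20.15, "n5": 24.9}

def schedule(readings, tol=0.5):
    """Keep one representative per group of similar readings; sleep the rest.
    Similarity here is a simple absolute-difference test, standing in for the
    fuzzy correlation function used in the paper."""
    active, asleep = [], []
    for node, value in sorted(readings.items()):
        if any(abs(value - readings[a]) <= tol for a in active):
            asleep.append(node)   # redundant: a similar node is already awake
        else:
            active.append(node)   # novel reading: node stays awake
    return active, asleep

active, asleep = schedule(readings)
print(active)
print(asleep)
```

Because each sleeping node has an active node reporting a near-identical value, the sink still reconstructs the field while the redundant radios stay off.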


Journal ArticleDOI
TL;DR: A ‘Reference Roadmap’ of reliability and high availability in cloud computing environments was presented, and a big picture was proposed which was divided into four steps specified through four pivotal questions starting with the ‘Where?’, ‘Which?’, ‘When?’ and ‘How?’ keywords.
Abstract: Reliability and high availability have always been a major concern in distributed systems. Providing highly available and reliable services in cloud computing is essential for maintaining customer confidence and satisfaction and for preventing revenue losses. Although various solutions have been proposed for cloud availability and reliability, there are no comprehensive studies that completely cover all the different aspects of the problem. This paper presents a ‘Reference Roadmap’ of reliability and high availability in cloud computing environments. A big picture is proposed, divided into four steps specified through four pivotal questions starting with the ‘Where?’, ‘Which?’, ‘When?’ and ‘How?’ keywords. The desirable result of having a highly available and reliable cloud system can be gained by answering these questions. Each step of this reference roadmap addresses a specific concern of a particular portion of the issue. Two main research gaps are identified by this reference roadmap.

90 citations


Journal ArticleDOI
TL;DR: This study surveys recent trends in mobile Fintech payment services and categorizes them based on their service forms to suggest requirements and security challenges, so that better and more secure services can be provided in the future.
Abstract: Due to recent developments in IT technology, various Fintech technologies combining finance and technology are being developed. In particular, because of the rapidly growing online market and the proliferation of mobile devices, the need for mobile Fintech payment services that enable easy online and offline payment has increased. According to a 2013 report by the market research company Gartner, the purchase-related global mobile payment market was predicted to grow from $45.1 billion in 2012 to $224.3 billion in 2017, with average annual growth of 38%. This study surveys recent trends in mobile Fintech payment services and categorizes them based on their service forms to suggest requirements and security challenges, so that better and more secure services can be provided in the future. First, the study defines existing payment services and Fintech payment services by comparing them, and analyzes recent mobile Fintech payment services, classifying mobile Fintech payment service providers into hardware makers, operating system makers, payment platform providers, and financial institutions to show their common features. Finally, it defines requirements that mobile Fintech payment services must meet and security challenges that present and future mobile Fintech payment services will encounter, from the perspectives of mutual authentication, authorization, integrity, privacy, and availability. Through this study, it is expected that mobile Fintech payment services will develop into more secure services in the future.

88 citations


Journal ArticleDOI
Song Zhou1, Sheng Xiao1
TL;DR: The history of and most recent progress in the 3D face recognition research domain are summarized, and the frontier research results are introduced in three categories: pose-invariant recognition, expression-invariant recognition, and occlusion-invariant recognition.
Abstract: 3D face recognition has become a trending research direction in both industry and academia. It inherits advantages from traditional 2D face recognition, such as the natural recognition process and a wide range of applications. Moreover, 3D face recognition systems could accurately recognize human faces even under dim lights and with variant facial positions and expressions, in such conditions 2D face recognition systems would have immense difficulty to operate. This paper summarizes the history and the most recent progresses in 3D face recognition research domain. The frontier research results are introduced in three categories: pose-invariant recognition, expression-invariant recognition, and occlusion-invariant recognition. To promote future research, this paper collects information about publicly available 3D face databases. This paper also lists important open problems.

81 citations


Journal ArticleDOI
TL;DR: The Secure Authentication Management human-centric Scheme (SAMS) is proposed to authenticate mobile devices using blockchain for trusting resource information in the mobile devices that are participating in the MRM resource pool.
Abstract: The recent advances in information technology for mobile devices have increased the work efficiency of users, the mobility of compact mobile devices, and the convenience of location independence. However, mobile devices have limited computing power and storage capacity, so mobile cloud computing is being researched to overcome these limitations in mobile devices. Mobile cloud computing is divided into two methods: the use of external cloud services and the use of mobile resource management without a cloud server (MRM), which integrates the computing and storage resources of nearby mobile devices. Because mobile devices can freely participate in MRM, it is critical to have authentication technology to determine the correctness of information regarding resources. Conventional technologies require strong authentication techniques because they have vulnerabilities that can easily be tampered with via man-in-the-middle (MITM) attacks. This paper proposes the Secure Authentication Management human-centric Scheme (SAMS) to authenticate mobile devices using blockchain for trusting resource information in the mobile devices that are participating in the MRM resource pool. The SAMS forms a blockchain based on the resource information of the subordinate client nodes around the master node in the MRM. Devices in the MRM that have not been authorized through the SAMS cannot access or falsify data. To verify the SAMS for application with MRM, it was tested for data falsification by a malicious user accessing the SAMS, and the results show that data falsification is impossible.

66 citations
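A minimal hash-chain sketch conveys why SAMS-style blockchain authentication makes falsification detectable (this is a generic illustration, not the SAMS protocol itself; the field names and resource records are hypothetical): each block commits to the previous block's hash, so altering any resource record invalidates the chain.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash over a block's contents (excluding its own hash)."""
    payload = json.dumps({k: block[k] for k in ("index", "resource", "prev")},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, resource):
    """Add a block recording one device's resource information."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "resource": resource, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def valid(chain):
    """A chain is valid only if every block's hash and back-link check out."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if i and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append(chain, {"device": "phone-A", "cpu_cores": 4, "storage_gb": 32})
append(chain, {"device": "phone-B", "cpu_cores": 8, "storage_gb": 64})
print(valid(chain))                       # freshly built chain verifies
chain[0]["resource"]["cpu_cores"] = 99    # attempted falsification
print(valid(chain))                       # tampering is detected
```

The same property is what lets the MRM master node reject a client whose resource claims have been altered after the fact.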


Journal ArticleDOI
TL;DR: This paper presents the hypothesis that users with similar personalities are expected to display mutual behavioral patterns when cooperating through social networks; the resulting model was more accurate than previous studies that predicted personality from profile variables across the five factors.
Abstract: Online social networks have become popular ways for users to present themselves and to connect and share information with each other. Facebook is the most popular social network. Personality recognition is one of the new challenges for investigators of social networks. This paper presents the hypothesis that users with similar personalities display mutual behavioral patterns when cooperating through social networks. With the goal of recognizing personality by analyzing user activity on Facebook, we collected information about users' personality traits and their Facebook profiles, developing an application using the Facebook API. The participants of this study were 100 volunteer Facebook users. We asked the participants to complete the NEO personality questionnaire over a period of one month in May 2012. At the end of the questionnaire, a link asked participants to permit the application to access their profiles. Based on the collected data, classifiers were trained using different data mining techniques to recognize user personality from profiles alone, without any questionnaire. Comparing the classifiers' results, our proposed boosting-decision-tree model, with 82.2% accuracy, was more accurate than previous studies that predicted personality from profile variables across the five factors.

65 citations
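The boosting-decision-tree idea can be sketched with a tiny AdaBoost over one-feature decision stumps (a generic illustration, not the paper's model; the profile features and extraversion labels below are fabricated):

```python
import math

# Hypothetical profile features: (friend_count, statuses_per_week),
# with a binary extraversion label (+1 high, -1 low).
X = [(500, 12), (420, 9), (60, 1), (80, 2), (300, 7), (40, 0), (350, 10), (90, 1)]
y = [1, 1, -1, -1, 1, -1, 1, -1]

def stump(feature, threshold):
    """Weak learner: predicts +1 iff the chosen feature exceeds the threshold."""
    return lambda x: 1 if x[feature] > threshold else -1

def train_adaboost(X, y, rounds=3):
    n = len(X)
    w = [1.0 / n] * n                      # uniform sample weights to start
    candidates = [stump(f, t) for f in (0, 1) for t in {x[f] for x in X}]
    model = []
    for _ in range(rounds):
        def weighted_error(h):
            return sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        best = min(candidates, key=weighted_error)
        err = max(weighted_error(best), 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        # Re-weight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * best(xi)) for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
        model.append((alpha, best))
    return model

def predict(model, x):
    return 1 if sum(a * h(x) for a, h in model) > 0 else -1

model = train_adaboost(X, y)
train_acc = sum(predict(model, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(train_acc)
```

On real profile data the stumps would be full decision trees and the features far richer, but the weighted-vote structure is the same.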


Journal ArticleDOI
TL;DR: The feasibility of this inter-operable Rkt container in high performance applications is explored by running the HPL and Graph500 applications and its performance is compared with the commonly used container technologies such as LXC and Docker containers.
Abstract: Cloud computing is the driving power behind the current technological era. Virtualization is rightly referred to as the backbone of cloud computing. The impact of virtualization employed in high performance computing (HPC) has been much reviewed by researchers. The overhead in the virtualization layer was one of the reasons that hindered its application in the HPC environment. Recent developments in virtualization, especially OS-container-based virtualization, provide a solution that employs a lightweight virtualization layer and promises lower overhead. Containers are advantageous over virtual machines in terms of performance overhead, which is a major concern for both data-intensive and compute-intensive applications. Currently, several industries have adopted container technologies such as Docker. While Docker is widely used, it has certain pitfalls, such as security issues. The recently introduced CoreOS Rkt container technology overcomes these shortcomings of Docker. There has not been much research on how well the Rkt environment is suited to high performance applications. The differences in the stack of the Rkt containers suggest better support for high performance applications. High performance applications consist of CPU-intensive and data-intensive applications. The High Performance Linpack (HPL) library and Graph500 are commonly used computation-intensive and data-intensive benchmark applications, respectively. In this work, we explore the feasibility of the inter-operable Rkt container for high performance applications by running the HPL and Graph500 applications and compare its performance with commonly used container technologies such as LXC and Docker.

56 citations


Journal ArticleDOI
TL;DR: A user-centric framework based on four perspectives (socio-psychological, habitual, socio-emotional, and perceptual) is proposed and validated for a more cohesive understanding of users' susceptibility to social engineering.
Abstract: Social engineering is a growing source of information security concern. Exploits appear to evolve, with increasing levels of sophistication, in order to target multiple victims. Despite increased concern with this risk, there has been little research activity focused upon social engineering in the potentially rich hunting ground of social networks. In this setting, factors that influence users' proficiency in threat detection need to be understood if we are to build a profile of susceptible users, develop suitable advice and training programs, and generally help address this issue for those individuals most likely to become targets of social engineering in social networks. To this end, the present study proposes and validates a user-centric framework based on four perspectives: socio-psychological, habitual, socio-emotional, and perceptual. Previous research tends to rely on selected aspects of these perspectives and has not combined them into a single model for a more cohesive understanding of users' susceptibility.

55 citations


Journal ArticleDOI
TL;DR: Performance evaluation shows that MH-GEER minimizes energy depletion in distant clusters and ensures load balancing in a network, thus improving the network’s lifetime and stability compared with single-hop conventional LEACH protocol.
Abstract: Emerging technological advances in wireless communication and networking have led to the design of large scale networks and small sensor units with minimal power requirements and multifunctional processing. Though energy harvesting technologies are improving, the energy of sensors remains a scarce resource when designing routing protocols between sensor nodes and base station. This paper proposes a multi-hop graph-based approach for an energy-efficient routing (MH-GEER) protocol in wireless sensor networks which aims to distribute energy consumption between clusters at a balanced rate and thus extend networks’ lifespans. MH-GEER deals with node clustering and inter-cluster multi-hop routing selection. The clustering phase is built upon the centralized formation of clusters and the distributed selection of cluster heads similar to that of low-energy adaptive clustering hierarchy (LEACH). The routing phase builds a dynamic multi-hop path between cluster heads and the base station. Our strategy is about exploring the energy levels in the entire network and using these to select the next hop in a probabilistic, intelligent way. Performance evaluation shows that MH-GEER minimizes energy depletion in distant clusters and ensures load balancing in a network, thus improving the network’s lifetime and stability compared with single-hop conventional LEACH protocol.

48 citations
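The probabilistic, energy-aware next-hop selection in MH-GEER can be sketched as sampling in proportion to residual energy (a simplified reading of the idea; the cluster-head names and energy values are hypothetical): energy-rich heads relay more often, so depletion spreads out across the network.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical residual energy (joules) of candidate next-hop cluster heads.
candidates = {"CH1": 4.0, "CH2": 1.0, "CH3": 5.0}

def pick_next_hop(candidates):
    """Choose a next hop with probability proportional to residual energy."""
    heads = list(candidates)
    weights = [candidates[h] for h in heads]
    return random.choices(heads, weights=weights, k=1)[0]

# Over many routing decisions, load follows the energy distribution.
counts = {h: 0 for h in candidates}
for _ in range(10_000):
    counts[pick_next_hop(candidates)] += 1
print(counts)
```

Deterministically picking the single highest-energy head would concentrate load on it; the probabilistic choice is what balances depletion over time.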


Journal ArticleDOI
TL;DR: These three intelligent models allow the IoT devices in a smart home to cooperate with one another and can resolve the problems of network congestion and energy wastage by reducing unnecessary network tasks, using energy systematically according to IoT usage patterns in the smart home.
Abstract: Smart home and IoT-related technologies are developing rapidly, and various smart devices are being developed to help users enjoy a more comfortable lifestyle. However, existing smart homes are limited by a scarcity of operating systems that integrate the devices constituting the smart home environment. This is because these devices use independent IoT platforms developed by the brand or company that produced the device, built on self-service modules. A smart home that lacks an integrated operating system becomes an organizational hassle because the user must then manage each device individually. Furthermore, this leads to problems such as excessive traffic on the smart home network and energy wastage. To overcome these problems, it is necessary to build an integrated management system that connects IoT devices to each other. To efficiently manage IoT, we propose three intelligent models as IoT platform application services for a smart home: intelligence awareness target as a service (IAT), intelligence energy efficiency as a service (IE2S), and intelligence service TAS (IST). IAT manages the "things" stage. IAT uses intelligent learning to acquire a situational awareness of the data values generated by things (sensors) to collect data according to the environment. IE2S performs the role of a server (IoT platform) and processes the data collected by IAT. The server uses Mobius, an open-source platform that follows international standards, and the TensorFlow engine is used for data learning. IE2S analyzes and learns the users' usage patterns to help provide services automatically. IST helps to provide, control, and manage the service stage. These three intelligent models allow the IoT devices in a smart home to cooperate with one another. In addition, these intelligent models can resolve the problems of network congestion and energy wastage by reducing unnecessary network tasks, using energy systematically according to the IoT usage patterns in the smart home.

Journal ArticleDOI
TL;DR: This study proposes a new feature selection method, called query expansion ranking, which is based on query expansion term weighting methods from the field of information retrieval, and achieves consistently better performance than other feature selection methods.
Abstract: Sentiment analysis is about the classification of sentiments expressed in review documents. In order to improve the classification accuracy, feature selection methods are often used to rank features so that non-informative and noisy features with low ranks can be removed. In this study, we propose a new feature selection method, called query expansion ranking, which is based on query expansion term weighting methods from the field of information retrieval. We compare our proposed method with other widely used feature selection methods, including Chi square, information gain, document frequency difference, and optimal orthogonal centroid, using four classifiers: naive Bayes multinomial, support vector machines, maximum entropy modelling, and decision trees. We test them on movie and multiple kinds of product reviews for both Turkish and English languages so that we can show their performances for different domains, languages, and classifiers. We observe that our proposed method achieves consistently better performance than other feature selection methods, and query expansion ranking, Chi square, information gain, document frequency difference methods tend to produce better results for both the English and Turkish reviews when tested using naive Bayes multinomial classifier.
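One of the baselines the paper compares against, document frequency difference, shows the general shape of such feature-ranking methods (the review corpus below is fabricated; the paper's query expansion ranking uses a different, IR-derived term weighting): score each term by how unevenly it occurs across classes, then drop the low-ranked terms.

```python
from collections import Counter

# Tiny hypothetical review corpus: (tokens, label).
docs = [
    (["great", "plot", "loved", "it"], "pos"),
    (["great", "acting", "loved", "film"], "pos"),
    (["boring", "plot", "hated", "it"], "neg"),
    (["boring", "film", "hated", "acting"], "neg"),
]

def df(label):
    """Per-class document frequency of each term."""
    c = Counter()
    for tokens, lab in docs:
        if lab == label:
            c.update(set(tokens))
    return c

n_pos = sum(1 for _, lab in docs if lab == "pos")
n_neg = len(docs) - n_pos
pos_df, neg_df = df("pos"), df("neg")
vocab = set(pos_df) | set(neg_df)

# Rank terms by the absolute difference of relative document frequencies;
# non-informative terms such as "plot" or "it" fall to the bottom.
scores = {t: abs(pos_df[t] / n_pos - neg_df[t] / n_neg) for t in vocab}
ranked = sorted(vocab, key=lambda t: scores[t], reverse=True)
top = ranked[:4]
print(top)
```

The classifier is then trained only on the surviving top-ranked features, which is how feature selection removes the noise the abstract refers to.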

Journal ArticleDOI
TL;DR: Examination of how engaging users in the early design phases of a software application is tightly bound to the success of that application in use reveals how sensitivity to the role that users may play during that collaborative practice rebounds to a good level of user satisfaction during the evaluation process.
Abstract: Drawing on a 1-year application design, implementation and evaluation experience, this paper examines how engaging users in the early design phases of a software application is tightly bound to the success of that application in use. Through the comparison between two different approaches to collaborative application design (namely, user-centered vs participatory), we reveal how sensitivity to the role that users may play during that collaborative practice rebounds to a good level of user satisfaction during the evaluation process. Our paper also contributes to conversations and reflections on the differences between those two design approaches, while providing evidence that the participatory approach may better sensitize designers to issues of users' satisfaction. We finally offer our study as a resource and a methodology for recognizing and understanding the role of active users during a process of development of a software application.

Journal ArticleDOI
TL;DR: A novel modified Chi-square-based feature clustering and weighting scheme is proposed for the sentiment analysis of Twitter messages, which significantly outperforms four existing representative sentiment analysis schemes in terms of accuracy, regardless of the size of the training and test data.
Abstract: With the rapid growth of social networking services on the Internet, huge amounts of information are continuously generated in real time. As a result, sentiment analysis of online reviews and messages has become a popular research issue [1]. In this paper, a novel modified Chi-square-based feature clustering and weighting scheme is proposed for the sentiment analysis of Twitter messages. Along with part-of-speech tagging, the discriminability and dependency of the words in the tagged training dataset are taken into account in the clustering and weighting process. The multinomial naive Bayes model is also employed to handle redundant features, and the influence of emotional words is raised to maximize accuracy. Computer simulation with the Sentiment140 workload shows that the proposed scheme significantly outperforms four existing representative sentiment analysis schemes in terms of accuracy, regardless of the size of the training and test data.
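The classical Chi-square statistic that the scheme modifies is computed from a 2x2 term/class contingency table; a sketch with made-up counts (how the paper then clusters and re-weights these scores is its own contribution, not shown here):

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square score of a term for a binary class.
    n11: docs containing the term with positive label,
    n10: term & negative, n01: no term & positive, n00: no term & negative."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return num / den if den else 0.0

# A term concentrated in positive tweets scores high; an evenly spread
# term scores zero, marking it as non-discriminative.
print(chi_square(40, 5, 10, 45))    # discriminative term
print(chi_square(25, 25, 25, 25))   # uniform term
```

Terms are then ranked (and, in the paper, clustered and weighted) by this score before the multinomial naive Bayes step.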

Journal ArticleDOI
TL;DR: An effective facial expression recognition system for classifying six or seven basic expressions accurately via majority voting of the three CNNs’ results is reported, with the added benefit of low latency for inference.
Abstract: In this paper, we report an effective facial expression recognition system for classifying six or seven basic expressions accurately. Instead of using the whole face region, we define three kinds of active regions, i.e., left eye regions, right eye regions and mouth regions. We propose a method to search optimized active regions from the three kinds of active regions. A Convolutional Neural Network (CNN) is trained for each kind of optimized active regions to extract features and classify expressions. In order to get representable features, histogram equalization, rotation correction and spatial normalization are carried out on the expression images. A decision-level fusion method is applied, by which the final result of expression recognition is obtained via majority voting of the three CNNs’ results. Experiments on both independent databases and fused database are carried out to evaluate the performance of the proposed system. Our novel method achieves higher accuracy compared to previous literature, with the added benefit of low latency for inference.
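The decision-level fusion step reduces to majority voting over the three region-specific CNN outputs; a sketch (the tie-breaking rule here is an arbitrary choice for illustration, not necessarily the paper's):

```python
from collections import Counter

def fuse(left_eye, right_eye, mouth):
    """Decision-level fusion: the expression predicted by a majority of the
    three region-specific classifiers wins; on a three-way tie this sketch
    arbitrarily falls back to the mouth region's prediction."""
    votes = Counter([left_eye, right_eye, mouth])
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else mouth

print(fuse("happy", "happy", "surprise"))   # two of three agree
print(fuse("angry", "sad", "fear"))         # tie: mouth fallback
```

Fusing at the decision level keeps the three CNNs independent, so each can be trained and tuned on its own optimized active region.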

Journal ArticleDOI
TL;DR: An efficient hybrid cloud architecture framework coupled with Li-Fi communication for a human-centric IoT network is proposed and the architecture of the local cloud is introduced to reduce the latency delay and bandwidth cost and to improve efficiency, security, reliability and availability.
Abstract: In the new era of the Internet of Things (IoT), all information related to the environment, things and humans is connected to networks. Humans, too, can be considered an integral part of the IoT ecosystem. The growing human-centricity of IoT applications raises the need for greater dynamicity, heterogeneity, and scalability in future IoT systems. Recently, the IoT and cloud computing have both evolved as emerging technologies and have already become part of our daily life. The complementary features of the IoT and cloud are forming a new IT paradigm to meet current and future requirements. Due to the increased demand for and volume of IoT data, it has become a critical challenge to transfer data from the edge of the network to computing data centers because of the limitations of network bandwidth and higher latency delay. New computing paradigms have emerged within the cloud computing architecture to overcome the inherent limitations of cloud computing, such as location awareness, scalability, energy efficiency, mobility, bandwidth bottlenecks, and latency delay. To address these issues, this paper proposes an efficient hybrid cloud architecture framework coupled with Li-Fi communication for a human-centric IoT network. It also introduces the architecture of the local cloud to reduce the latency delay and bandwidth cost and to improve efficiency, security, reliability and availability. Finally, the paper discusses the communication modulation schemes in the Li-Fi technique and presents scenarios involving the application of the proposed model in the real world.

Journal ArticleDOI
TL;DR: A user-elicitation study to identify natural interactions for 3D manipulation using dual-hand controllers, which have become the standard input devices for VR HMDs, suggests that users prefer interactions that are based on shoulder motions and elbow flexion movements.
Abstract: Virtual reality technologies (VR) have advanced rapidly in the last few years. Prime examples include the Oculus RIFT and HTC Vive that are both head-worn/mounted displays (HMDs). VR HMDs enable a sense of immersion and allow enhanced natural interaction experiences with 3D objects. In this research we explore suitable interactions for manipulating 3D objects when users are wearing a VR HMD. In particular, this research focuses on a user-elicitation study to identify natural interactions for 3D manipulation using dual-hand controllers, which have become the standard input devices for VR HMDs. A user elicitation study requires potential users to provide interactions that are natural and intuitive based on given scenarios. The results of our study suggest that users prefer interactions that are based on shoulder motions (e.g., shoulder abduction and shoulder horizontal abduction) and elbow flexion movements. In addition, users seem to prefer one-hand interaction, and when two hands are required they prefer interactions that do not require simultaneous hand movements, but instead interactions that allow them to alternate between their hands. Results of our study are applicable to the design of dual-hand interactions with 3D objects in a variety of virtual reality environments.

Journal ArticleDOI
TL;DR: To further improve accuracy in recommendation systems, the k-clique methodology used to analyze social networks is presented as the guide for this system; results show that the proposed methods improve the accuracy of the movie recommendation system more than any other method used in this experiment.
Abstract: The number of movies has grown so large that catalogs have become congested; it is therefore hard for users to find the movies they are looking for with existing technologies. For this reason, users want a system that can suggest suitable movies to them, and the best technology for this is the recommendation system. Most recommendation systems use collaborative filtering methods to predict the needs of the user, because these methods give the most accurate predictions. Today, many researchers are developing methods to improve accuracy beyond plain collaborative filtering. Hence, to further improve accuracy in recommendation systems, we present the k-clique methodology, used to analyze social networks, as the guide for this system. In this paper, we propose an efficient movie recommendation algorithm based on improved k-clique methods, which give the best accuracy. To evaluate performance, collaborative filtering using k nearest neighbors, the maximal clique method, the k-clique method, and the proposed methods are all applied to the MovieLens data. The performance results show that the proposed methods improve the accuracy of the movie recommendation system more than any other method used in this experiment.
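The k-clique idea can be sketched with brute-force clique enumeration on a small user-similarity graph (a toy illustration, not the paper's improved algorithm; the users, edges, and ratings are fabricated): items rated by a user's k-clique co-members become candidate recommendations.

```python
from itertools import combinations

# Hypothetical user-user similarity edges (users deemed "similar enough")
# and each user's item ratings.
edges = {("u1", "u2"), ("u1", "u3"), ("u2", "u3"), ("u3", "u4")}
ratings = {"u1": {"A": 5}, "u2": {"B": 4}, "u3": {"C": 5}, "u4": {"D": 3}}

def connected(a, b):
    return (a, b) in edges or (b, a) in edges

def k_cliques(users, k):
    """Brute-force k-cliques: every pair inside the group must be connected.
    Fine for toy graphs; real systems need smarter enumeration."""
    return [set(g) for g in combinations(sorted(users), k)
            if all(connected(a, b) for a, b in combinations(g, 2))]

def recommend(user, k=3):
    """Suggest items rated by the user's k-clique co-members but unseen by the user."""
    items = {}
    for clique in k_cliques(ratings, k):
        if user in clique:
            for peer in clique - {user}:
                items.update(ratings[peer])
    return {i: r for i, r in items.items() if i not in ratings[user]}

print(recommend("u1"))
```

Here u1, u2, and u3 form the only 3-clique, so u1 is recommended B and C, while u4 (similar only to u3, hence in no 3-clique) gets nothing from this neighborhood.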

Journal ArticleDOI
TL;DR: Among the studied stacked ensemble methods, the voting strategy yields performance comparable to the other methods, but with much lower computational cost.
Abstract: This paper investigates various structures of neural network models and various types of stacked ensembles for singing voice detection. The studied models include convolutional neural networks (CNN), the long short-term memory (LSTM) model, the convolutional LSTM model, and the capsule net. The input features to the network models are MFCCs (mel-frequency cepstral coefficients), spectrograms from the short-time Fourier transform, or raw PCM samples. The simulation results show that the CNN model with spectrogram inputs yields higher detection accuracy, up to 91.8% for the Jamendo dataset. Among the studied stacked ensemble methods, the voting strategy yields performance comparable to the other methods, but with much lower computational cost. By voting with five models, the accuracy reaches 94.2% for the Jamendo dataset.

Journal ArticleDOI
TL;DR: A parsimonious clustering pipeline that provides comparable performance to deep learning-based clustering methods, but without using deep learning algorithms, such as autoencoders is presented.
Abstract: The aim is to provide a parsimonious clustering pipeline that offers comparable performance to deep-learning-based clustering methods, but without using deep learning algorithms such as autoencoders. Clustering was performed on six benchmark datasets, consisting of five image datasets used in object, face, and digit recognition tasks (COIL20, COIL100, CMU-PIE, USPS, and MNIST) and one text document dataset (REUTERS-10K) used in topic recognition. K-means, spectral clustering, Graph Regularized Non-negative Matrix Factorization, and K-means with principal components analysis were used for clustering. For each clustering algorithm, blind source separation (BSS) using Independent Component Analysis (ICA) was applied. Unsupervised feature learning (UFL) using reconstruction cost ICA (RICA) and sparse filtering (SFT) was also performed for feature extraction prior to the clustering algorithms. Clustering performance was assessed using the normalized mutual information and unsupervised clustering accuracy metrics. Performing ICA BSS after the initial matrix factorization step provided the maximum clustering performance on four out of six datasets (COIL100, CMU-PIE, MNIST, and REUTERS-10K). Applying UFL as an initial processing component helped to provide the maximum performance on three out of six datasets (USPS, COIL20, and COIL100). Compared to state-of-the-art non-deep-learning clustering methods, ICA BSS and/or UFL with graph-based clustering algorithms outperformed all other methods. With respect to deep-learning-based clustering algorithms, the new methodology presented here obtained the following rankings: COIL20, 2nd out of 5; COIL100, 2nd out of 5; CMU-PIE, 2nd out of 5; USPS, 3rd out of 9; MNIST, 8th out of 15; and REUTERS-10K, 4th out of 5. By using only ICA BSS and UFL with RICA and SFT, clustering accuracy that is better than or on par with many deep-learning-based clustering algorithms was achieved.
For instance, by applying ICA BSS to spectral clustering on the MNIST dataset, we obtained an accuracy of 0.882. This is better than the well-known Deep Embedded Clustering algorithm that had obtained an accuracy of 0.818 using stacked denoising autoencoders in its model. Using the new clustering pipeline presented here, effective clustering performance can be obtained without employing deep clustering algorithms and their accompanying hyper-parameter tuning procedure.
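As an illustrative sketch (not the paper's code), the pipeline's core steps can be reproduced with scikit-learn: an ICA blind-source-separation stage followed by K-means, scored with normalized mutual information and unsupervised clustering accuracy. The dataset (scikit-learn's small digits set as a stand-in for USPS/MNIST), the component count, and the cluster count are all assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import FastICA
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Unsupervised clustering accuracy: best one-to-one mapping
    between cluster labels and class labels (Hungarian algorithm)."""
    n = max(y_pred.max(), y_true.max()) + 1
    w = np.zeros((n, n), dtype=np.int64)
    for p, t in zip(y_pred, y_true):
        w[p, t] += 1
    rows, cols = linear_sum_assignment(-w)  # maximize matched counts
    return w[rows, cols].sum() / y_pred.size

X, y = load_digits(return_X_y=True)  # small stand-in for USPS/MNIST
S = FastICA(n_components=30, random_state=0).fit_transform(X)  # ICA BSS stage
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(S)

nmi = normalized_mutual_info_score(y, labels)
acc = clustering_accuracy(y, labels)
print(f"NMI={nmi:.3f}  ACC={acc:.3f}")
```

The Hungarian-matched accuracy above is the standard "unsupervised clustering accuracy" metric the abstract refers to; swapping `KMeans` for `SpectralClustering` reproduces the graph-based variant.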

Journal ArticleDOI
TL;DR: An object feature extraction and classification system that uses LiDAR point clouds to classify 3D objects in urban environments is developed; experimental results indicate that the object recognition accuracy reaches 91.5% in outdoor environments.
Abstract: Unmanned ground vehicles (UGVs) must perceive their environments for local path planning and obstacle avoidance. To gather high-precision information about the UGV’s surroundings, Light Detection and Ranging (LiDAR) is frequently used to collect large-scale point clouds. However, the complex spatial features of these clouds, such as being unstructured, diffuse, and disordered, make it difficult to segment and recognize individual objects. This paper therefore develops an object feature extraction and classification system that uses LiDAR point clouds to classify 3D objects in urban environments. After eliminating the ground points via a height threshold method, the system describes the 3D objects in terms of their geometrical features, namely their volume, density, and eigenvalues. A back-propagation neural network (BPNN) model is trained (over the course of many iterations) to use these extracted features to classify objects into five types. During the training period, the parameters in each layer of the BPNN model are continually changed and modified via back-propagation using a non-linear sigmoid function. In the system, the object segmentation process supports obstacle detection for autonomous driving, and the object recognition method provides an environment perception function for terrain modeling. Our experimental results indicate that the object recognition accuracy reaches 91.5% in outdoor environments.
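The height-threshold ground removal and the per-object geometric features the abstract names (volume, density, covariance eigenvalues) can be sketched in a few lines of NumPy. This is an illustrative toy on a synthetic cloud, not the paper's implementation; the 0.2 m margin and the bounding-box volume definition are assumptions:

```python
import numpy as np

def remove_ground(points, z_margin=0.2):
    """Height-threshold ground filter: drop points whose z lies
    below the lowest point plus a small margin (assumed 0.2 m)."""
    ground_level = points[:, 2].min()
    return points[points[:, 2] > ground_level + z_margin]

def geometric_features(cluster):
    """Feature vector of the kind fed to the BPNN classifier:
    bounding-box volume, point density, covariance eigenvalues."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    volume = float(np.prod(np.maximum(extent, 1e-6)))
    density = len(cluster) / volume
    eigvals = np.linalg.eigvalsh(np.cov(cluster.T))  # ascending order
    return np.array([volume, density, *eigvals])

# Synthetic scene: a flat ground patch plus a vertical pole-like object.
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(-5, 5, (500, 2)), rng.normal(0.0, 0.02, 500)]
pole   = np.c_[rng.normal(1.0, 0.05, (200, 2)), rng.uniform(0.3, 3.0, 200)]
cloud  = np.vstack([ground, pole])

obstacle = remove_ground(cloud)          # only the pole survives
feats = geometric_features(obstacle)
print(feats)
```

In a real pipeline the non-ground points would first be segmented into clusters (e.g. by Euclidean distance) before features are extracted per object.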

Journal ArticleDOI
TL;DR: This paper aims to survey various SNA approaches using soft computing techniques such as fuzzy logic, formal concept analysis, rough sets theory and soft set theory.
Abstract: The characteristics of massive social media data, diverse mobile sensing devices, and highly complex and dynamic user social behavioral patterns have led to the generation of huge amounts of high-dimensional, uncertain, imprecise, and noisy data from social networks. Unlike conventional hard computing, emerging soft computing techniques are well suited to coping with imprecision, uncertainty, partial truth, and approximation. One of their most important and promising applications is social network analysis (SNA), the process of investigating social structures and relevant properties through the use of network and graph theories. This paper aims to survey various SNA approaches using soft computing techniques such as fuzzy logic, formal concept analysis, rough set theory, and soft set theory. In addition, the relevant software packages for SNA are clearly summarized.

Journal ArticleDOI
TL;DR: A user-centered design approach has been adopted wherein semi-structured interviews with VI individuals in the local context were conducted to understand their micro-navigation practices, challenges and needs and the resulting system design along with a detailed description of its obstacle detection and unique multimodal feedback generation modules has been provided.
Abstract: The development of a novel depth-data based real-time obstacle detection and avoidance application for visually impaired (VI) individuals to assist them in navigating independently in indoor environments is presented in this paper. The application utilizes a mainstream, computationally efficient mobile device as the development platform in order to create a solution that is not only aesthetically appealing, cost-effective, lightweight, and portable but also provides real-time performance and freedom from network connectivity constraints. To alleviate usability problems, a user-centered design approach has been adopted wherein semi-structured interviews with VI individuals in the local context were conducted to understand their micro-navigation practices, challenges, and needs. The invaluable insights gained from these interviews have not only informed the design of our system but would also benefit other researchers developing similar applications. The resulting system design, along with a detailed description of its obstacle detection and unique multimodal feedback generation modules, is provided. We plan to iteratively develop and test the initial prototype of the system with the end users to resolve any usability issues and better adapt it to their needs.

Journal ArticleDOI
TL;DR: This paper provides a comprehensive survey on the applications of signal processing techniques in smart grids, plus the challenges and shortcomings of these techniques.
Abstract: Smart grid is an emerging research field of the current decade. The distinguished features of the smart grid are monitoring capability with data integration, advanced analysis to support system control, enhanced power security, and effective communication to meet the power demand. Efficient energy consumption and minimal costs are further notable features of the smart grid. The smart grid implementation requires intelligent interaction between the power generating and consuming devices, which can be achieved by installing devices capable of processing data and communicating it to various parts of the grid. The efficiency of these devices is greatly dependent on the selection and implementation of advanced digital signal processing techniques. This paper provides a comprehensive survey on the applications of signal processing techniques in smart grids, plus the challenges and shortcomings of these techniques. Furthermore, this paper also outlines some future research directions related to applications of signal processing in smart grids.

Journal ArticleDOI
TL;DR: It is argued that to design and use technology one needs to develop and use models of humans and machines in all their aspects, including cognitive and memory models, but also social influence and (artificial) emotions.
Abstract: The rapidly increasing pervasiveness and integration of computers in human society calls for a broad discipline under which this development can be studied. We argue that to design and use technology one needs to develop and use models of humans and machines in all their aspects, including cognitive and memory models, but also social influence and (artificial) emotions. We call this wider discipline Behavioural Computer Science (BCS), and argue in this paper for why BCS models should unify (models of) the behaviour of humans and machines when designing information and communication technology systems. Thus, one main point to be addressed is the incorporation of empirical evidence for actual human behaviour, instead of making inferences about behaviour based on the rational agent model. Empirical studies can be one effective way to constantly update the behavioural models. We are motivated by the future advancements in artificial intelligence which will give machines capabilities that from many perspectives will be indistinguishable from those of humans. Such machine behaviour would be studied using BCS models, looking at questions about machine trust like "Can a self-driving car trust its passengers?", or artificial influence like "Can the user interface adapt to the user's behaviour, and thus influence this behaviour?". We provide a few directions for approaching BCS, focusing on modelling of human and machine behaviour, as well as their interaction.

Journal ArticleDOI
TL;DR: Experimental study has shown that feature selection and sampling frequency play dominant roles in reducing privacy leakage with much less reduction on utility, and the proposed visualization tool can effectively recommend the appropriate combination of features and sampling rates that can help users make decision on the trade-off between utility and privacy.
Abstract: In the age of big data, plenty of valuable sensing data have been shared to enhance scientific innovation. However, this may cause unexpected privacy leakage. Although numerous privacy preservation techniques, such as perturbation, encryption, and anonymization, have been proposed to conceal sensitive information, this usually comes at the cost of application utility. Moreover, most existing works did not distinguish the underlying factors, such as data features and sampling rate, which contribute differently to the utility and privacy information implied in the shared data. To balance application utility and privacy leakage for data sharing, we utilize mutual information and visualization techniques to analyze the impact of the underlying factors on utility and privacy, respectively, and design an interactive visualization tool to help users identify the appropriate solution to achieve the objectives of high application utility and low privacy leakage simultaneously. To illustrate the effectiveness of the proposed scheme and tool, accelerometer data collected from mobile devices have been adopted as an illustrative example. The experimental study has shown that feature selection and sampling frequency play dominant roles in reducing privacy leakage with much less reduction in utility, and that the proposed visualization tool can effectively recommend the appropriate combination of features and sampling rates to help users make decisions on the trade-off between utility and privacy.
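The core idea of scoring each underlying factor by its mutual information with the utility label versus the private label can be sketched as follows. This is a hedged illustration on synthetic data, not the paper's method; the labels, the three features, and the simple utility-minus-privacy score are all assumptions:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
n = 2000
activity = rng.integers(0, 3, n)   # utility label (e.g. walk/run/sit)
identity = rng.integers(0, 5, n)   # private label (which user)

# Synthetic "sensor features": f0 tracks the activity, f1 tracks the
# user identity, f2 is pure noise (all invented for illustration).
f0 = activity + rng.normal(0, 0.3, n)
f1 = identity + rng.normal(0, 0.3, n)
f2 = rng.normal(0, 1.0, n)
X = np.c_[f0, f1, f2]

mi_utility = mutual_info_classif(X, activity, random_state=0)
mi_privacy = mutual_info_classif(X, identity, random_state=0)

# Prefer features that carry utility but leak little identity info.
score = mi_utility - mi_privacy
keep = np.argsort(score)[::-1]
print("utility MI:", np.round(mi_utility, 2))
print("privacy MI:", np.round(mi_privacy, 2))
print("feature ranking (best trade-off first):", keep)
```

On this toy data the activity-tracking feature ranks first and the identity-tracking feature last, which is the qualitative behaviour the utility/privacy analysis relies on.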

Journal ArticleDOI
TL;DR: The results of empirical research indicate that the three interpersonal attraction factors have positive effects on purchase intention, and the findings provide practitioners with insights into enhancing users’ intention to purchase in social commerce.
Abstract: Based on the stimulus-organism-response model, we study the direct effects of three interpersonal attraction factors (perceived similarity, perceived familiarity, and perceived expertise) on purchase intention in the social commerce era, as well as the mediating roles of the normative and informational influence of reference groups in these relationships. We apply structural equation modeling to a study sample of 490 WeChat users. The results of the empirical research indicate that the three interpersonal attraction factors have positive effects on purchase intention. Both normative and informational influence fully mediate the effect of perceived familiarity on purchase intention, but only partially mediate the effects of perceived similarity and perceived expertise on purchase intention. The findings provide practitioners with insights into enhancing users’ intention to purchase in social commerce.

Journal ArticleDOI
TL;DR: The design and verification of an indirect method of predicting the course of CO2 concentration (ppm) from the measured indoor temperature Tindoor (°C), indoor relative humidity rHindoor (%), and outdoor temperature Toutdoor (°C) using an Artificial Neural Network with the Bayesian Regularization Method (BRM) confirmed the possibility of using the method to monitor the presence of people in the monitored IAB premises.
Abstract: This article describes the design and verification of an indirect method of predicting the course of CO2 concentration (ppm) from the measured temperature variables Tindoor (°C) and relative humidity rHindoor (%) and the temperature Toutdoor (°C), using an Artificial Neural Network (ANN) with the Bayesian Regularization Method (BRM), for monitoring the presence of people in individual premises of an Intelligent Administrative Building (IAB) using the PI System SW Tool (PI-Plant Information enterprise information system). Correlation Analysis (CA), the Root Mean Squared Error (RMSE), and Dynamic Time Warping (DTW) criteria were used to verify and classify the results obtained. Within the proposed method, the LMS adaptive filter algorithm was used to remove the noise from the resulting predicted course. In order to verify the method, long-term experiments were performed, specifically from February 1 to February 28, 2015, from June 1 to June 28, 2015, and from February 8 to February 14, 2015. For the best results of the trained ANN BRM within the prediction of CO2, the correlation coefficient R for the proposed method was up to 92%. The verification confirmed that the proposed method can be used to monitor the presence of people in the monitored IAB premises. The designed indirect method of CO2 prediction has the potential to reduce the investment and operating costs of the IAB by reducing the number of sensors implemented in the IAB within the process of managing its operational and technical functions. The article also describes the design and implementation of the FEIVISUAL visualization application for mobile devices, which monitors the technological processes in the IAB. This application is optimized for Android devices and is platform independent. The application requires the implementation of an application server that communicates with the data server and the developed application.
The data of the developed application is obtained from the data storage of the PI System via a PI Web REST API (Application Programming Interface) client.
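As a rough sketch of the prediction setup (not the authors' implementation), one can train a small neural network on Tindoor, rHindoor, and Toutdoor and evaluate it with the correlation coefficient R and the squared-error criterion. Scikit-learn has no Bayesian-regularized trainer comparable to MATLAB's trainbr, so an L2-regularized MLP stands in for it; the synthetic data generator below is entirely assumed:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1500
t_in  = rng.normal(22, 2, n)   # indoor temperature Tindoor (deg C)
rh_in = rng.normal(40, 8, n)   # indoor relative humidity rHindoor (%)
t_out = rng.normal(8, 5, n)    # outdoor temperature Toutdoor (deg C)
# Hypothetical occupancy-driven CO2 response, invented for illustration.
co2 = 420 + 25*(t_in - 20) + 6*(rh_in - 35) - 3*t_out + rng.normal(0, 30, n)

X = np.c_[t_in, rh_in, t_out]
X_tr, X_te, y_tr, y_te = train_test_split(X, co2, random_state=0)

# L2 penalty (alpha) as a stand-in for Bayesian-regularized training.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), alpha=1e-2, solver="lbfgs",
                 max_iter=3000, random_state=0),
).fit(X_tr, y_tr)
pred = model.predict(X_te)

r = float(np.corrcoef(y_te, pred)[0, 1])   # correlation coefficient R
mse = float(np.mean((y_te - pred) ** 2))
print(f"R={r:.3f}  MSE={mse:.1f}")
```

The correlation coefficient R on held-out data corresponds to the abstract's verification criterion; the LMS noise filtering and DTW comparison would be applied to the predicted time series afterwards.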

Journal ArticleDOI
TL;DR: The proposed solution named CrashSafe supports a formal approach that enables one to check the correctness of inter-component communication in Android systems and establish a formal foundation for other tools to assess Android applications’ reliability and safety.
Abstract: Each software application running on Android-powered devices consists of application components that communicate with each other to support the application's functionality and enhance the user experience of mobile computing. Application components inside the Android system communicate with each other using an inter-component communication mechanism based on messages called intents. An Android application crashes if it invokes an intent that cannot be received by (or resolved to) any application on the device. Application crashes represent a severe fault that compromises user experience, consequently resulting in decreased ratings, usage trends, and revenues for such applications. To address this issue--by formally proving the crash-safety property of Android applications--we have defined a formal model of Android inter-component communication using the Coq theorem prover. The mathematical model defined in the theorem prover allows one to prove properties of the inter-component communication system and check the correctness of the proof in an automated way. To demonstrate the significance of the formal model developed, we carried out the proof of crash-safety of Android applications using the Coq tool. The proposed solution, named CrashSafe, supports a formal approach that enables one to (i) check the correctness of inter-component communication in Android systems and (ii) establish a formal foundation for other tools to assess Android applications' reliability and safety.
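The crash condition the paper formalizes in Coq can be stated informally in a few lines: an application is crash-safe when every intent it may invoke resolves to some installed component. The toy model below is illustrative Python, not the paper's Coq development; the names and the action-only matching rule are simplifying assumptions:

```python
def resolves(intent_action, installed_filters):
    """An intent resolves iff some installed component declares
    an intent filter matching its action (simplified matching)."""
    return any(intent_action in actions
               for actions in installed_filters.values())

def is_crash_safe(app_intents, installed_filters):
    """Crash-safety: every intent the app may invoke resolves
    to at least one component on the device."""
    return all(resolves(a, installed_filters) for a in app_intents)

# Hypothetical device state: two components and their declared actions.
installed = {
    "Browser": {"android.intent.action.VIEW"},
    "Dialer":  {"android.intent.action.DIAL"},
}

print(is_crash_safe({"android.intent.action.VIEW"}, installed))  # True
print(is_crash_safe({"android.intent.action.SEND"}, installed))  # False: would crash
```

In the Coq development this check becomes a theorem proved once and for all over the formal intent-resolution semantics, rather than a runtime test.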

Journal ArticleDOI
TL;DR: An enhanced sum rate in the cluster-based cognitive radio relay network utilizing a reporting framework in the sequential approach with the n-out-of-k rule is presented, and it is shown that the proposed sequential approach with a relay provides a significant sum rate gain compared to the conventional non-sequential approach with no relay.
Abstract: The cognitive radio relay plays a vital role in cognitive radio networking (CRN), as it can improve the cognitive sum rate, extend the coverage, and improve the spectral efficiency. However, relay-aided CRNs cannot obtain a maximal sum rate when the existing sensing approach is applied. In this paper, we present an enhanced sum rate in the cluster-based cognitive radio relay network (CCRRN) utilizing a reporting framework in the sequential approach. In this approach, a secondary user (SU) first extends its sensing time until right before the beginning of its reporting time slot by utilizing the reporting framework. Second, all individual measurement results from each relay-aided SU are passed on to the corresponding cluster head (CH) through a noisy reporting channel, and each CH forwards a soft-fusion report to the fusion center, which provides the final decision using the n-out-of-k rule. With such extended sensing intervals and amplified reporting, better sensing performance can be obtained than with a conventional non-sequential approach, making the scheme applicable to the future Internet of Things. In addition, the sum rates of the primary network and the CCRRN are investigated for the reporting framework utilized in the sequential approach with a relay using the n-out-of-k rule. By simulation, we show that the proposed sequential approach with a relay (Lemma 2) provides a significant sum rate gain compared to the conventional non-sequential approach with no relay (Lemma 1) under any condition.
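The fusion center's n-out-of-k decision rule is easy to illustrate with a small Monte Carlo experiment: the primary user is declared present when at least n of the k reported local decisions say so. The local detection and false-alarm probabilities below are assumed values for illustration, not the paper's simulation parameters:

```python
import numpy as np

def n_out_of_k(decisions, n):
    """Fusion rule: declare the primary user present iff at least
    n of the k reported binary decisions indicate presence."""
    return int(np.sum(decisions) >= n)

rng = np.random.default_rng(0)
k, n_rule = 7, 3          # k cooperating SUs, 3-out-of-7 rule
p_detect, p_false = 0.8, 0.1   # assumed per-SU sensing probabilities
trials = 20000

# Monte Carlo estimate of cooperative detection / false-alarm probability.
present = rng.random((trials, k)) < p_detect   # PU actually present
absent  = rng.random((trials, k)) < p_false    # PU actually absent
pd_coop = np.mean([n_out_of_k(d, n_rule) for d in present])
pf_coop = np.mean([n_out_of_k(d, n_rule) for d in absent])
print(f"cooperative Pd={pd_coop:.3f}  Pf={pf_coop:.3f}")
```

With these assumed per-SU probabilities, cooperation lifts the detection probability from 0.8 to roughly 0.995 while keeping the false-alarm rate below the single-SU level, which is the effect the fusion rule is designed to exploit.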