
Showing papers in "Computing in 2019"


Journal ArticleDOI
TL;DR: This study addresses the multi-objective multi-mode resource-constrained project scheduling problem with payment planning where the activities can be done through one of the possible modes and the objectives are to maximize the net present value and minimize the completion time concurrently.
Abstract: This study addresses the multi-objective multi-mode resource-constrained project scheduling problem with payment planning, where the activities can be performed in one of several possible modes and the objectives are to maximize the net present value and minimize the completion time concurrently. Moreover, renewable resources including manpower, machinery, and equipment, as well as non-renewable ones such as consumable resources and budget, are considered to bring the model closer to the real world. To this end, a non-linear programming model is proposed to formulate the problem based on the suggested assumptions. To validate the model, several random instances are designed and solved by the GAMS-BARON solver applying the ε-constraint method. Owing to the high NP-hardness of the problem, we develop two metaheuristics, the non-dominated sorting genetic algorithm II and a multi-objective simulated annealing algorithm, to solve the problem. Finally, the performances of the proposed solution techniques are evaluated using several well-known performance criteria.

82 citations
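The ε-constraint step used above to validate the model can be illustrated with a minimal sketch: fix a bound ε on one objective (here the completion time) and optimize the other (the net present value), sweeping ε to trace the Pareto frontier. The candidate schedules and their (NPV, makespan) values below are invented for illustration and are not from the paper.

```python
# Hedged sketch of the epsilon-constraint method on a toy bi-objective
# problem: maximize NPV while constraining the makespan.

def epsilon_constraint(solutions, epsilons):
    """For each bound eps, keep the best-NPV solution whose makespan <= eps."""
    frontier = []
    for eps in epsilons:
        feasible = [s for s in solutions if s["makespan"] <= eps]
        if feasible:
            best = max(feasible, key=lambda s: s["npv"])
            if best not in frontier:
                frontier.append(best)
    return frontier

# Hypothetical (npv, makespan) pairs for candidate schedules.
candidates = [
    {"npv": 100, "makespan": 20},
    {"npv": 140, "makespan": 28},
    {"npv": 90,  "makespan": 15},
    {"npv": 170, "makespan": 35},
]

pareto = epsilon_constraint(candidates, epsilons=[15, 20, 28, 35])
for p in pareto:
    print(p)
```

In a real instance, the inner `max` would be replaced by a solver call (GAMS-BARON in the paper) with the makespan bound added as a constraint.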


Journal ArticleDOI
TL;DR: Two solution methods of non-dominated sorting genetic algorithm II and multi-objective invasive weed optimization algorithm (MOIWO) are designed to solve the APP problem and the results obtained from different comparison criteria demonstrate the high quality of the proposed solution methods in terms of speed and accuracy in finding optimal solutions.
Abstract: This paper addresses a robust multi-objective multi-period aggregate production planning (APP) problem based on different scenarios under uncertain seasonal demand. The main goals are to minimize the total cost, including in-house production, outsourcing, workforce, holding, shortage and employment/unemployment costs, and to maximize the customers’ satisfaction level. To deal with demand uncertainty, a robust optimization approach is applied to the proposed mixed integer linear programming model. A goal programming method is then implemented to cope with the multi-objectiveness and validate the suggested robust model. Since APP problems are classified as NP-hard, two solution methods, the non-dominated sorting genetic algorithm II (NSGA-II) and a multi-objective invasive weed optimization algorithm (MOIWO), are designed to solve the problem. Moreover, the Taguchi design method is implemented to increase the efficiency of the algorithms by adjusting their parameters optimally. Finally, several numerical test problems are generated in different sizes to evaluate the performance of the algorithms. The results obtained from different comparison criteria demonstrate the high quality of the proposed solution methods in terms of speed and accuracy in finding optimal solutions.

81 citations
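The core ranking step of NSGA-II, one of the two metaheuristics designed in the paper, is fast non-dominated sorting. A minimal sketch follows; the two minimized objectives (e.g., total cost and a dissatisfaction measure) and the toy population are assumptions for illustration.

```python
# Sketch of non-dominated sorting, the ranking step at the heart of NSGA-II.
# Both objectives are minimized here.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(population):
    """Return fronts: front 0 is the Pareto-optimal set, front 1 the next, ..."""
    fronts, remaining = [], list(population)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Toy (cost, dissatisfaction) objective vectors.
pop = [(3, 9), (5, 5), (9, 2), (6, 6), (10, 10)]
fronts = non_dominated_sort(pop)
print(fronts[0])   # Pareto front of the toy population
```

The full algorithm would additionally compute crowding distances within each front to break ties during selection.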


Journal ArticleDOI
TL;DR: The aim of this paper is to investigate the usability of artificial intelligence in the IoT paradigm; to achieve this, a system called smart-IoT is built based on artificial neural networks trained with the back-propagation algorithm.
Abstract: The Internet of Things (IoT) has emerged as a disruptive technology for the current and future of computing and communication. IoT is characterized by a variety of heterogeneous technologies and devices able to be connected to the Internet. Current and future research and development efforts aim at adding artificial intelligence to IoT systems, enabling devices to become smart and thus make autonomous decisions individually or collectively. Additionally, such smart devices have the ability to interact not only with other smart devices but also with humans. Thus, the aim of this paper is to investigate the usability of artificial intelligence in the IoT paradigm. To realize this approach, a system called smart-IoT is built based on artificial neural networks; specifically, the networks are trained with the back-propagation algorithm. The system is tested using mobile devices under Android as smart objects. Experiments with neural networks were carried out on certain services (such as automatically setting alarms for a specific event, or estimating the time to return home). These experiments showed the feasibility of embedding neural network techniques into the IoT system. The approach also allows new services to be added easily, which in turn means that smart-IoT is a modular and full-fledged system.

74 citations
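A tiny feed-forward network trained with the back-propagation algorithm, as used by the smart-IoT system, can be sketched in pure Python. The XOR toy data, network size and learning rate are illustrative assumptions, not the paper's actual service data (such as alarm times).

```python
import math
import random

# Sketch of a one-hidden-layer network trained by online back-propagation.
random.seed(0)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # toy inputs (XOR)
y = [0, 1, 1, 0]                        # toy targets

H = 4                                   # hidden units
w1 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.gauss(0, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss():
    total = 0.0
    for x, t in zip(X, y):
        h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
        o = sig(sum(w2[j] * h[j] for j in range(H)) + b2)
        total += (o - t) ** 2
    return total / len(X)

lr = 1.0
first = loss()
for _ in range(3000):
    for x, t in zip(X, y):
        # forward pass
        h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
        o = sig(sum(w2[j] * h[j] for j in range(H)) + b2)
        # backward pass: output delta, then hidden deltas
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            w2[j] -= lr * d_o * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
            b1[j] -= lr * d_h[j]
        b2 -= lr * d_o

print(f"MSE: {first:.3f} -> {loss():.3f}")
```

An embedded deployment, as in the paper, would train such weights offline or on-device and run only the cheap forward pass at prediction time.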


Journal ArticleDOI
TL;DR: This work develops a fuzzy logic-based approach to detect malicious nodes, including on-off attackers, contradictory-behaviour attackers and bad service providers, together with a secure messaging system that enables secure communication between nodes.
Abstract: Recently, the Internet of things (IoT) has received a lot of attention from both industry and academia. A reliable and secure IoT connection and communication is essential for the proper working of the IoT network as a whole. One of the ways to achieve robust security in an IoT network is to enable and build trusted communication among the things (nodes). In this area, the existing IoT literature faces many critical issues, such as the lack of intelligent cluster-based trust approaches for IoT networks and the detection of attacks on the IoT trust system from malicious nodes, such as bad service providers. The existing literature either does not address these issues or only addresses them partially. Our proposed solution can firstly detect on-off attacks using the proposed fuzzy-logic based approach, and it can detect contradictory behaviour attacks and other malicious nodes. Secondly, we develop a fuzzy logic-based approach to detect malicious nodes involved in bad service provisioning. Finally, to maintain the security of the IoT network, we develop a secure messaging system that enables secure communication between nodes. This messaging system uses hexadecimal values with a structure similar to serial communication. We carried out extensive experimentation under varying network sizes to validate the working of our proposed solution and also to test the efficiency of the proposed methods in relation to various types of malicious behavior. The experiment results demonstrate the effectiveness of our approach under various conditions.

69 citations
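A fuzzy logic-based trust evaluation of the kind the paper builds on can be sketched with triangular membership functions over an observed service success rate. The membership shapes, rule consequents and malicious-node threshold below are illustrative assumptions, not the paper's actual rule base.

```python
# Hedged sketch of fuzzy node-trust scoring: fuzzify an observation into
# low/medium/high memberships, then defuzzify to a single trust value.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the interval (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(success_rate):
    """Fuzzify the observed service success rate and defuzzify to one score."""
    low = triangular(success_rate, -0.01, 0.0, 0.5)
    med = triangular(success_rate, 0.2, 0.5, 0.8)
    high = triangular(success_rate, 0.5, 1.0, 1.01)
    total = low + med + high
    # weighted average of the rule consequents (0.1, 0.5, 0.9)
    return (0.1 * low + 0.5 * med + 0.9 * high) / total

def is_malicious(success_rate, threshold=0.3):
    return trust_score(success_rate) < threshold

print(trust_score(0.9), is_malicious(0.1))
```

Detecting on-off attacks would additionally track how this score oscillates over time, rather than a single observation window.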


Journal ArticleDOI
TL;DR: The gat2vec framework is introduced that uses structural information to generate structural contexts, attributes to generate attribute contexts, and employs a shallow neural network model to learn a joint representation from them and is effective in exploiting multiple sources of information.
Abstract: Network representation learning (NRL) enables the application of machine learning tasks such as classification, prediction and recommendation to networks. Apart from their graph structure, networks are often associated with diverse information in the form of attributes. Most NRL methods have focused just on structural information, and separately apply traditional representation learning on attributes. When multiple sources of information are available, using a combination of them may be beneficial as they complement each other in generating accurate contexts; moreover, their combined use may be fundamental when the information sources are sparse. The learning methods should thus preserve both the structural and attribute aspects. In this paper, we investigate how attributes can be modeled, and subsequently used along with structural information in learning the representation. We introduce the gat2vec framework that uses structural information to generate structural contexts, attributes to generate attribute contexts, and employs a shallow neural network model to learn a joint representation from them. We evaluate our proposed method against state-of-the-art baselines, using real-world datasets on vertex classification (multi-class and multi-label), link-prediction, and visualization tasks. The experiments show that gat2vec is effective in exploiting multiple sources of information, thus learning accurate representations and outperforming the state-of-the-art in the aforementioned tasks. Finally, we perform query tasks on the learned representations, where qualitative analysis of the results likewise shows better performance.

55 citations
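The two context sources gat2vec combines can be sketched directly: structural contexts from short random walks on the graph, and attribute contexts pairing each vertex with its attributes. The toy graph, attributes and walk settings are assumptions; the real framework feeds both context sets to a shallow skip-gram-style model to learn the joint representation.

```python
import random

# Sketch of the two context-generation steps; the embedding model itself
# (a shallow neural network) is omitted.

graph = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
attributes = {1: ["ml"], 2: ["ml", "db"], 3: ["db"], 4: ["db"]}

def structural_contexts(graph, walk_len=4, walks_per_node=2, seed=0):
    """Short random walks; co-occurring vertices become structural contexts."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                node = rng.choice(graph[node])
                walk.append(node)
            walks.append(walk)
    return walks

def attribute_contexts(attributes):
    """Vertex-attribute pairs: a shared attribute acts as a common context."""
    return [(v, a) for v, attrs in attributes.items() for a in attrs]

walks = structural_contexts(graph)
pairs = attribute_contexts(attributes)
print(len(walks), pairs[:3])
```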


Journal ArticleDOI
TL;DR: A new local ranking measure is proposed to identify the influence of a node based on important location parameters such as node degree, the degrees of its neighbors, common links between a node and its neighbors, and the inverse clustering coefficient; it outperforms well-known centrality measures in comparisons.
Abstract: Spreader detection is a vital issue in complex networks because spreaders can spread information to a massive number of nodes in the network. There are many centrality measures to rank nodes based on their ability to spread information. Some local and global centrality measures, including DIL, degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, PageRank centrality and the k-shell decomposition method, are used to identify spreader nodes. However, they may have some problems, such as finding inappropriate spreaders, unreliable spreader detection, higher time complexity or incompatibility with some networks. In this paper, a new local ranking measure is proposed to identify the influence of a node. The proposed method measures the spreading ability of nodes based on important location parameters such as node degree, the degrees of its neighbors, common links between a node and its neighbors, and the inverse clustering coefficient. The main advantage of the proposed method is that it identifies important hubs and low-degree bridges in an efficient manner. To test the efficiency of the proposed method, experiments are conducted on eight real and four synthetic networks. Comparisons based on the Susceptible-Infected-Recovered (SIR) and Susceptible-Infected (SI) models reveal that the proposed method outperforms the compared well-known centralities.

50 citations
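One way to combine the local quantities the proposed measure builds on (a node's degree, its neighbors' degrees, links shared with neighbors, and the inverse clustering coefficient) is sketched below. The specific weighting is an assumption for illustration, not the paper's formula.

```python
# Illustrative local ranking score on a toy graph with a triangle (a, b, c)
# and a low-degree bridge (d) toward a leaf (e).

graph = {
    "a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"},
    "d": {"a", "e"}, "e": {"d"},
}

def clustering(graph, v):
    """Local clustering coefficient of v."""
    nbrs = graph[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for w in graph[u] if w in nbrs) / 2
    return 2 * links / (k * (k - 1))

def local_rank(graph, v):
    """Assumed combination: reward degree and neighbour degrees, discount
    redundant common links, and divide by (1 + clustering)."""
    deg = len(graph[v])
    nbr_deg = sum(len(graph[u]) for u in graph[v])
    common = sum(len(graph[v] & graph[u]) for u in graph[v])
    return (deg + nbr_deg - common) / (1 + clustering(graph, v))

scores = {v: local_rank(graph, v) for v in graph}
best = max(scores, key=scores.get)
print(best, scores)
```

Note how the low-clustering bridge "d" scores well relative to the densely interconnected triangle nodes, which matches the stated goal of surfacing low-degree bridges as well as hubs.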


Journal ArticleDOI
TL;DR: An overview of the evolution of this field since 2010 is provided through a systematic mapping study of 423 primary sources; six variability facets where the AAFM is being applied were found that define the current tendencies.
Abstract: Feature models have been used since the 90s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of automated analysis of feature models (AAFM). From then on, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where the AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability-analysis; variability modelling and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most of the cases. Finally, we present where and when the papers have been published and who are the authors and institutions that are contributing to the field. We observed that the maturity is proven by the increase in the number of journal publications over the years as well as the diversity of conferences and workshops where papers are published. We also suggest some synergies with other areas such as cloud or mobile computing among others that can motivate further research in the future.

48 citations


Journal ArticleDOI
TL;DR: While modern parallel computing systems offer high performance, utilizing these powerful computing resources to the highest possible extent demands advanced knowledge of various hardware architectures, as discussed by the authors.
Abstract: While modern parallel computing systems offer high performance, utilizing these powerful computing resources to the highest possible extent demands advanced knowledge of various hardware architectu ...

34 citations


Journal ArticleDOI
TL;DR: An analysis of how WebRTC topologies affect the quality perceived by users is provided and a group of Key Performance Indicators for estimating the QoE of WebRTC users is proposed.
Abstract: WebRTC comprises a set of technologies and standards that provide real-time communication with web browsers, simplifying the embedding of voice and video communication in web applications and mobile devices. The perceived quality of WebRTC communication can be measured using quality of experience (QoE) indicators. QoE is defined as the degree of delight or annoyance of the user with an application or service. This paper is focused on the QoE assessment of WebRTC-based applications and its contribution is threefold. First, an analysis of how WebRTC topologies affect the quality perceived by users is provided. Second, a group of Key Performance Indicators for estimating the QoE of WebRTC users is proposed. Finally, a systematic survey of the literature on QoE assessment in the WebRTC arena is presented.

31 citations


Journal ArticleDOI
TL;DR: The results of comparing the proposed centrality measure with other measures in terms of epidemic models, Kendall’s tau correlation coefficient and Rank-Frequency measures indicated that the proposed method outperforms the other compared centrality measures.
Abstract: In recent years, a notable number of research studies have been conducted on the analysis of diffusion processes in complex networks. One fundamental problem in this domain is to find the most influential spreader nodes. For achieving a successful spreading process, nodes having high spreading ability should be selected as spreaders. Many centrality measures have been proposed for determining and ranking the significance of nodes and detecting the best spreaders. The majority of proposed centrality measures require global network information, which leads to high time complexity. Moreover, with the advent of large-scale complex networks, there is a critical need for accurate measures that use nodes’ local information. On the other hand, most of the formerly proposed centrality measures have attempted to select core nodes as spreaders, yet global bridge nodes have the highest spreading ability since they are located among several giant communities of the network. In this study, a new local and parameter-free centrality measure is proposed which is aimed at finding global bridge nodes in the network. Hence, two new local metrics, namely edge ratio and neighborhood diversity, are first defined and then used in the proposed method. Considering the edge ratio of neighbors ensures that the selected node is not in a peripheral location of the network. Furthermore, a node with high neighborhood diversity is likely a connector between some modules (dense parts) of the network. Therefore, a node with a high edge ratio and a more diverse neighborhood has high spreading ability. The major merits of the proposed measure are near-linear time complexity, the use of local information, and being parameter-free. For evaluating the proposed method, we conducted experiments on real-world networks. The results of comparing the proposed centrality measure with other measures in terms of epidemic models (SIR and SI), Kendall’s tau correlation coefficient and Rank-Frequency measures indicated that the proposed method outperforms the other compared centrality measures.

28 citations
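The two metrics the paper introduces, edge ratio and neighborhood diversity, can be sketched on a toy two-community graph. The precise definitions below are plausible stand-ins (assumptions) meant only to show how a parameter-free local score could favor global bridge nodes; communities are given as labels rather than detected.

```python
# Two triangles {1,2,3} and {4,5,6} joined by the edge (3, 4); nodes 3 and 4
# are the global bridges such a measure should surface.

graph = {
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
    4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5},
}
community = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}

def edge_ratio(graph, v):
    """Assumed form: neighbours' outward links relative to their degrees."""
    ext = sum(1 for u in graph[v] for w in graph[u] if w not in graph[v] | {v})
    total = sum(len(graph[u]) for u in graph[v])
    return ext / total

def neighborhood_diversity(graph, community, v):
    """Number of distinct communities among v's neighbours."""
    return len({community[u] for u in graph[v]})

scores = {v: edge_ratio(graph, v) * neighborhood_diversity(graph, community, v)
          for v in graph}
bridge = max(scores, key=scores.get)
print(bridge, scores)
```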


Journal ArticleDOI
TL;DR: This paper presents a formal approach to CAAC for dynamically specifying the contextual roles based on the relevant contextual conditions derived from information provided through IoT, and introduces an ontology-based approach which models the dynamic contextual roles and their associated access control policies.
Abstract: New opportunities have been created for the management of critical situations utilizing the Internet of Things (IoT). However, one of the difficulties in providing services for critical situation management using IoT is that access will often be needed by users at the critical events, where access to data and resources is usually restricted by means of their normal roles. In Role-Based Access Control, these roles are organized in static hierarchies and users are authorized to play such roles in order to exercise their organizational functions. However, some of these roles cannot be organized in the same way in static hierarchies, as the authorizations granted to such roles directly correspond to dynamic contextual conditions (e.g., body sensor data). Users need to satisfy these conditions to exercise the functions of such dynamic contextual roles. These dynamic conditions can be effectively derived from the IoT devices in order to manage the critical situations. However, a large number of static roles and contextual conditions have led to high administrative and processing overheads. In this paper, we present a formal approach to context-aware access control (CAAC) for dynamically specifying the contextual roles based on the relevant contextual conditions derived from information provided through IoT. We also introduce an ontology-based approach which models the dynamic contextual roles and their associated access control policies. We demonstrate the feasibility of our proposal by providing a walkthrough of the whole mechanism. We also carry out an experimental study on the performance of our approach compared to our previous approach.

Journal ArticleDOI
TL;DR: The results indicate that the introduced fusion approach using a CROPC performs significantly better in terms of detection accuracy, in comparison to other state-of-the-art classifiers.
Abstract: Side-swipe accidents occur primarily when drivers attempt an improper lane change, drift out of lane, or the vehicle loses lateral traction. In this paper, a fusion approach is introduced that utilizes data of multiple differing modalities, such as video, GPS, wheel odometry and potentially IMU data collected from a data logging device (DL1 MK3), for detecting a driver's lane-changing behavior using a novel dimensionality reduction model, the collaborative representation optimized projection classifier (CROPC). The criterion of CROPC is to simultaneously maximize the collaborative-representation-based between-class scatter and minimize the collaborative-representation-based within-class scatter in the transformed space. For lane change detection, both feature-level fusion and decision-level fusion are considered. In feature-level fusion, features generated from multiple differing-modality data are merged before classification, while in decision-level fusion, an improved Dempster–Shafer theory based on the correlation coefficient (DST-CC) is presented to combine the classification outcomes from two classifiers, each corresponding to one kind of data. The results indicate that the introduced fusion approach using CROPC performs significantly better in terms of detection accuracy in comparison to other state-of-the-art classifiers.
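The decision-level fusion rests on Dempster–Shafer evidence combination; the classical Dempster rule that DST-CC extends can be sketched for a two-hypothesis frame (the correlation-coefficient weighting itself is omitted). The mass assignments below are illustrative classifier outputs, not real detector scores.

```python
# Dempster's rule over the frame {"change", "stay"} plus the ignorance
# set "either": products of agreeing masses are kept, conflicting mass
# (change vs stay) is discarded and the rest renormalized.

def combine(m1, m2):
    meets = {
        ("change", "change"): "change", ("stay", "stay"): "stay",
        ("change", "either"): "change", ("either", "change"): "change",
        ("stay", "either"): "stay", ("either", "stay"): "stay",
        ("either", "either"): "either",
        ("change", "stay"): None, ("stay", "change"): None,  # conflict
    }
    combined, conflict = {"change": 0.0, "stay": 0.0, "either": 0.0}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            target = meets[(a, b)]
            if target is None:
                conflict += pa * pb
            else:
                combined[target] += pa * pb
    return {k: v / (1 - conflict) for k, v in combined.items()}

# Hypothetical masses from a video-based and an odometry-based classifier.
video = {"change": 0.6, "stay": 0.1, "either": 0.3}
odometry = {"change": 0.5, "stay": 0.2, "either": 0.3}
fused = combine(video, odometry)
print(fused)
```

Two weak, agreeing pieces of evidence reinforce each other: the fused belief in "change" exceeds either input's.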

Journal ArticleDOI
TL;DR: This paper developed the first lattice-based conditional privacy-preserving authentication (LB-CPPA) protocol for VANETs, and shows that the proposal endorses the message integrity and authentication as well as the privacy preservation at the same time.
Abstract: With the rapid increase in internet technologies, Vehicular Ad hoc Networks (VANETs) are identified as a crucial primitive for vehicular communication, in which the moving vehicles are treated as nodes to form a mobile network. To improve the efficiency and traffic security of the communication, a VANET can wirelessly circulate traffic information and status to the participating vehicles (nodes). Before deploying a VANET, a security and privacy mechanism must be implemented to assure secure communication. For this reason, a number of conditional privacy-preserving authentication schemes have been proposed in the literature to guarantee mutual authentication and privacy protection. However, most of these schemes rely on Diffie–Hellman (DH) problems to secure the communication. Note that these DH-type problems can be solved in polynomial time in the presence of new modern technologies such as quantum computers. Therefore, to remove these difficulties, we are motivated to propose a non-DH-type conditional privacy-preserving authentication scheme which can resist quantum computers. In this paper, we develop the first lattice-based conditional privacy-preserving authentication (LB-CPPA) protocol for VANETs. A random oracle model is used to analyze the security of the proposed protocol. The security of our LB-CPPA scheme is based on the complexity of lattice problems. By security analysis, we show that our proposal endorses message integrity and authentication as well as privacy preservation at the same time. A security comparison supporting our claims is also provided. Further, we analyze the performance of the proposed scheme and compare it with that of DH-type schemes.

Journal ArticleDOI
TL;DR: An integrated framework for enhancing personalized mobile cloud-services, based on a composition approach that adapts context information using a common model of service metadata specification is proposed.
Abstract: Cloud service providers typically compose their services from a number of elementary services, which are developed in-house or built by third-party providers. Personalization of composite services in mobile environments is an interesting and challenging issue to address, given the opportunity to factor-in diverse user preferences and the plethora of mobile devices at use in multiple contexts. This work proposes a framework to address personalization in mobile cloud-service provisioning. Service personalization and adaptation may be considered at different levels, including the user profile, the mobile device in use, the context of the user and the composition specification. The user’s mobile device and external services are typical sources of context information, used in our proposed algorithm to elicit context-aware services. The selection process is guided by quality-of-context criteria that combine cloud-service provider requirements and user preferences. Hence, the paper proposes an integrated framework for enhancing personalized mobile cloud-services, based on a composition approach that adapts context information using a common model of service metadata specification.

Journal ArticleDOI
TL;DR: This paper presents Omada, a BFT system architecture that is able to benefit from additional hardware resources; the fact that not all replicas need to take part in every group enables the architecture to exploit additional servers.
Abstract: When provided with more powerful or extra hardware, state-of-the-art Byzantine fault-tolerant (BFT) replication protocols are unable to effectively exploit the additional computing resources: on the one hand, in settings with heterogeneous servers existing protocols cannot fully utilize servers with higher performance capabilities. On the other hand, using more servers than the minimum number of replicas required for Byzantine fault tolerance in general does not lead to improved throughput and latency, but instead actually degrades performance. In this paper, we address these problems with Omada, a BFT system architecture that is able to benefit from additional hardware resources. To achieve this property while still providing strong consistency, Omada first parallelizes agreement into multiple groups and then executes the requests handled by different groups in a deterministic order. By varying the number of requests to be ordered between groups as well as the number of groups that a replica participates in between servers, Omada offers the possibility to individually adjust the resource usage per server. Moreover, the fact that not all replicas need to take part in every group enables the architecture to exploit additional servers.

Journal ArticleDOI
TL;DR: An accurate partially attracted firefly algorithm (PaFA) is proposed by adopting a partial attraction model and a fast attractiveness calculation strategy and demonstrates the good performance of PaFA in terms of the solution accuracy compared with two state-of-the-art FA variants and two other bio-inspired algorithms.
Abstract: The firefly algorithm (FA) is a new and powerful algorithm for optimization. However, it has the disadvantages of high computational complexity and low convergence accuracy, especially when solving complex problems. In this paper, an accurate partially attracted firefly algorithm (PaFA) is proposed by adopting a partial attraction model and a fast attractiveness calculation strategy. The partial attraction model can preserve swarm diversity and make full use of individual information. The fast attractiveness calculation strategy ensures information sharing among the individuals and also improves the convergence accuracy. The experimental results demonstrate the good performance of PaFA in terms of solution accuracy compared with two state-of-the-art FA variants and two other bio-inspired algorithms.
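The partial attraction idea can be sketched as follows: each firefly moves toward only k of the brighter fireflies rather than all of them, which reduces attraction computations while preserving diversity. The sphere objective, parameter values and update rule are illustrative assumptions, not PaFA's exact design.

```python
import random

# Minimal firefly-style search with partial attraction on the sphere function.

def sphere(x):
    return sum(v * v for v in x)

def pafa_sketch(dim=2, n=8, k=2, iters=60, seed=1):
    rng = random.Random(seed)
    swarm = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    beta, alpha = 0.8, 0.1          # attraction strength, random step size
    for _ in range(iters):
        for i in range(n):
            brighter = [f for f in swarm if sphere(f) < sphere(swarm[i])]
            # partial attraction: move toward only k sampled brighter flies
            for f in rng.sample(brighter, min(k, len(brighter))):
                swarm[i] = [xi + beta * (fi - xi) + alpha * rng.uniform(-1, 1)
                            for xi, fi in zip(swarm[i], f)]
        alpha *= 0.95               # damp the random step over time
    return min(swarm, key=sphere)

best = pafa_sketch()
print(best, sphere(best))
```

In the standard FA every firefly is attracted to every brighter one, so sampling only k of them is where the computational saving comes from.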

Journal ArticleDOI
TL;DR: This paper presents a MEC-enhanced mobility support system for IoT, which can be implemented with low operation cost and overhead and can protect the IoT system's users from being exploited through the exposed network location of data uploading and control signals.
Abstract: Communications between IoT devices and backend servers are among the most critical aspects of an IoT system, as they not only govern hardware requirements and system performance but also impact the security and privacy of the system. Multi-access edge computing (MEC) is an enabler for service development in 5G. Furthermore, MEC will reduce IoT system communication costs such as latency and bandwidth thanks to its ubiquitously nearby processing and storage capability. MEC will also provide opportunities to enhance an IoT system's network privacy, which cannot easily be addressed by the IoT system itself. In this paper, we present a MEC-enhanced mobility support system for IoT: with MEC, it can be implemented with low operation cost and overhead, and it can protect the IoT system's users from being exploited through the exposed network location of data uploading and control signals.

Journal ArticleDOI
TL;DR: An ABE scheme called efficient and revocable storage CP-ABE scheme with outsourced decryption and constant-size ciphertexts and secret keys is proposed and is provably secure against selectively chosen-ciphertext attack.
Abstract: As a special kind of public-key encryption, attribute-based encryption (ABE) is able to achieve a fine-grained access control mechanism by offering one-to-many encryption. Due to this unique characteristic, the primitive is widely employed in the cloud computing environment to provide flexible and secure data sharing. However, how to revoke a user's privilege to access encrypted data stored in cloud servers is challenging. Furthermore, the complex operations of ABE may cause a huge computational cost and are usually considered a heavy burden for system users. Motivated by these practical needs, an efficient and revocable-storage CP-ABE scheme with outsourced decryption and constant-size ciphertexts and secret keys is proposed in this paper. In addition, the scheme is provably secure against selectively chosen-ciphertext attack. At the end, we describe how to deploy the scheme in a cloud computing environment.

Journal ArticleDOI
TL;DR: The proposed GACO performs better in comparison to three different meta-heuristic algorithms, namely the Genetic Algorithm, Particle Swarm Optimization and Artificial Bee Colony, and optimizes the cloud resource allocation problem, giving better results in unknown search spaces.
Abstract: This work presents a swarm-based meta-heuristic technique known as the Generalized Ant Colony Optimizer (GACO). It is a hybrid approach which combines Simple Ant Colony Optimization and Global Colony Optimization concepts. The main concept behind GACO is the foraging behavior of ants. GACO operates in the following four phases: creation of a new colony, search for the nearest food location, balancing of the solution, and updating of pheromone. GACO has been tested on seventeen well-recognized standard benchmark functions and its results have been compared with three different meta-heuristic algorithms, namely the Genetic Algorithm, Particle Swarm Optimization and Artificial Bee Colony. Performance metrics such as the average and standard deviation are computed and the algorithms are evaluated with respect to them. The proposed GACO performs better in comparison to the aforementioned algorithms. The proposed algorithm also optimizes the cloud resource allocation problem and gives better results in unknown search spaces.
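Since GACO's exact four phases are only summarized above, here is a generic ant colony optimization sketch on a toy shortest-path instance, showing the pheromone evaporation and deposit cycle that such foraging-inspired algorithms share. The graph, parameters and path problem are invented for illustration.

```python
import random

# Generic ACO sketch: ants walk S -> T, edge choice is biased by pheromone
# divided by edge cost, and the best path found so far is reinforced.

edges = {
    "S": {"A": 2, "B": 5}, "A": {"T": 2, "B": 1},
    "B": {"T": 1}, "T": {},
}

def aco_shortest(edges, start="S", goal="T", ants=20, iters=30, seed=0):
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in edges for v in edges[u]}   # pheromone
    best_path, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            node, path, length = start, [start], 0
            while node != goal and edges[node]:
                nxt = rng.choices(list(edges[node]),
                                  weights=[tau[(node, v)] / edges[node][v]
                                           for v in edges[node]])[0]
                length += edges[node][nxt]
                node = nxt
                path.append(node)
            if node == goal and length < best_len:
                best_path, best_len = path, length
        # evaporate, then deposit pheromone along the best path so far
        tau = {e: 0.9 * t for e, t in tau.items()}
        for e in zip(best_path, best_path[1:]):
            tau[e] += 1.0 / best_len
    return best_path, best_len

path, length = aco_shortest(edges)
print(path, length)
```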

Journal ArticleDOI
TL;DR: The wildcard mask is used to implement load balancing directly on switches or routers, and a user prediction mechanism is added to change the range of the wildcard mask dynamically; the proposed method is shown to be applicable to a variety of service platforms.
Abstract: With the advent of OpenFlow, the concept of Software-Defined Networking (SDN) has become much more popular. In the past, SDN had often been used for network virtualization; however, with the rise of OpenFlow, which speeds up network performance by separating the control layer from the data layer, SDN can be further used to manage physical network facilities. Currently, some OpenFlow controller providers already offer users load balancer packages in their controllers for virtual networks, such as the Neutron package in OpenStack; nevertheless, the existing load balancer packages work in the old fashion, which causes extra delay since they poll controllers for every new incoming connection. In this paper, we use the wildcard mask to implement the load balance method directly on switches or routers and add a user prediction mechanism to change the range of the wildcard mask dynamically. In this way, the load balance mechanism can be applied in conformance with real service situations. In our experiments, we test the accuracy of flow prediction for different prediction algorithms and compare the delay times and balance situations of the proposed method with those of other load balancers. With the popularity of cloud computing, the demand for cloud infrastructure also increases. As a result, we also apply our load balance mechanism to cloud services and show that the proposed method can be implemented on a variety of service platforms.
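The key idea, matching a wildcard mask so that the low-order bits of the source IP select a backend directly in the switch's flow table (with no per-connection controller round trip), can be sketched as follows. The two-bit mask (four buckets) and the server addresses are illustrative assumptions.

```python
import ipaddress

# Sketch of wildcard-mask load balancing: a flow rule that matches only the
# low `mask_bits` bits of the source address (wildcarding the rest) splits
# clients evenly across backends.

def bucket(src_ip, mask_bits=2):
    """Index derived from the low `mask_bits` bits of the source address."""
    return int(ipaddress.ip_address(src_ip)) & ((1 << mask_bits) - 1)

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

def assign(src_ip, mask_bits=2):
    return servers[bucket(src_ip, mask_bits)]

clients = [f"192.168.1.{i}" for i in range(8)]
for c in clients:
    print(c, "->", assign(c))
```

The paper's prediction mechanism would then widen or narrow `mask_bits` at runtime as the expected client population changes.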

Journal ArticleDOI
TL;DR: An extensive overview is presented of the challenges that arise when testing large IoT applications at the system level, including the IoT development pipeline, synchronization between the real-life and simulation environments, and the scalability constraints of modern simulation techniques.
Abstract: This paper presents an extensive overview of the challenges that arise when testing large IoT applications at the system level. In order to do that, we start by analyzing the behavior of local entities such as IoT devices or people interacting with the IoT system. The interactions of these local entities eventually lead to an emergent behavior. Both the emergent behavior and the local behavior need to be taken into account when testing IoT systems. Therefore, we present a novel hybrid simulation-based testing approach that is able to effectively facilitate interactions of these local entities. Furthermore, we introduce various solutions to the challenges that arise when implementing this hybrid methodology. These challenges are mainly related to the IoT development pipeline, synchronization between the real-life and simulation environments, and the scalability constraints of modern simulation techniques.

Journal ArticleDOI
TL;DR: It is revealed that combining the outputs of several ASATs does not always improve vulnerability detection performance over a single ASAT, and by using the proposed procedure a developer can choose the combination of ASATs that best fits the project requirements.
Abstract: Automated Static Analysis Tools (ASATs) are one of the best ways to search for vulnerabilities in applications, so they are a resource widely used by developers to improve their applications. However, it is well known that the performance of such tools is limited, and their detection capabilities may not meet the requirements of the project regarding the criticality of the application. Diversity is an obvious direction to take to improve the true positives, as different tools usually report distinct vulnerabilities, though at the cost of also increasing the false positives, which may be unacceptable in some scenarios. In this paper, we study the problem of combining diverse ASATs to improve the overall detection of vulnerabilities in web applications, considering four development scenarios with different criticality goals and constraints. These scenarios range from low-budget to high-end (e.g., business-critical) web applications. We tested five ASATs on two datasets, one with real WordPress plugins and another with synthetic test cases. Our findings revealed that combining the outputs of several ASATs does not always improve the vulnerability detection performance over a single ASAT. By using our procedure, a developer can choose the combination of ASATs that best fits the project requirements.
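The true-positive/false-positive trade-off of tool diversity can be sketched as a simple voting rule over each tool's findings: a threshold of 1 is the union (1-out-of-N, highest recall), while a threshold equal to the number of tools is the intersection (N-out-of-N, highest precision). This is a generic illustration of the combination idea, not the paper's exact procedure.

```python
from collections import Counter

def combine_asats(reports, threshold=1):
    """Combine vulnerability reports from several ASATs.

    reports   -- list of sets, one set of findings per tool
    threshold -- minimum number of tools that must agree on a finding
    """
    votes = Counter(f for r in reports for f in set(r))
    return {f for f, c in votes.items() if c >= threshold}
```

In the low-budget scenario a developer might pick `threshold=1` to miss as little as possible; for a business-critical application with limited triage capacity, a higher threshold trims false positives at the cost of recall.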

Journal ArticleDOI
TL;DR: Extensive simulations are carried out to show the effectiveness of the proposed schemes in terms of packet delivery ratio (PDR), energy consumption, and load distribution against the baseline scheme.
Abstract: Imbalanced energy consumption and high data traffic at intermediate nodes degrade network performance. In this paper, we propose three schemes, namely energy grade (EG) and balanced load distribution with corona, EG without corona, and depth adjustment (DA) without corona, to distribute data traffic across the network nodes for efficient energy consumption. The dynamic adjustment of the transmission range in the first scheme helps reduce the data load. Additionally, the transmission range is based purely on the distance, energy, and data load of the receiver node to maximize network lifetime. The second scheme divides a data packet into three fractions (small, medium, and large) that are transmitted along different paths to distribute the data load evenly across the network nodes. In the third scheme, depth adjustment of void nodes is performed to resume network operations, while the load distribution and transmission range mechanisms remain the same. Extensive simulations are carried out to show the effectiveness of the proposed schemes in terms of packet delivery ratio (PDR), energy consumption, and load distribution against the baseline scheme.

Journal ArticleDOI
TL;DR: A distributed scheme for securing the quality of data collected and stored in multiple partitions is proposed; it determines whether incoming data negatively affect the accuracy of the already present datasets and, when this is the case, excludes them from further processing.
Abstract: The involvement of numerous devices and data sources in the current form of the Web leads to the collection of vast volumes of data. The advent of the Internet of Things (IoT) enables devices to act autonomously and transforms them into information and knowledge producers. The vast infrastructure of the Web/IoT becomes the basis for producing data in either a structured or an unstructured way. In this paper, we focus on a distributed scheme for securing the quality of data collected and stored in multiple partitions. High quality is achieved through the adoption of a model that identifies any change in the accuracy of the collected data. The proposed scheme determines whether incoming data negatively affect the accuracy of the already present datasets and, when this is the case, excludes them from further processing. Our approach builds on a scheme that also identifies the appropriate partition to which incoming data should be allocated. We describe the proposed scheme and present simulation and comparison results that give insights into the pros and cons of our solution.
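The exclude-on-degradation idea can be illustrated with a toy accuracy check: accept an incoming batch only if its summary statistics stay close to those of the target partition. This is purely an illustration of the principle, with a hypothetical z-score rule, not the accuracy model of the paper.

```python
from statistics import mean, stdev

def accept_batch(partition, incoming, z_max=3.0):
    """Toy quality gate: accept the incoming values only if their mean lies
    within z_max standard deviations of the partition's current mean.
    (Illustrative only; the paper's accuracy model is more elaborate.)"""
    m, s = mean(partition), stdev(partition)
    if s == 0:
        return mean(incoming) == m
    return abs(mean(incoming) - m) / s <= z_max
```

A batch that passes the gate is merged into the partition; one that fails is excluded from further processing, keeping the stored dataset's accuracy intact.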

Journal ArticleDOI
TL;DR: A general learning feature quantification method converts the raw data from e-learning systems into sets of independent learning features, and weighted avg-pooling is chosen instead of the typical max-pooling in a novel convolutional GRU network for learning performance prediction.
Abstract: Predicting students' learning performance is a challenging task due to dynamic virtual environments and the personalized needs of different individuals. To ensure that learners' potential problems can be identified as early as possible, this paper aims to develop a predictive model for effective learning feature extraction, learning performance prediction, and result reasoning. We first propose a general learning feature quantification method to convert the raw data from e-learning systems into sets of independent learning features. Then, weighted avg-pooling is chosen instead of the typical max-pooling in a novel convolutional GRU network for learning performance prediction. Finally, an improved parallel xNN is provided to explain the prediction results. The positive/negative relevance between features and the result can help students find out which parts should be improved. Experiments have been carried out on data from two real online courses. The results show that our proposed approach performs favorably compared with several other state-of-the-art methods.
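The pooling choice can be seen in isolation: weighted average pooling lets every time step contribute to the summary, scaled by a weight, whereas max-pooling keeps only the peak and discards the rest of a student's activity history. A minimal plain-Python sketch, with hypothetical weights standing in for the learned ones:

```python
def weighted_avg_pool(frames, weights):
    """Weighted average pooling over the time axis.

    frames  -- list of feature vectors, one per time step
    weights -- one (e.g. learned) weight per time step
    """
    total = sum(weights)  # normalize so the weights sum to 1
    dim = len(frames[0])
    return [sum(w * f[d] for f, w in zip(frames, weights)) / total
            for d in range(dim)]
```

With uniform weights this reduces to plain average pooling; skewing the weights toward recent steps would emphasize a student's latest behavior without dropping the earlier signal, which is the motivation for preferring it over max-pooling here.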

Journal ArticleDOI
TL;DR: This work aimed to construct an efficient and reliable classifier to identify HBPs accurately; an accuracy of 90.6% was achieved in five-fold cross-validation, which was superior to that of other published methods.
Abstract: Hormone-binding proteins (HBPs) are important soluble carriers for growth hormones, and correct recognition of HBPs is crucial to understanding their functions. Therefore, we aimed to construct an efficient and reliable classifier to identify HBPs accurately. First, 246 proteins were collected from the UniProt database and used as the benchmark dataset. We employed an 8000-dimensional feature extraction method based on tripeptide compositions to formulate protein samples. Subsequently, we reduced the intricate feature set by utilizing ANOVA, a feature ranking technique, and acquired an optimal feature subset devoid of redundant information. Furthermore, we utilized three classification methods to process the selected tripeptide features, which generated three probability sequences. Finally, the three probability sequences were treated as new features and fed into a support vector machine to construct the prediction model. The results indicated that an accuracy of 90.6% was achieved in five-fold cross-validation, which was superior to that of other published methods.
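The 8000 dimensions come directly from the tripeptide composition: 20 standard amino acids yield 20^3 = 8000 possible residue triplets, and each feature is the normalized frequency of one triplet in the sequence. A minimal sketch of this encoding (the downstream ANOVA ranking and SVM are omitted):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def tripeptide_composition(seq):
    """Return the 8000-dimensional tripeptide composition vector:
    the frequency of each of the 20**3 residue triplets in seq."""
    idx = {a: i for i, a in enumerate(AMINO_ACIDS)}
    vec = [0.0] * 8000
    total = max(len(seq) - 2, 1)  # number of overlapping triplets
    for j in range(len(seq) - 2):
        a, b, c = seq[j], seq[j + 1], seq[j + 2]
        if a in idx and b in idx and c in idx:  # skip non-standard residues
            vec[idx[a] * 400 + idx[b] * 20 + idx[c]] += 1.0 / total
    return vec
```

Each protein thus becomes a fixed-length vector regardless of its sequence length, which is what makes the subsequent feature ranking and classification straightforward.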

Journal ArticleDOI
TL;DR: A Heterogeneous Distributed Precise Coverage Rate (HDPCR) mechanism is presented that calculates the covered area of a heterogeneous wireless sensor network in a localized manner, detecting the network boundary and holes and calculating their areas precisely.
Abstract: As the Internet of Things (IoT) evolves, more and more Wireless Sensor Networks (WSNs) are being deployed in the real world. Connected vehicles, smart grids, smart cities, smart healthcare, networks of robots, and disaster recovery networks are some examples. In WSNs, area coverage is one of the most important quality-of-service metrics: a WSN without enough area coverage yields incorrect results, so calculating the covered area of a WSN is mandatory. Previous studies have used a simple approach in which all nodes send their locations to the sink, which calculates the covered area centrally; this incurs a huge, unnecessary communication overhead. In our previous work, titled Distributed Exact Coverage Rate Calculation, we calculated the covered area of a homogeneous WSN in a distributed manner. In this paper, we provide a Heterogeneous Distributed Precise Coverage Rate (HDPCR) mechanism that calculates the covered area of a heterogeneous WSN using a localized mechanism. Using boundary detection mechanisms, HDPCR detects the boundary of the network and calculates its area. HDPCR also detects holes and calculates their areas precisely. By subtracting these two calculated values, the covered area of the network can be computed. Many related studies have evaluated the coverage rate only approximately, with error, and require additional calculation to reduce the error rate. HDPCR calculates the coverage rate precisely, without an error rate, using simple arithmetic calculations. Exhaustive simulations also show the superiority of HDPCR compared to previous approaches.
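The final subtraction step rests on elementary polygon geometry: once the boundary and hole outlines are known, each area follows from the shoelace formula. The sketch below shows that arithmetic only, not HDPCR's distributed boundary and hole detection:

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon given its (x, y)
    vertices in order (clockwise or counter-clockwise)."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def covered_area(boundary, holes):
    """Covered area = outer boundary area minus the area of each hole."""
    return polygon_area(boundary) - sum(polygon_area(h) for h in holes)
```

Because both areas are computed exactly from the detected vertices, the resulting coverage rate carries no approximation error, which is the property HDPCR exploits.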

Journal ArticleDOI
TL;DR: This paper presents a survey of data aggregation processes in a variety of application domains from the literature, and investigates their common and variable features, which serve as the basis of a previously proposed taxonomy called DAGGTAX.
Abstract: Data aggregation processes are essential constituents of data management in modern computer systems, such as decision support systems and Internet of Things systems, many with timing constraints. Understanding the common and variable features of data aggregation processes, especially their implications for time-related properties, is key to improving the quality of the designed system and reducing design effort. In this paper, we present a survey of data aggregation processes in a variety of application domains from the literature. We investigate their common and variable features, which serve as the basis of our previously proposed taxonomy called DAGGTAX. By studying the implications of the DAGGTAX features, we formulate a set of constraints to be satisfied during design, which helps to check the correctness of the specifications and reduce the design space. We also provide a set of design heuristics that can help designers decide on the appropriate mechanisms for achieving the selected features. We apply DAGGTAX in industrial case studies, showing that DAGGTAX not only strengthens understanding, but also serves as the foundation of a design tool that facilitates the model-driven design of data aggregation processes.

Journal ArticleDOI
TL;DR: This paper shows that an existing scheme for group sum evaluation over outsourced data streams collected by IoT devices is not secure, and gives two improved schemes that resist the attack, along with an analysis of their security.
Abstract: With the continuous development of Internet of Things (IoT) technology, large amounts of data are generated by IoT devices, which require large-scale data processing and storage technologies. Cloud computing is a paradigm for handling such massive data. With the help of cloud computing, IoT devices can utilize data more efficiently, conveniently, and quickly. Therefore, how to promote better integration of the IoT and cloud computing is an interesting research problem. In the big data era, group sum evaluation over outsourced data streams collected by IoT devices is an essential building block in many stream applications, such as statistical monitoring, data mining, and machine learning. It is therefore very valuable to design a mechanism to verify the correctness of group sum evaluation over outsourced data streams, especially when the streams originate from multiple data sources. Recently, Liu et al. proposed such a scheme to solve this problem. In this paper, however, we show that their scheme is not secure. Concretely, an adversary can easily forge tags for outsourced data, so the correctness of the group sum evaluation can no longer be guaranteed. Furthermore, we give two improved schemes that resist our attack and analyze their security. Finally, we roughly evaluate the performance of our two improved schemes. Our first scheme has almost the same efficiency as Liu et al.'s proposal but without the security flaw; the second scheme shares the same structure as Liu et al.'s proposal and is compatible with the existing composite-order bilinear pairing cryptosystem.
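The verification principle behind such schemes can be illustrated with a toy linearly homomorphic tag: tags of individual values sum to the tag of their sum, so a verifier holding the key can check a claimed aggregate without replaying the stream. This toy construction is purely illustrative and offers no real security; it is not Liu et al.'s scheme or either of the improved ones.

```python
P = 2_147_483_647   # toy prime modulus
K = 123_456_789     # toy secret key shared by data source and verifier

def tag(value):
    """Toy linearly homomorphic tag: tag(v) = K*v mod P, so
    tag(v1) + tag(v2) = tag(v1 + v2) (mod P)."""
    return (K * value) % P

def verify_sum(claimed_sum, tags):
    """Check a claimed group sum against the sum of the per-value tags."""
    return tag(claimed_sum) == sum(tags) % P
```

The attack surface in real schemes lies in how tags are bound to sources and positions; a linear tag like this one is trivially forgeable, which is exactly the class of weakness the paper exploits and then repairs.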

Journal ArticleDOI
TL;DR: This paper introduces a new approach to online traffic analysis based on a pattern-based representation for high-level summarization of traffic measurement data, employing clustering for its ability to aggregate multidimensional variables.
Abstract: Traffic analysis is a core element of network operations and management for various purposes, including change detection, traffic prediction, and anomaly detection. In this paper, we introduce a new approach to online traffic analysis based on a pattern-based representation for high-level summarization of traffic measurement data. Unlike past online analysis techniques limited to summarizing a single variable (e.g., sketches), this study focuses on capturing the network state from the multivariate attributes under consideration. To this end, we employ clustering, with its benefit of aggregating multidimensional variables. The clustered result represents the state of the network with regard to the monitored variables, and can be compared with the patterns observed in previous time windows, enabling intuitive analysis. We demonstrate the proposed method with two popular use cases, one for estimating state changes and the other for identifying anomalous states, to confirm its feasibility. Our extensive experimental results with public traces and monitoring measurements collected from ESnet show that our pattern-based approach is effective for multivariate analysis of online network traffic, with both visual and quantitative tools.
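The pattern-based summarization can be sketched with a tiny k-means: each time window yields one multivariate feature vector (e.g., byte rate, packet rate, flow count), and the cluster centroids act as the observed "states" that later windows are compared against. The code below is a generic illustration, not the paper's clustering configuration.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over multivariate traffic-window vectors.
    Returns k centroids summarizing the observed network states."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each window to its nearest centroid (squared distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # recompute each centroid as the mean of its assigned windows
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids
```

Comparing the centroids of the current window batch against those of previous batches gives the change-detection and anomaly use cases: a window far from every known centroid suggests an anomalous state.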