
Showing papers in "International Journal of Web and Grid Services in 2019"


Journal ArticleDOI
TL;DR: A fault-tolerant TBFC (FTBFC) model with a minimum energy (ME) algorithm that selects a new parent fog node whose energy consumption is minimum; the evaluation shows that the ME algorithm reduces the energy consumption and execution time of the new parent fog node.
Abstract: In the fog computing model of the IoT, subprocesses of an application process that handles sensor data are performed on fog nodes. Since the IoT is large in scale, the electric energy consumption of these nodes has to be reduced. In the tree-based fog computing (TBFC) model, fog nodes are hierarchically structured. In this paper, we propose a fault-tolerant TBFC (FTBFC) model together with a pair of fault-tolerant strategies. In the data transmission strategy, data processed by disconnected fog nodes is sent to a new parent fog node; here, we propose a minimum energy (ME) algorithm to select the new parent fog node whose energy consumption is minimum. In the subprocess transmission strategy, the subprocess of the faulty fog node is sent to another fog node. The evaluation shows that the ME algorithm reduces the energy consumption and execution time of the new parent fog node.

26 citations
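
As a rough illustration of the ME selection step, the sketch below picks the candidate parent fog node with the lowest estimated energy cost for taking over a disconnected node's data. The linear energy model and field names are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of minimum-energy (ME) parent selection; the linear
# receive+process energy model and field names are assumptions.

def select_new_parent(candidates, data_size_mb):
    """Return the candidate fog node with the lowest estimated energy cost
    for receiving and processing the disconnected node's data."""
    def estimated_energy(node):
        return (node["recv_j_per_mb"] + node["proc_j_per_mb"]) * data_size_mb
    return min(candidates, key=estimated_energy)

candidates = [
    {"id": "f1", "recv_j_per_mb": 0.4, "proc_j_per_mb": 1.1},
    {"id": "f2", "recv_j_per_mb": 0.3, "proc_j_per_mb": 0.9},
]
print(select_new_parent(candidates, data_size_mb=50)["id"])  # -> f2
```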


Journal ArticleDOI
TL;DR: This work proposes and implements an intelligent hybrid simulation system based on particle swarm optimisation (PSO) and simulated annealing (SA) for the node placement problem in WMNs, called WMN-PSOSA, and evaluates the performance of the system considering different distributions of mesh clients, such as the Weibull, chi-square and uniform distributions.
Abstract: Today's networks are going through a rapid evolution, and many new networks, especially wireless ones, are appearing. Wireless mesh networks (WMNs) are wireless networks with a mesh topology. They have many advantages, such as high robustness and ease of maintenance. However, WMNs have some problems that need to be solved; one important problem is the node placement problem, which is NP-hard. In this work, we propose and implement an intelligent hybrid simulation system based on particle swarm optimisation (PSO) and simulated annealing (SA) for the node placement problem in WMNs, called WMN-PSOSA. We evaluate the performance of the WMN-PSOSA system considering different distributions of mesh clients, such as the Weibull, chi-square and uniform distributions. The simulation results show that WMN-PSOSA performs better for the Weibull distribution than for the chi-square and uniform distributions.

16 citations
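
The hybrid idea, PSO position updates combined with a simulated-annealing acceptance test, can be sketched as below. The objective function, bounds and parameters are placeholders; the paper's actual fitness evaluates mesh router placements against the client distributions named above.

```python
# Sketch of a PSO loop with an SA-style acceptance step for personal bests.
import math, random

def fitness(x):  # placeholder objective to minimise
    return sum(xi * xi for xi in x)

def pso_sa(dim=4, swarm=10, iters=200, w=0.7, c1=1.5, c2=1.5, temp=1.0, cool=0.98):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # SA step: accept a worse personal best with temperature-scaled probability
            delta = fitness(pos[i]) - fitness(pbest[i])
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=fitness)
        temp *= cool  # cooling schedule: accept fewer uphill moves over time
    return gbest

print(fitness(pso_sa()))  # best objective value found
```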


Journal ArticleDOI
TL;DR: This paper models the issue of uncertain QoS-aware WSC via interval numbers and translates it into a multi-objective optimisation problem with global QoS constraints from user preferences, and demonstrates that the proposed approach can effectively and efficiently find a set of optimal composite service solutions with satisfactory convergence.
Abstract: Quality of service (QoS)-aware web service composition (QWSC) has recently become one of the most challenging research issues. Although much work has been done, it mainly focuses on certain (deterministic) QoS of web services, while QoS uncertainty is a defining characteristic of real, highly dynamic environments. In this paper, taking uncertain service QoS and user preferences into consideration, we model the issue of uncertain QoS-aware WSC via interval numbers and translate it into a multi-objective optimisation problem with global QoS constraints from user preferences. The encoded optimisation problem is solved by a non-deterministic multi-objective evolutionary algorithm, which exploits a new genetic encoding scheme, a crossover strategy and uncertain interval Pareto comparison. To validate feasibility, large-scale experiments have been conducted on simulated datasets. The results demonstrate that our proposed approach can effectively and efficiently find a set of optimal composite service solutions with satisfactory convergence.

13 citations
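
One ingredient named in the abstract, comparing candidate compositions whose QoS values are intervals, can be illustrated with a simple dominance test. The endpoint-wise rule below (all attributes to be minimised) is one common convention, not necessarily the paper's exact definition.

```python
# Sketch of interval Pareto dominance over QoS vectors given as intervals.

def interval_dominates(u, v):
    """u, v: lists of (low, high) QoS intervals, all to be minimised.
    u dominates v if it is no worse on every attribute and strictly
    better on at least one, comparing both interval endpoints."""
    no_worse = all(ul <= vl and uh <= vh for (ul, uh), (vl, vh) in zip(u, v))
    strictly = any(ul < vl or uh < vh for (ul, uh), (vl, vh) in zip(u, v))
    return no_worse and strictly

a = [(1.0, 2.0), (0.2, 0.4)]   # e.g., (latency, cost) intervals
b = [(1.5, 2.5), (0.2, 0.5)]
print(interval_dominates(a, b))  # True: a is at least as good everywhere
```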


Journal ArticleDOI
TL;DR: A taxonomy of these approaches based on the key issues in online astroturfing detection techniques is presented and the relevant approaches in each category are discussed.
Abstract: Astroturfing is one of the most impactful threats on today's internet. It is the process of masking and portraying a doctored message to the general population as though it originated at the grass-roots level. Astroturfing detection has started to gain popularity among researchers in social media, e-commerce and politics. With the recent growth of crowdsourcing systems, astroturfing is also having a profound impact on people's opinions. Political blogs, news portals and review websites are being flooded with astroturfs. Some groups use astroturfing to promote their interests, and some use it to undermine the interests of competitors. Researchers have adopted many approaches to detect astroturfing on the web, including content analysis techniques, individual and group identification techniques, analysis of linguistic features, authorship attribution techniques, machine learning and so on. We present a taxonomy of these approaches based on the key issues in online astroturfing detection and discuss the relevant approaches in each category. The paper also summarises the discussed literature and highlights research challenges and directions for future work not yet addressed by the available research.

8 citations


Journal ArticleDOI
TL;DR: By modifying task queues so that they hold task objects instead of pointers, this paper increases performance by more than 2.5 times on CPU-bound applications and decreases last-level cache misses by up to 30% compared to the Intel TBB and Intel/MIT Cilk work-stealing schedulers.
Abstract: Work-stealing is one of the popular ways to schedule near-optimal task distribution across multiple CPU cores with low overheads in time, memory and inter-thread synchronisation. In the work-stealing strategy, workers that run out of tasks for execution start claiming tasks from other workers' queues. Double-ended queues (deques) based on circular arrays have proved to be an effective solution for this scenario. In this paper, we investigate ways to improve the performance of deque-based work-stealing schedulers by enhancing their internal data handling mechanisms. Traditionally, deques are designed with the assumption that task pointers are stored within these data structures, while the task objects reside in heap memory. By modifying task queues so that they hold task objects instead of pointers, we managed to increase performance by more than 2.5 times on CPU-bound applications and decrease last-level cache misses by up to 30% compared to the Intel TBB and Intel/MIT Cilk work-stealing schedulers.

7 citations
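
For readers unfamiliar with the underlying data structure, here is a single-threaded sketch of the circular-array work-stealing deque: the owner pushes and pops at the bottom, thieves take from the top. The paper's optimisation, storing task objects inline in the array instead of pointers to heap-allocated tasks to cut cache misses, is a memory-layout property this high-level sketch cannot express; real implementations also need atomic operations.

```python
# Single-threaded sketch of a circular-array work-stealing deque.

class WorkStealingDeque:
    def __init__(self, capacity=64):
        self.buf = [None] * capacity
        self.top = 0      # thieves steal here (oldest task)
        self.bottom = 0   # owner pushes/pops here (newest task)

    def push(self, task):                 # owner only
        if self.bottom - self.top == len(self.buf):
            raise IndexError("deque full; real implementations grow the array")
        self.buf[self.bottom % len(self.buf)] = task
        self.bottom += 1

    def pop(self):                        # owner only, LIFO
        if self.bottom == self.top:
            return None
        self.bottom -= 1
        return self.buf[self.bottom % len(self.buf)]

    def steal(self):                      # other workers, FIFO
        if self.top == self.bottom:
            return None
        task = self.buf[self.top % len(self.buf)]
        self.top += 1
        return task

dq = WorkStealingDeque()
for t in ("t1", "t2", "t3"):
    dq.push(t)
print(dq.pop(), dq.steal())  # t3 (owner takes newest), t1 (thief takes oldest)
```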


Journal ArticleDOI
TL;DR: A fog computing layer between the cloud and clients is introduced to provide an energy management service in near real-time with optimised computing cost, and simulations validate efficient system operation cost, processing time and response time (RT) for power consumers.
Abstract: The latency of cloud-based systems can degrade the smart grid's real-time applications. This paper introduces a fog computing layer between the cloud and clients to provide an energy management service in near real-time with optimised computing cost. In the proposed system model, two clusters of residential buildings in a region have access to two fogs for processing their energy requests. A hybrid service broker policy and a modified honey bee colony optimisation algorithm are proposed for efficient fog selection and for balancing the request load across virtual machines in a fog. Micro grids (MGs) are introduced into the system model between the fogs and the clusters for uninterrupted and cost-efficient power supply. The recurring cost of the MGs and the computing cost of the fogs make up the system operation cost, which is payable by the consumers. The simulations validate efficient system operation cost, processing time and response time (RT) for power consumers.

6 citations
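
The load-balancing step can be pictured with a honey-bee-style assignment in which each incoming request goes to the least-loaded virtual machine. The load metric and request costs below are illustrative assumptions; the paper's modified honey bee colony algorithm is more involved.

```python
# Sketch of honey-bee-flavoured load balancing across fog VMs.

def assign_requests(requests, vms):
    """vms: dict vm_id -> current load. Each request goes to the least-loaded
    VM (the 'flower patch' advertised by scout bees, in the analogy)."""
    placement = {}
    for req_id, cost in requests:
        vm = min(vms, key=vms.get)      # least-loaded VM at this moment
        vms[vm] += cost
        placement[req_id] = vm
    return placement

vms = {"vm1": 0.0, "vm2": 0.0, "vm3": 0.0}
reqs = [("r1", 3.0), ("r2", 1.0), ("r3", 2.0), ("r4", 1.0)]
print(assign_requests(reqs, vms))  # spreads the load across vm1..vm3
```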


Journal ArticleDOI
TL;DR: An approach for web service composition that combines overall QoS and a modified graphplan is proposed, and it is shown to find better solutions than the original graphplan and some other approaches.
Abstract: Increasing emphasis on users' preferences and the growth of services on the web make service composition a time-consuming and complicated task. In this paper, an approach for web service compositio...

4 citations
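
The graphplan flavour of composition can be sketched as layered forward expansion: add services whose inputs are already reachable until the requested outputs are covered. The QoS-guided selection the paper adds is omitted, and the service data below are invented for illustration.

```python
# Sketch of planning-graph expansion for input/output service composition.

def compose(services, provided, goal):
    """services: name -> (inputs, outputs) as sets. Returns service layers."""
    known, layers = set(provided), []
    while not goal <= known:
        layer = [n for n, (ins, outs) in services.items()
                 if ins <= known and not outs <= known]
        if not layer:
            return None                  # goal unreachable from the inputs
        layers.append(layer)
        for n in layer:
            known |= services[n][1]      # outputs become available next layer
    return layers

services = {
    "geocode": ({"address"}, {"coords"}),
    "weather": ({"coords"}, {"forecast"}),
}
print(compose(services, provided={"address"}, goal={"forecast"}))
# [['geocode'], ['weather']]
```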


Journal ArticleDOI
TL;DR: Smart tableware consisting of an acceleration sensor and a pressure sensor obtains meal information, such as meal sequence and meal content, automatically; feature extraction is performed on the meal information captured by the sensors, and machine learning algorithms are used to analyse and process it.
Abstract: In recent years, owing to lifestyle-related diseases, people have paid more and more attention to the management of healthy meals, and meal management systems are gradually entering people's lives. Existing studies have found that proper meal habits, such as a correct meal sequence, can help prevent disease to a certain extent. In this paper, we introduce smart tableware consisting of an acceleration sensor and a pressure sensor to obtain meal information, such as meal sequence and meal content, automatically. Feature extraction is performed on the meal information captured by the sensors, and machine learning algorithms are used to analyse and process the information. Finally, the meal content and meal sequence are fed back to the user to help prevent lifestyle-related diseases such as obesity and diabetes. In the experiment, we compare a variety of machine learning algorithms and analyse the results.

3 citations
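
A minimal version of the described pipeline, window-level features from the sensor streams followed by a comparison of standard classifiers, might look as follows. The features, the synthetic data and the choice of classifiers are assumptions; the abstract does not specify them.

```python
# Sketch: per-window sensor features, then a classifier comparison.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def features(window):
    """Simple per-window statistics (mean, std, range) per sensor channel."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.max(axis=0) - window.min(axis=0)])

# Synthetic stand-in data: 200 windows of 50 samples x 4 channels
# (3 accelerometer axes + 1 pressure), two meal-event classes.
X = np.array([features(rng.normal(size=(50, 4)) + label)
              for label in (0, 1) for _ in range(100)])
y = np.array([label for label in (0, 1) for _ in range(100)])

for clf in (RandomForestClassifier(random_state=0), SVC()):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```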


Journal ArticleDOI
TL;DR: This work proposes SAW-Q, an extension of simple additive weighting (SAW), as a novel dynamic composition technique that follows the principles of the REST style, and models quality attributes as a function of the actual service demand instead of the traditional constant values.
Abstract: Service composition is one of the principles of service-oriented architecture; it enables reuse and allows developers to combine existing services in order to create new services that can in turn be part of another composition. Dynamic composition requires that service components be chosen, possibly automatically, at runtime from a set of services with equal or similar functionality. The adoption of REST services in industry has led to a growing number of services of this type, many with similar functionality. Existing dynamic composition techniques are method-oriented, whereas REST is resource-oriented, and they consider only traditional (WSDL/SOAP) services. We propose SAW-Q, an extension of simple additive weighting (SAW), as a novel dynamic composition technique that follows the principles of the REST style. Additionally, SAW-Q models quality attributes as a function of the actual service demand instead of the traditional constant values. Our model is much more accurate when compared to real implementations, positively improving the quality of dynamic service compositions.

1 citation
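
The core of SAW is a weighted sum of normalised quality attributes; SAW-Q's twist is that attribute values depend on the current demand. The sketch below illustrates that idea with an invented latency/demand model and normalisation convention, not the paper's exact formulas.

```python
# Sketch of simple additive weighting with demand-dependent quality values.

def saw_score(candidates, weights, demand):
    """candidates: name -> dict of attribute functions of demand.
    Benefit attributes are normalised upward, cost attributes downward,
    then combined as a weighted sum per candidate."""
    vals = {n: {a: f(demand) for a, f in attrs.items()}
            for n, attrs in candidates.items()}
    scores = {}
    for n, attrs in vals.items():
        s = 0.0
        for a, (w, kind) in weights.items():
            col = [v[a] for v in vals.values()]
            lo, hi = min(col), max(col)
            norm = 1.0 if hi == lo else (
                (attrs[a] - lo) / (hi - lo) if kind == "benefit"
                else (hi - attrs[a]) / (hi - lo))
            s += w * norm
        scores[n] = s
    return scores

candidates = {
    "svcA": {"availability": lambda d: 0.99, "latency": lambda d: 20 + 0.5 * d},
    "svcB": {"availability": lambda d: 0.95, "latency": lambda d: 35 + 0.1 * d},
}
weights = {"availability": (0.4, "benefit"), "latency": (0.6, "cost")}
print(saw_score(candidates, weights, demand=100))  # svcB wins at high demand
```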


Journal ArticleDOI
TL;DR: A public auditing protocol for data integrity based on an adjacency hash table is analysed; the data blocks' tags can be easily forged in that proposal, so a cloud server can lose data but still produce a correct proof of data possession.
Abstract: Nowadays, cloud storage is a popular service: many data owners prefer to outsource their data to cloud servers. However, cloud servers may sometimes lose data due to accidents, so the integrity of the outsourced data needs to be verifiable by the data owners, or even publicly by any third party. Recently, in the mobile cloud computing setting, Chen et al. proposed a public auditing protocol for data integrity based on an adjacency hash table. However, we find that the data blocks' tags can be easily forged in their proposal; thus a cloud server can lose data but still give a correct proof of data possession, which breaks the security of their proposal. We show two concrete attacks on their proposal, give an improved public auditing protocol for cloud storage integrity checking, and roughly analyse its security.

1 citation
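
The flavour of the weakness can be sketched in miniature: a tag anyone can recompute from the block alone is forgeable by the server, while a tag keyed with a secret the server never sees is not. Real auditing protocols use homomorphic authenticators to support compact proofs; this shows only the general principle, not the paper's concrete scheme.

```python
# Sketch: unkeyed hash tags are forgeable, keyed (HMAC) tags are not.
import hashlib, hmac

block = b"outsourced data block #17"
weak_tag = hashlib.sha256(block).hexdigest()   # server can recompute/forge this

owner_key = b"data-owner-secret"               # never revealed to the server
strong_tag = hmac.new(owner_key, block, hashlib.sha256).hexdigest()

def verify(tag, blk, key):
    return hmac.compare_digest(tag, hmac.new(key, blk, hashlib.sha256).hexdigest())

print(verify(strong_tag, block, owner_key))                  # True
print(verify(strong_tag, b"substituted block", owner_key))   # False: swap detected
```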


Journal ArticleDOI
TL;DR: This paper aims to bridge the gap by proposing a new approach for service selection based on DFD to assist organisations in speeding up the process of migrating their legacy systems to SOA.
Abstract: There are many service identification methods (SIMs) to simplify service identification in the SOA lifecycle. These SIMs vary in terms of their features (e.g., input artefact, technique). Because of this diversity, a few evaluation frameworks have been proposed to guide organisations in selecting a suitable SIM based on their available input artefacts (e.g., source code, business processes). This research concerns SIMs that take the data flow diagram (DFD) as an input artefact, in order to migrate two legacy systems modelled with DFDs to SOA. Only two SIMs in the literature identify services based on DFDs; however, they do not provide a way to select which of the identified services should be implemented as web services. This paper therefore aims to bridge this gap by proposing a new DFD-based approach for service selection, to assist organisations in speeding up the migration of their legacy systems to SOA.

Journal ArticleDOI
TL;DR: This is the first realistic attempt to study DaaS in a strategic setting; the proposed approach is evaluated under various simulation scenarios to judge its usefulness and efficiency.
Abstract: Data-as-a-service (DaaS) is the next emerging technology in cloud computing research. Small clouds operating as a group may exploit DaaS efficiently to perform a substantial amount of work. In this paper, an auction framework is studied and evaluated for the case where the small clouds are strategic in nature. We present the system model, a formal definition of the problem and its experimental evaluation. Several DaaS-based auction mechanisms are proposed, and their correctness and computational complexity are analysed. To the best of our knowledge, this is the first realistic attempt to study DaaS in a strategic setting. We have evaluated the proposed approach under various simulation scenarios to judge its usefulness and efficiency.
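
As a hint of what "strategic" means here, a truthful second-price (Vickrey) auction is the standard starting point for mechanisms with self-interested bidders. The reverse-auction sketch below is a generic illustration; the paper's actual mechanisms are not specified in the abstract.

```python
# Sketch of a reverse second-price auction among strategic small clouds.

def vickrey(bids):
    """bids: cloud_id -> bid for serving a DaaS request (lowest cost wins).
    The winner is paid the second-lowest bid, which makes truthful
    bidding a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    (winner, _), (_, second_price) = ranked[0], ranked[1]
    return winner, second_price

bids = {"cloudA": 12.0, "cloudB": 9.5, "cloudC": 11.0}
print(vickrey(bids))  # ('cloudB', 11.0): B wins and is paid C's bid
```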

Journal ArticleDOI
TL;DR: A semantic government process management (SGPM) approach for the design and deployment of legally compliant government processes that is mainly articulated around an ontological framework, with a high level of abstraction, allowing the explicit representation of legal context associated with government processes.
Abstract: In recent years, governments have embraced business process management practices to improve their interactions and services with various stakeholders such as citizens, businesses and other government agencies. Nevertheless, current government process management solutions are still very limited at the semantic level, leading to challenges in dealing with legal, social, organisational, political and economic constraints, collectively referred to as 'context'. In this paper, we introduce a semantic government process management (SGPM) approach for the design and deployment of legally compliant government processes. The developed solution is mainly articulated around an ontological framework, with a high level of abstraction, allowing the explicit representation of the legal context associated with government processes. It is connected to a defined legal meta-model that acts as a guideline and knowledge source for legal context extraction. Moreover, this framework is substantiated by a legal features model allowing the semantic representation of structural relationships and dependencies between processes, sub-processes and activities. The ontological framework is implemented as software assets, using OWL-DL, that constitute the kernel from which executable BPEL government processes are automatically generated.