
Showing papers on "Services computing published in 2019"


Journal ArticleDOI
TL;DR: In this article, a generic and resource oriented Stochastic timed Petri nets (STPN) simulation engine is presented for the analysis of service delivery system quality vs. resource provisioning.
Abstract: In many service delivery systems, the quantity of available resources is often a decisive factor in service quality. Resources can be personnel, offices, devices, supplies, and so on, depending on the nature of the services a system provides. Although service computing has been an active research topic for decades, general approaches that rigorously assess the impact of resource provisioning on service quality metrics are still lacking. Petri nets have been a popular formalism for modeling systems that exhibit competition and concurrency for almost half a century. Stochastic timed Petri nets (STPN), an extension of regular Petri nets, are a powerful tool for system performance evaluation. However, we did not find any existing STPN software tool that supports all timed transition firing policies and server types, let alone resource provisioning and requirement analysis. This paper presents a generic, resource-oriented STPN simulation engine that provides all the critical features necessary for analyzing service delivery system quality versus resource provisioning. The power of the simulation system is illustrated by an application to emergency health care systems.
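As a hedged illustration of what such an engine simulates (not the paper's implementation; the places, transitions, and rates below are hypothetical), a minimal stochastic timed Petri net with a race firing policy can be sketched in a few lines:

```python
import random

random.seed(0)

# Minimal stochastic timed Petri net: a hypothetical service desk where
# 'staff' is the provisioned resource and 'queue' holds waiting customers.
marking = {"queue": 5, "staff": 2, "busy": 0, "done": 0}

# Each transition: (input places, output places, rate of its exponential delay)
transitions = {
    "start_service": ({"queue": 1, "staff": 1}, {"busy": 1}, 1.0),
    "end_service":   ({"busy": 1}, {"staff": 1, "done": 1}, 0.5),
}

def enabled(name):
    ins, _, _ = transitions[name]
    return all(marking[p] >= n for p, n in ins.items())

clock = 0.0
while True:
    # Race policy: sample a delay for every enabled transition, fire the earliest.
    samples = [(random.expovariate(rate), name)
               for name, (_, _, rate) in transitions.items() if enabled(name)]
    if not samples:
        break
    delay, name = min(samples)
    clock += delay
    ins, outs, _ = transitions[name]
    for p, n in ins.items():
        marking[p] -= n
    for p, n in outs.items():
        marking[p] += n

print(marking)  # all five customers eventually served
```

Raising the number of staff tokens and observing the simulated clock is exactly the kind of service-quality-versus-provisioning question the paper's engine targets, at full scale and with all firing policies.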

67 citations


Journal ArticleDOI
TL;DR: A new service composition scheme based on Deep Reinforcement Learning (DRL) is proposed for adaptive and large-scale service composition; it is better suited to the partially observable service environment, making it work better in real-world settings.
Abstract: In a service-oriented system, simple services are combined to form value-added services that meet users’ complex requirements. As a result, service composition has become a common practice in service computing. With the rapid development of web service technology, a massive number of web services with the same functionality but different non-functional attributes (e.g., QoS) are emerging. The increasingly complex user requirements and the large number of services make it a significant challenge to select the optimal services from numerous candidates and achieve an optimal composition. Meanwhile, web services accessible via computer networks are inherently dynamic, and the environment of service composition is complex and unstable. Thus, service composition solutions need to adapt to the dynamic environment. To address these key challenges, we propose a new service composition scheme based on Deep Reinforcement Learning (DRL) for adaptive and large-scale service composition. The proposed approach is better suited to the partially observable service environment, making it work better in real-world settings. A recurrent neural network is adopted to improve reinforcement learning: it can predict the objective function and enhance the model's expressive and generalization ability. In addition, we employ a heuristic behavior selection strategy, in which the state set is divided into hidden and fully observable state sets, to perform targeted behavior selection when facing different types of states. The experimental results justify the effectiveness, efficiency, scalability, and adaptability of our method, showing clear advantages in both composition results and efficiency.
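The underlying idea of learning a composition policy from QoS feedback can be sketched with plain tabular Q-learning on a toy workflow (a deliberately simplified stand-in for the paper's DRL approach; the workflow, candidate services, and QoS scores are all hypothetical):

```python
import random

random.seed(1)

# Toy composition MDP: at each of 3 workflow steps, pick one of 2 candidate
# services; rewards are hypothetical QoS scores (higher is better).
qos = [[0.3, 0.9], [0.8, 0.2], [0.5, 0.6]]  # qos[step][service]
Q = [[0.0, 0.0] for _ in range(3)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    for step in range(3):
        # epsilon-greedy action selection over the two candidate services
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda s: Q[step][s])
        reward = qos[step][a]
        future = max(Q[step + 1]) if step < 2 else 0.0
        Q[step][a] += alpha * (reward + gamma * future - Q[step][a])

# Greedy policy after training: the learned composition plan
best_plan = [max((0, 1), key=lambda s: Q[step][s]) for step in range(3)]
print(best_plan)  # → [1, 0, 1]
```

The paper replaces the table with a recurrent network precisely because real service environments are partially observable and far too large for tabular methods.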

44 citations


Journal ArticleDOI
TL;DR: This paper integrates CBS composite system architecture analysis and reliability sensitivity analysis approaches and proposes an Architecture-based Reliability-sensitive Criticality Measure (or ARCMeas) method, and experimental results suggest the effectiveness of the approach.
Abstract: The widespread adoption of service computing allows software to be developed by outsourcing open cloud services (i.e., SOAP-based or RESTful Web APIs) through mashup or service composition techniques. Fault tolerance for assuring the stable execution of cloud-based software (CBS) applications has attracted great attention, as a loosely coupled CBS operates under dynamic and uncertain running environments. It is too expensive to rent massively redundant cloud services for CBS fault tolerance. To reduce the budget while guaranteeing the effectiveness of CBS fault tolerance, identifying critical components within a CBS composite system is of significant importance. We integrate CBS composite system architecture analysis and reliability sensitivity analysis approaches and propose an Architecture-based Reliability-sensitive Criticality Measure (ARCMeas) method in this paper. We verify the application of ARCMeas to cost-effective CBS fault tolerance by presenting a particle swarm optimization (PSO)-based cost-effective fault tolerance strategy determination (PSO-CFTD) algorithm. Experimental results suggest the effectiveness of the approach.

28 citations


Proceedings ArticleDOI
30 Mar 2019
TL;DR: This paper proposes an ensemble technique adopting different machine learning algorithms, namely K-Nearest Neighbor (KNN), Naive Bayes, Support Vector Machine (SVM) and Self-Organizing Map (SOM), to detect anomalous behavior of the data traffic in the SDN controller, and shows that the ensemble method provides better accuracy, detection rate, and false alarm rate than a single learning algorithm.
Abstract: Software Defined Network (SDN) is a new approach to building computer network architectures that are dynamic, adaptable, manageable and low cost. The SDN paradigm offers virtualized network services, promoting an architecture compatible with current networks that use infrastructure-hosted services computing. In SDN, switches match incoming packets against flow tables but do not process the packets. Denial of Service (DoS) attacks flood the network with a large number of packets sent from compromised machines. One class of such attacks is the Distributed Denial of Service (DDoS) attack, in which several compromised machines simultaneously target a victim. In this paper, we propose an ensemble technique adopting different machine learning (ML) algorithms, namely K-Nearest Neighbor (KNN), Naive Bayes, Support Vector Machine (SVM) and Self-Organizing Map (SOM), to detect anomalous behavior of the data traffic in the SDN controller. Our experimental results show that the ensemble method provides better accuracy, detection rate, and false alarm rate than any single learning algorithm.
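The ensemble idea itself is simple majority voting over per-flow verdicts. The sketch below illustrates it with three hypothetical one-feature detectors standing in for the trained KNN/Naive Bayes/SVM models (the features, thresholds, and flows are invented for illustration, not taken from the paper):

```python
# Each flow: (packets_per_sec, unique_src_ips, avg_payload_bytes)
flows = [
    (120, 3, 900),      # benign-looking
    (9500, 400, 40),    # flood-like
    (200, 5, 600),      # benign-looking
    (8800, 350, 60),    # flood-like
]

# Hypothetical single-feature detectors standing in for KNN / Naive Bayes / SVM.
detectors = [
    lambda f: f[0] > 5000,   # very high packet rate
    lambda f: f[1] > 100,    # many distinct source IPs
    lambda f: f[2] < 100,    # tiny payloads typical of floods
]

def ensemble_predict(flow):
    # A flow is flagged as DDoS traffic on a strict majority of votes.
    votes = sum(d(flow) for d in detectors)
    return votes > len(detectors) // 2

labels = [ensemble_predict(f) for f in flows]
print(labels)  # → [False, True, False, True]
```

A majority vote tolerates one detector misfiring on a given flow, which is why ensembles tend to lower the false alarm rate relative to any single learner.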

24 citations


Journal ArticleDOI
Jia Yu1, Rong Hao1
TL;DR: It is demonstrated that a malicious cloud can generate a proof that passes the third-party auditor's verification even if it does not store the user's whole file.
Abstract: Provable Data Possession is viewed as an important technique to check the integrity of data stored in remote servers. Recently, a new provable data possession scheme [Secure and Efficient Privacy Preserving Provable Data Possession in Cloud Storage, IEEE Transactions on Services Computing, (2018) Doi: 10.1109/TSC.2018.2820713] was proposed. The authors claimed this scheme can guarantee storage correctness. In this paper, we show this scheme cannot satisfy this fundamental security requirement. Specifically, we demonstrate that a malicious cloud can generate a proof that passes the third-party auditor's verification even if it does not store the user's whole file.

24 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed method efficiently alleviates the influence of the concerned QoS data ranges, and performs better than many state-of-the-art approaches with respect to accuracy.

20 citations


Journal ArticleDOI
TL;DR: This systematic literature review focuses explicitly on the heterogeneity of the aforementioned service types, targeting all possible combinations of the three service types.

19 citations


Journal ArticleDOI
TL;DR: OCCIware provides a unique and unified framework to manage OCCI artifacts and, at the same time, it represents a factory to build cloud domain-specific modeling frameworks where each framework targets a specific cloud domain.

19 citations


Journal ArticleDOI
TL;DR: A novel deep-learning model named Recurrent Tensor Factorization (RTF) is developed, capable of predicting unknown Quality of Service (QoS) values through comprehensive analysis; the results indicate that the proposed method clearly outperforms six state-of-the-art time-aware service recommendation methods.

19 citations


Journal ArticleDOI
TL;DR: This article investigates the use of cloud resources in automatic service workflow composition by proposing a system of two specialized web services that includes a web service that dynamically deploys virtual machines to carry out planning processes, thereby exhibiting artificial intelligence.
Abstract: Cloud computing is an information technology paradigm enabling companies to sell computing resources more dynamically. Software and hardware are now commodities leased on demand. Because the computer systems leased from a cloud service provider, virtual machines, are typically connected to the Internet, they can host web services, which are frequently components of service-oriented architectures (SOAs). Such architectures have recently been adopted in factory automation, as they allow systems to reach high levels of decentralization and loose coupling. SOA-based factory automation systems combine physical production equipment with web services that belong to the information-processing (cyber) domain, and they are therefore highly cyber-physical. When some of the services are deployed on cloud resources, SOA-based factory automation systems can be classified as cloud-based cyber-physical systems. Each service in such a system is typically able to perform rather simple, atomic operations, whereas the achievement of complex goals requires that the services be composed to collaboratively carry out workflows. This article investigates the use of cloud resources in automatic service workflow composition. To facilitate the acquisition and utilization of cloud resources, a system of two specialized web services is proposed. The system includes a web service that dynamically deploys virtual machines to carry out planning processes, thereby exhibiting artificial intelligence. Finally, this paper demonstrates the integration of the system with a previously proposed semantic web service composition framework.

17 citations


Journal ArticleDOI
TL;DR: The motifs-based Dynamic Bayesian Networks (or m_DBNs) are presented to perform one-step-ahead online reliability time series prediction and a Convolutional Neural Networks (CNN)-based prediction approach is developed to deal with the big data challenges.
Abstract: Service computing is an emerging technology in System of Systems Engineering (SoS Engineering or SoSE), which regards a system as a service and aims at constructing a robust and value-added complex system by outsourcing external component systems through service composition. The burgeoning Big Service computing covers the significant challenges in constructing and maintaining a stable service-oriented SoS. A service-oriented SoS runs under a volatile and uncertain environment. As a step toward big service, service fault tolerance (FT) can guarantee the run-time quality of a service-oriented SoS. To successfully deploy FT in an SoS, online reliability time series prediction, which aims at predicting the reliability of a service-oriented SoS in the near future, arises as a grand challenge in SoS research. In particular, we need to tackle a number of big-data-related issues given the large and rapidly increasing size of the historical data used for prediction; the decision-making over the prediction solution space also becomes more complex. To provide highly accurate prediction results, we tackle these challenges by identifying the evolution regularities of component systems’ running states via different machine learning models. We present in this paper the motifs-based Dynamic Bayesian Networks (m_DBNs) to perform one-step-ahead online reliability time series prediction. We also propose multi-step trajectory DBNs (multi_DBNs) to further improve the accuracy of future reliability prediction. Finally, a Convolutional Neural Networks (CNN)-based prediction approach is developed to deal with the big data challenges. Extensive experiments conducted on real-world Web services demonstrate that our models consistently outperform other well-known approaches.

Journal ArticleDOI
01 Apr 2019
TL;DR: Satisfaction impacts the success of cloud services in terms of user generation and continuance, while switching barriers need to be in place to generate revenues, and recommendations are derived for three generic strategies that cloud providers can apply to become successful in their competitive market environment.
Abstract: How can cloud providers be successful? Severe competition and low up-front commitments create enormous challenges for providers of consumer cloud services when attempting to develop a sustainable market position. Emergent trends like consumerization lead to high growth rates and extend the reach of these services far into the enterprise sphere. Using a freemium model, many providers focus on establishing a large customer base quickly but fail to generate revenue streams in the long run. Others charge consumers early but do not reach their growth targets. Based on a representative sample of 596 actual cloud service users, the study examines how consumer cloud services can become self-sustainable on the basis of the user base and revenue streams they generate. The authors identify two mechanisms that influence the success of consumer cloud services, dedication- and constraint-based mechanisms, and show how they drive different elements of success. They find that satisfaction impacts the success of cloud services in terms of user generation and continuance, while switching barriers need to be in place to generate revenues. The results indicate that focusing on a single success element can be misleading and insufficient to understand the success of cloud services. The key findings are used to derive recommendations for three generic strategies that cloud providers can apply to become successful in their competitive market environment.

Journal ArticleDOI
TL;DR: Results show that a PCH/PCH’ heuristic finds better solutions than the existing machines’ configuration of Google traces; is suitable for large-sized instances of cloud services; and performs better than FF, FFD, and CPLEX in terms of overall penalties and net profits.
Abstract: This paper aims to optimize cloud services’ net profits and penalties with live placement of interdependent virtual machines (VMs). This optimization is a complex task, as it is difficult to achieve a successful compromise between penalties and net profits on service level contracts. This paper studies this optimization problem to minimize services’ penalties and maximize net profits while achieving live migrations of interdependent VMs. This VM live-placement optimization problem is NP-hard with exponential running time. A mathematical model was designed and approximations were conducted with an efficient PCH/PCH’ heuristic. This Mixed Integer Non-Linear programming (MNLP) formulation and heuristic were tested for cloud services where the overall services’ penalty needs to be minimized, overall net profits have to be maximized, and efficient live migrations of VMs are a concern. Simulation results show how cloud providers may live-place VMs. Finally, our results show that the PCH/PCH’ heuristic: (i) finds better solutions than the existing machine configuration of Google traces; (ii) is suitable for large-sized instances of cloud services; (iii) performs better than FF, FFD, and CPLEX in terms of overall penalties and net profits; and (iv) runs in less than six minutes over the last day's data.
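FFD (First-Fit Decreasing), one of the baselines the heuristic is compared against, is easy to sketch for VM placement: sort VMs by demand and place each on the first host with enough spare capacity. The capacities and demands below are hypothetical, and this sketch ignores the interdependencies and penalties the paper's model handles:

```python
HOST_CAPACITY = 16  # e.g., vCPUs per physical machine (illustrative)
vm_demands = [8, 5, 4, 7, 3, 2, 6]

hosts = []       # each entry is the remaining capacity of one opened host
placement = {}   # vm index -> host index
for i, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
    for h, free in enumerate(hosts):
        if free >= demand:          # first host that still fits this VM
            hosts[h] -= demand
            placement[i] = h
            break
    else:                           # no existing host fits: open a new one
        hosts.append(HOST_CAPACITY - demand)
        placement[i] = len(hosts) - 1

print(len(hosts), placement)  # opens 3 hosts for a total demand of 35
```

FFD runs in O(n log n) but optimizes only host count; the paper's MNLP formulation additionally trades off SLA penalties, net profits, and live-migration costs, which is where it outperforms FF/FFD.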

Proceedings ArticleDOI
01 Nov 2019
TL;DR: The proposed smart classroom has a main vision to ease the implementation of learning activities within in-class and distance learning by integrating STE and SLS to provide a valid attendance record for student and education system regulation.
Abstract: A smart classroom is a concept intended to accommodate synchronous and asynchronous learning, using technology that integrates traditional and distance learning to cover its learning services with artificial-intelligence-based technology. The smart classroom also utilizes connected smart devices so that teachers and students can accommodate various learning styles, participate interactively, and share content. While most current smart classroom systems only provide a classroom environment, the design proposed in this research consists of a smart classroom system that covers distance learning activities under a Smart Learning System (SLS) and in-class learning activities in a Smart Teaching Environment (STE). SLS facilitates distance learning by enabling interactive teaching, customized learning, learning analysis, and easy document access. STE is a class environment that utilizes a variety of smart devices to support in-class learning activities. Using the Service Computing System Engineering Framework, SLS and STE are integrated to fulfill the need for aligning learning activities in our education system by creating a service innovation for smart attendance. While in-class students’ attendance is recorded by existing attendance recording methods, distance learning students’ attendance is recorded by applying smart attendance in SLS, using artificial intelligence to trigger randomized pop-up quizzes as attendance evidence. The randomized quiz submissions are validated for the student attendance report based on defined parameters of class attendance. The proposed design is evaluated by calculating cohesion and coupling degrees as evaluation principles for service-oriented software design.
In conclusion, the proposed smart classroom's main vision is to ease the implementation of in-class and distance learning activities by integrating STE and SLS to provide a valid attendance record for students and education system regulation.

Journal ArticleDOI
TL;DR: A three-layer time-aware heterogeneous network model is proposed to quantify the evolution in the human service ecosystem and an exploratory empirical study is presented to uncover how human service providers and consumers develop their capability in service provision and orchestration.
Abstract: With the sweeping progress of service computing technology and crowdsourcing, individuals are offering their capability as human services online. Companies are orchestrating human services for complex problem-solving, resulting in the rapid growth of human service ecosystems nowadays. Considering the unique characteristics of human services, like capability growth and human-involving collaboration, it is essential to understand the patterns of the development and collaboration among human services. Therefore, this paper proposes a three-layer time-aware heterogeneous network model to quantify the evolution in the human service ecosystem. Based on the model, an exploratory empirical study is presented to uncover how human service providers and consumers develop their capability in service provision and orchestration, as well as how human services collaborate with each other over time. Insights from the emerging patterns open a gateway for further research to facilitate human service adoption, including human service composition recommendation, human skill expansion suggestion, and systematic mechanism design.

Journal ArticleDOI
TL;DR: This paper proposes a novel Web services similarity measure approach based on the notion of service composition context that measures the similarity between any two services using the PersonalRank and SimRank++ algorithms by taking the obtained context network as input.
Abstract: Web services similarity measure is an important problem in service computing area, which is the technological foundation of service substitution, service discovery, service recommendation, and so on. Most of the existing works use a static description of services to measure the similarity between two services. However, the interaction information of Web services recorded in the historical compositions is totally neglected. In this paper, we propose a novel Web services similarity measure approach based on the notion of service composition context. Specifically, we first introduce three types of parameter correlations between service input and output parameters. These correlations can be obtained from existing services compositions. Based on parameter correlations, we propose the service composition context model. Through the composition context of a service, the composition context network is constructed using contexts of all services. Then, we propose to measure the similarity between any two services using the PersonalRank and SimRank++ algorithms by taking the obtained context network as input. By experiments, we analyze the characteristics of our proposed method and demonstrate that its accuracy is much better than the state-of-the-art approaches.
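PersonalRank, one of the two algorithms the approach uses, is a personalized PageRank over the composition context network. The sketch below runs it on a toy bipartite graph of services and parameters (the node names and edges are hypothetical, not the paper's data):

```python
# Toy composition context graph: services linked to the parameters they
# consume or produce, and vice versa.
graph = {
    "svcA": ["geo", "weather"],
    "svcB": ["geo", "traffic"],
    "svcC": ["weather"],
    "geo": ["svcA", "svcB"],
    "weather": ["svcA", "svcC"],
    "traffic": ["svcB"],
}

def personal_rank(graph, source, alpha=0.85, iters=100):
    """Random walk with restart: score mass spreads along edges, and a
    fraction (1 - alpha) restarts at the source node each step."""
    rank = {n: 0.0 for n in graph}
    rank[source] = 1.0
    for _ in range(iters):
        nxt = {n: 0.0 for n in graph}
        for node, score in rank.items():
            for nb in graph[node]:
                nxt[nb] += alpha * score / len(graph[node])
        nxt[source] += 1.0 - alpha  # restart mass returns to the source
        rank = nxt
    return rank

scores = personal_rank(graph, "svcA")
# Services sharing more context with svcA rank higher.
others = sorted(["svcB", "svcC"], key=lambda s: -scores[s])
print(others)  # → ['svcB', 'svcC']
```

svcB outranks svcC here because it sits in a denser shared neighborhood with svcA, which is exactly the intuition behind using composition context rather than static descriptions for similarity.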

Proceedings ArticleDOI
04 May 2019
TL;DR: This paper reveals multiple perspectives of digital enterprise architecture and decisions to effectively support value- and service-oriented software systems for intelligent digital services and products.
Abstract: Enterprises are transforming their strategy, culture, processes, and information systems to expand their digitalization efforts or to pursue digital leadership. The Digital Transformation profoundly disrupts existing enterprises and economies. In recent times, many new business opportunities have appeared that use the potential of the Internet and related digital technologies: the Internet of Things, Services Computing, Cloud Computing, Artificial Intelligence, Big Data with Analytics, Mobile Systems, Collaboration Networks, and Cyber-Physical Systems. Digitization fosters the development of IT environments with many rather small and distributed structures, like the Internet of Things, Microservices, or other micro-granular elements. Architecting micro-granular structures has a substantial impact on architecting digital services and products. The change from a closed-world modeling perspective to a more flexible open world of living software and system architectures defines the context for flexible and evolutionary software approaches, which are essential to enable the Digital Transformation. In this paper, we reveal multiple perspectives of digital enterprise architecture and the decisions needed to effectively support value- and service-oriented software systems for intelligent digital services and products.

Journal ArticleDOI
TL;DR: A new service composition method, called Improved Self-organising neural network Method for Web Service Composition, is proposed to achieve QoS-aware Web service composition using clustering technology.
Abstract: How to quickly combine various Web services to support cross-organisational business processes is the key technical problem in service computing. Because of the changeability of QoS of Web services...

Journal ArticleDOI
01 Sep 2019
TL;DR: This work takes into consideration the dataset features in the QoS-based service recommendation process and proposes two approaches for data mining service recommendations, which use decomposition technique to identify latent features of the input dataset and then recommend services by exploiting these latent variables.
Abstract: Quality of service (QoS)-based web service selection has been studied in the service computing community for some time. However, characteristics of the input dataset that will be processed by the web service are not usually considered in the selection process, even though they might impact the QoS values of the service: e.g., latency is higher when processing a bigger dataset than a smaller one, and one service may take longer than another to process a certain dataset. To address this issue, in this work we take dataset features into consideration in the QoS-based service recommendation process, focusing on data mining services because their QoS values can be highly dependent on dataset features. We propose two approaches for data mining service recommendation and compare their performance. In the first approach, we use a meta-learning algorithm to incorporate dataset features in the recommendation process and study the use of different machine learning algorithms (both classification and regression models) as meta-learners for recommending data mining services for a given dataset. We also investigate the impact of the number of dataset features on the performance of the meta-learners. In the second approach, we propose a novel technique that uses factor analysis for web service recommendation: a decomposition technique identifies latent features of the input dataset, and services are then recommended by exploiting these latent variables. Our proposed approach of web service recommendation based on latent features was shown to be a more robust model, with an accuracy of 85% compared to meta-feature-based recommendation.
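The meta-learning idea can be sketched in its simplest possible form: describe each historical dataset by meta-features, remember which service performed best on it, and recommend via nearest neighbor in meta-feature space. This is a drastic simplification of the paper's meta-learners, and every number and service name below is hypothetical:

```python
# (n_rows, n_columns, fraction_missing) -> service that performed best
history = [
    ((1_000, 10, 0.00), "svc_fast_tree"),
    ((500_000, 10, 0.00), "svc_distributed"),
    ((1_000, 200, 0.30), "svc_sparse_miner"),
]

def normalise(meta):
    # Crude scaling so no single meta-feature dominates the distance.
    return (meta[0] / 1e6, meta[1] / 1e3, meta[2])

def recommend(meta):
    q = normalise(meta)
    def dist(entry):
        p = normalise(entry[0])
        return sum((x - y) ** 2 for x, y in zip(p, q))
    # 1-nearest-neighbour lookup in meta-feature space
    return min(history, key=dist)[1]

print(recommend((450_000, 12, 0.01)))  # → svc_distributed
```

The paper goes well beyond this, training classification and regression meta-learners and, in its second approach, replacing raw meta-features with latent factors from a decomposition.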

Proceedings ArticleDOI
08 Jul 2019
TL;DR: Ten areas of goal and constraint resetting relevant to Regtech software development and traditional services computing approaches are discussed, demonstrating the need for services computing scholars to engage in developing some of the most critical software services of the authors' age.
Abstract: As financial technologies (Fintech) pioneers seek to disintermediate the world's traditional banking sector's intermediary role, new regulatory technologies (Regtech) will be required to guarantee markets can be trusted, contract laws are adhered to and compliance can be verified through transparent processes. Services computing researchers will play an important role in advancing Regtech, but they must add to their repertoire additional knowledge of financial principles, an understanding of common Fintech pathologies that may be exploited by bad actors, and new thinking in regard to protecting customer data across multiple legal jurisdictions and the related compliance of boundary crossing banking-related algorithms. This paper uses supply chain finance as a use case to demonstrate details of this needed new repertoire. Ten areas of goal and constraint resetting relevant to Regtech software development and traditional services computing approaches are discussed. At stake is the need for services computing scholars to engage in developing some of the most critical software services of our age - those that can help to maintain and advance our market-based societies.

Journal ArticleDOI
01 Mar 2019
TL;DR: A generic reference model of services computing systems platform based on meta-analysis technique that can be used as a guide to build the platform is proposed.
Abstract: The development of services computing systems requires an environment that supports service-oriented application development approach. The organized environment is referred to as platform in this paper. The lack of such generic platform is a challenge in the field. A reference model becomes a prerequisite for developing such platform in order to optimize the computing resources. This paper proposes a reference model of services computing systems platform based on the literature studies using a meta-analysis technique. The paper demonstrates a reference model development using the systematic meta-analysis technique to identify the required components of the platform and to examine the interaction between the components. Statistical tests are conducted to show the correlation between the components and to evaluate the proposed reference model. This study also discusses the different functions and areas for each layer of the proposed model. The contribution of this paper is a generic reference model of services computing systems platform based on meta-analysis technique that can be used as a guide to build the platform.

Proceedings ArticleDOI
01 Oct 2019
TL;DR: A hybrid collaborative filtering model with an attention CNN is proposed for Web service recommendation, combining collaborative filtering and an attention-based CNN to capture the complicated mashup-service relationships.
Abstract: Service-oriented computing has significantly affected software development in the Web 2.0 era, and computing paradigms and architectures based on Web services have been comprehensively developed. As Web services continuously increase in number, it becomes more difficult for users to screen out Web services that meet their needs with good quality from the large pool of candidates. Therefore, how to recommend the best Web services to users has become a hot research direction in the domain of service computing. Many machine-learning approaches, especially CF (collaborative filtering) models based on matrix factorization, have been widely used in service recommendation tasks. However, it is difficult for CF models to deal with a sparse invocation matrix when capturing the complicated interaction between mashups and services, which results in poor performance. To solve this problem, we propose a hybrid collaborative filtering with attention CNN model for Web service recommendation, combining collaborative filtering and an attention CNN. The mashup-service invocation matrix as well as the attention-based CNN are seamlessly integrated into deep neural nets, which can capture the complicated mashup-service relationships. The experimental results validate that our proposed model performs better than several state-of-the-art approaches in service recommendation tasks, further demonstrating its effectiveness.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate mechanisms for supporting the evolution of digital enterprise architectures with user-friendly methods and instruments of interaction, visualization, and intelligent decision management during the exploration of multiple and interconnected perspectives by an architecture management cockpit.

Journal ArticleDOI
TL;DR: Bouncer works by ensuring that cloud services do not exceed the cloud infrastructure’s threshold capacity, and significantly outperforms the conventional service admission control schemes, which are still state of the art.
Abstract: Cloud computing is a paradigm that ensures the flexible, convenient and on-demand provisioning of a shared pool of configurable network and computing resources. Its services can be offered by either private or public infrastructures, depending on who owns the operational infrastructure. Much research has been conducted to improve a cloud’s resource provisioning techniques. Unfortunately, sometimes an abrupt increase in the demand for cloud services results in resource shortages affecting both providers and consumers. This uncertainty of resource demands by users can lead to catastrophic failures of cloud systems, thus reducing the number of accepted service requests. In this paper, we present Bouncer—a workload admission control scheme for cloud services. Bouncer works by ensuring that cloud services do not exceed the cloud infrastructure’s threshold capacity. By adopting an application-aware approach, we implemented Bouncer on software-defined network (SDN) infrastructure. Furthermore, we conduct an extensive study to evaluate our framework’s performance. Our evaluation shows that Bouncer significantly outperforms the conventional service admission control schemes, which are still state of the art.
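The core admission rule, keeping accepted load under a threshold fraction of infrastructure capacity, can be sketched in a few lines. This is a minimal illustration in the spirit of Bouncer, not the paper's SDN-based implementation, and the capacity and threshold values are invented:

```python
THRESHOLD = 0.9    # admit only while utilisation stays under 90%
CAPACITY = 100.0   # abstract capacity units of the infrastructure

class AdmissionController:
    def __init__(self):
        self.load = 0.0

    def admit(self, demand):
        # Reject any request that would push utilisation past the threshold.
        if (self.load + demand) / CAPACITY <= THRESHOLD:
            self.load += demand
            return True
        return False

    def release(self, demand):
        # A completed service frees its capacity again.
        self.load = max(0.0, self.load - demand)

ctrl = AdmissionController()
decisions = [ctrl.admit(d) for d in [40, 30, 25, 10]]
print(decisions)  # → [True, True, False, True]
```

Note how the 25-unit request is rejected even though a smaller one is later admitted: rejecting early protects already-admitted services from the catastrophic overload failures the abstract describes.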

Journal ArticleDOI
01 Jun 2019
TL;DR: This paper proposes a new strategy HPC2-ARS to enable streaming services on HPC platforms, which includes a three-tier high-performance cloud computing (HPC2) platform and a novel autonomous resource-scheduling (ARS) framework.
Abstract: There is a very high volume of streaming data in the cyber world today. With the popularization of 5G technology, streaming Big Data grows even larger; moreover, it needs to be analyzed in real time. We propose a new strategy, HPC2-ARS, to enable streaming services on HPC platforms. The strategy includes a three-tier high-performance cloud computing (HPC2) platform and a novel autonomous resource-scheduling (ARS) framework. The HPC2 platform is our de facto base platform for research on streaming services. It has three components: the Tianhe-2 high-performance computer, custom OpenStack cloud computing software, and the Apache Storm stream data analytics system. Our ARS framework ensures real-time response on unpredictable and fluctuating streams, especially streaming Big Data in the 5G era. The strategy addresses an essential problem in the convergence of HPC Cloud, Big Data, and streaming services. Specifically, our ARS framework provides theoretical and practical solutions for resource provisioning, placement, and scheduling optimization. Extensive experiments have validated the effectiveness of the proposed strategy.
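A reactive provisioning rule for a fluctuating stream can be sketched as below. This is a generic illustration of sizing executors to the observed arrival rate, not the ARS framework itself; the headroom factor is an assumed tuning parameter:

```python
import math

def plan_executors(arrival_rate, service_rate_per_executor, max_executors, headroom=1.2):
    """Return how many stream executors are needed so that, with some
    headroom for bursts, total capacity covers the observed arrival rate."""
    needed = math.ceil(arrival_rate * headroom / service_rate_per_executor)
    return max(1, min(needed, max_executors))  # clamp to [1, cluster limit]

# 1000 tuples/s arriving, 100 tuples/s per executor -> 12 executors with 20% headroom.
plan = plan_executors(arrival_rate=1000, service_rate_per_executor=100, max_executors=50)
```

Re-running such a rule on each monitoring interval gives a simple feedback loop for scaling a stream topology up and down as load fluctuates.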

Journal ArticleDOI
TL;DR: This paper proposes an integrated, real-time, and dynamic control mechanism for large-scale cloud service systems and their load balancing, combining supermarket models with work-stealing models and the scheduling of a public reserved resource.
Abstract: Service computing is an emerging, distributed computing mode in cloud service systems and has become an interesting research direction for both academia and industry. Cloud service systems display new characteristics, such as stochasticity, large scale, loose coupling, concurrency, non-homogeneity, and heterogeneity, which make their load-balancing investigation interesting, difficult, and challenging. By using resource management and job scheduling, this paper proposes an integrated, real-time, and dynamic control mechanism for large-scale cloud service systems and their load balancing, combining supermarket models with work-stealing models and the scheduling of a public reserved resource. To this end, the paper provides a novel stochastic model with weak interactions by means of nonlinear Markov processes. To overcome the theoretical difficulties arising from state explosion in high-dimensional stochastic systems, the paper applies mean-field theory to develop a macro computational technique in terms of an infinite-dimensional system of mean-field equations. Furthermore, the paper proves the asymptotic independence of the large-scale cloud service system and shows how to compute the fixed point by means of an infinite-dimensional system of nonlinear equations. Based on the fixed point, the paper provides effective numerical computation for performance analysis of the system with high approximation precision. We hope that the methodology and results given in this paper can be applied to the study of more general large-scale cloud service systems.
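The supermarket model underlying this mechanism can be illustrated with a balls-into-bins sketch, in which each arriving job samples d servers and joins the least loaded. This toy omits the paper's work-stealing and mean-field analysis, and all parameters are illustrative:

```python
import random

def max_load(n_servers, n_jobs, d, seed=0):
    """Balls-into-bins sketch of the supermarket model: each arriving job
    samples d servers uniformly at random and joins the least-loaded one."""
    random.seed(seed)
    load = [0] * n_servers
    for _ in range(n_jobs):
        picks = [random.randrange(n_servers) for _ in range(d)]
        best = min(picks, key=lambda i: load[i])
        load[best] += 1
    return max(load)

m1 = max_load(1000, 5000, d=1)   # d = 1: purely random routing
m2 = max_load(1000, 5000, d=2)   # d = 2: the "power of two choices"
```

Even d = 2 dramatically reduces the maximum load compared with random routing, which is the intuition behind supermarket models for load balancing.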

Proceedings ArticleDOI
22 Jul 2019
TL;DR: The proposed IoT service model exposes the device profile to the user and associates relevant context information with the "things" output, which helps in extracting relevant information from the "things" data.
Abstract: Leveraging the benefits of service computing technologies for the Internet of Things (IoT) can help in rapid system development, composition, and deployment. However, due to the massive scale and the computational and communication constraints, existing software service models cannot be directly applied to IoT-based systems. Service discovery and composition mechanisms need to be decentralized, unlike in the majority of other service models. Moreover, IoT service interfaces need to be lightweight and able to expose the device profile for seamless discovery on the IoT system infrastructure. In addition, the "things" data should be associated with its present context. To address these issues, this paper proposes a formal model for IoT services. The service model includes the physical properties of "things" and exposes them to the user. It also associates the context with the "things" output, which in turn helps in extracting relevant information from the "things" data. To evaluate our IoT service model, a weather monitoring system and its associated services are implemented using node.js [31]. The service data is mapped to the SSN ontology to generate context-rich RDF data. In this way, the proposed IoT service model can expose the device profile to the user and incorporate relevant context information with the "things" data.
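The idea of wrapping a "thing's" output with its device profile and context can be sketched as follows. This is an illustrative model in Python rather than the paper's node.js implementation, and the type and field names are assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DeviceProfile:
    device_id: str
    sensor_type: str      # the physical property the "thing" measures
    location: str
    unit: str

@dataclass
class ServiceOutput:
    profile: DeviceProfile               # exposed to the user for discovery
    value: float
    context: dict = field(default_factory=dict)

def observe(profile, raw_value):
    """Wrap a raw sensor reading with its device profile and context so
    consumers can interpret the data without out-of-band knowledge."""
    ctx = {"observed_at": datetime.now(timezone.utc).isoformat(),
           "location": profile.location}
    return ServiceOutput(profile=profile, value=raw_value, context=ctx)

probe = DeviceProfile("t-001", "temperature", "rooftop", "celsius")
out = observe(probe, 21.4)
```

A structure like `ServiceOutput` is what would then be mapped to an SSN/RDF representation to make the context machine-interpretable.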

Journal ArticleDOI
TL;DR: This paper enriches the engineering methodologies for service-oriented systems from the perspective of services computing, with results showing that both perceived ease of use and perceived usefulness deliver good results for almost all stages of the proposed framework.
Abstract: The gap between business services and IT services has become a major concern in services computing. As an approach to service-based IT solutions, services computing systems promise to bridge the gap between these services. Their implementation requires an engineering framework as a guide to building such systems, and the framework needs to be evaluated to provide important feedback to its development. This paper outlines the evaluation of the SCSE framework through an acceptance model. The study develops an acceptance model based on the experiences of a group of engineers after using the framework to build smart campus services systems. A survey involving 54 systems engineers with various engineering backgrounds was conducted to assess the engineers' experiences in using the framework. The results of the acceptance model show that both perceived ease of use, represented by the level of agreement (υ1), and perceived usefulness, represented by the level of importance (υ2), deliver good results for almost all stages of the proposed framework. In addition, user experiences of the proposed framework are at acceptable levels. The contribution of this paper is an enrichment of the engineering methodologies for service-oriented systems from the perspective of services computing.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: This paper proposes a system design for learning process models in the IoT edge based on the requirements of real-time process services; preliminary results on a simulated IoT network show that the method can discover real-time process models in less than a second.
Abstract: Process models as knowledge graph representations have been widely used in various domains to create products and deliver services. Although different process model discovery approaches have been proposed in recent years, few of them are designed for distributed computing environments. Specifically, none of them has been studied in the emerging edge computing application scenarios. In this paper, based on the requirements of real-time process services, we propose a system design for learning process models in the IoT edge. We present the details of our solution; preliminary results on a simulated IoT network show that our method can discover real-time process models in less than a second.
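A minimal form of process discovery, mining a directly-follows graph from an event log, can be sketched as below. This is a generic illustration of the technique, not the paper's edge-based system, and the example activities are invented:

```python
from collections import defaultdict

def directly_follows(log):
    """Mine a directly-follows graph from an event log: count how often
    activity a is immediately followed by activity b within a trace."""
    dfg = defaultdict(int)
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dict(dfg)

# Each trace is one process instance observed at the edge.
log = [["register", "check", "approve"],
       ["register", "check", "reject"],
       ["register", "check", "approve"]]
dfg = directly_follows(log)
```

The resulting edge counts form the graph from which most discovery algorithms derive a process model, and a single pass over the log keeps the computation cheap enough for constrained edge nodes.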

Book ChapterDOI
09 Sep 2019
TL;DR: This work proposes a novel, holistic approach for architecting elastic edge storage services, featuring three aspects, namely data/system characterization, system operations, and data processing utilities, and presents seven engineering principles for the architectural design of edge data services.
Abstract: In the IoT era, a massive number of smart sensors produce a variety of data at unprecedented scale. Edge storage has limited capacities, posing a crucial challenge for maintaining only the most relevant IoT data for edge analytics. Currently, this problem is addressed mostly from traditional cloud-based database perspectives, including storage optimization and resource elasticity, while data analytics approaches and system operations are investigated separately. To better support future edge analytics, in this work we propose a novel, holistic approach for architecting elastic edge storage services, featuring three aspects, namely, (i) data/system characterization (e.g., metrics, key properties), (ii) system operations (e.g., filtering, sampling), and (iii) data processing utilities (e.g., recovery, prediction). In this regard, we present seven engineering principles for the architectural design of edge data services.
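As an example of the "sampling" operation listed under system operations, reservoir sampling keeps a uniform random sample of an unbounded stream in bounded edge storage. This is the standard Algorithm R offered as an illustration, not code from the paper:

```python
import random

def reservoir_sample(stream, capacity, seed=42):
    """Algorithm R: keep a uniform random sample of a stream in bounded
    storage; item i replaces a stored item with probability capacity/i."""
    random.seed(seed)
    reservoir = []
    for i, item in enumerate(stream, start=1):
        if len(reservoir) < capacity:
            reservoir.append(item)        # fill the reservoir first
        else:
            j = random.randrange(i)       # uniform index in [0, i)
            if j < capacity:
                reservoir[j] = item       # evict a stored item at random
    return reservoir

sample = reservoir_sample(range(10_000), capacity=100)
```

Because the reservoir never exceeds `capacity` items, an edge node can retain a statistically representative subset of sensor data regardless of how long the stream runs.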