
Showing papers on "Web service published in 2021"


Journal ArticleDOI
TL;DR: This article reformulates the microservice coordination problem in a Markov decision process framework and proposes a reinforcement learning-based online microservice coordination algorithm to learn the optimal strategy; theoretical analysis proves that the offline algorithm finds the optimal solution while the online algorithm achieves near-optimal performance.
Abstract: As an emerging service architecture, microservice enables the decomposition of a monolithic web service into a set of independent lightweight services that can be executed independently. With mobile edge computing, microservices can be further deployed in edge clouds dynamically, launched quickly, and migrated across edge clouds easily, providing better services for users in proximity. However, user mobility can result in frequent switches of nearby edge clouds, which increases the service delay when users move away from their serving edge clouds. To address this issue, this article investigates microservice coordination among edge clouds to enable seamless and real-time responses to service requests from mobile users. The objective of this work is to devise the optimal microservice coordination scheme that reduces the overall service delay with low costs. To this end, we first propose a dynamic programming-based offline microservice coordination algorithm that achieves the globally optimal performance. However, the offline algorithm relies heavily on prior information, such as computation request arrivals, time-varying channel conditions, and edge clouds' computation capabilities, which is hard to obtain. Therefore, we reformulate the microservice coordination problem in a Markov decision process framework and then propose a reinforcement learning-based online microservice coordination algorithm to learn the optimal strategy. Theoretical analysis proves that the offline algorithm finds the optimal solution, while the online algorithm achieves near-optimal performance. Furthermore, experiments are conducted based on two real-world datasets, i.e., Telecom's base station dataset and the Taxi Track dataset from Shanghai.
The experimental results demonstrate that the proposed online algorithm outperforms existing algorithms in terms of service delay and migration costs, and the achieved performance is close to the optimal performance obtained by the offline algorithm.
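The coordination decision described above can be illustrated with a toy tabular Q-learning sketch. This is not the paper's actual algorithm; the state is the pair (user region, service region), the actions are "stay" or "migrate to the user's region", and the per-step cost trades a distance-dependent delay against a fixed migration cost. All constants, names, and the random-jump mobility model are invented for illustration.

```python
import random

# Toy model: user is in one of N regions; the microservice sits on one edge
# cloud. Each step the agent decides whether to migrate the service to the
# user's current region (paying a migration cost) or stay (paying a delay
# cost that grows with the user-service distance). All numbers illustrative.
N = 5
MIGRATION_COST = 2.0
DELAY_WEIGHT = 1.0

def step_cost(user, service, migrate):
    return MIGRATION_COST if migrate else DELAY_WEIGHT * abs(user - service)

random.seed(0)
Q = {}  # (user_region, service_region) -> [stay_value, migrate_value]

def q(state):
    return Q.setdefault(state, [0.0, 0.0])

alpha, gamma, eps = 0.1, 0.9, 0.1
user, service = 0, 0
for t in range(20000):
    state = (user, service)
    # epsilon-greedy over the two actions (we minimise cost, so take argmin)
    if random.random() < eps:
        action = random.randrange(2)
    else:
        action = min((0, 1), key=lambda a: q(state)[a])
    cost = step_cost(user, service, action == 1)
    if action == 1:
        service = user
    user = random.randrange(N)  # crude user mobility: jump to a random region
    nxt = (user, service)
    # Q-learning update for cost minimisation
    q(state)[action] += alpha * (cost + gamma * min(q(nxt)) - q(state)[action])

# When the user is far from the service, migrating should look cheaper.
far = (N - 1, 0)
```

After training, `q(far)` holds the learned long-run costs of staying versus migrating for the most distant state; migration should come out cheaper there, which is the intuition behind the paper's online policy.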

191 citations


Journal ArticleDOI
TL;DR: Since 2010, the WeNMR project has implemented numerous web-based services to facilitate the use of advanced computational tools by researchers in structural biology; these services now operate as Thematic Services in the European Open Science Cloud (EOSC) portal.
Abstract: Structural biology aims at characterizing the structural and dynamic properties of biological macromolecules in atomic detail. Gaining insight into three-dimensional structures of biomolecules and their interactions is critical for understanding the vast majority of cellular processes, with direct applications in health and food sciences. Since 2010, the WeNMR project (www.wenmr.eu) has implemented numerous web-based services to facilitate the use of advanced computational tools by researchers in the field, using the high-throughput computing infrastructure provided by EGI. These services have been further developed in subsequent initiatives under H2020 projects and are now operating as Thematic Services in the European Open Science Cloud (EOSC) portal (www.eosc-portal.eu), running more than 12 million jobs and using around 4,000 CPU-years per year. Here we review 10 years of successful e-infrastructure solutions serving a large worldwide community of over 23,000 users to date, providing them with user-friendly, web-based solutions that run complex workflows in structural biology. The current set of active WeNMR portals is described, together with the complex backend machinery that allows distributed computing resources to be harvested efficiently.

173 citations


Journal ArticleDOI
TL;DR: A new deep CF model for service recommendation, named location-aware deep CF (LDCF), which can not only learn the high-dimensional and nonlinear interactions between users and services but also significantly alleviate the data sparsity problem.
Abstract: With the widespread application of service-oriented architecture (SOA), a flood of similarly functioning services have been deployed online. How to recommend services that meet users' individual needs has become a key issue in service recommendation. In recent years, methods based on collaborative filtering (CF) have been widely proposed for service recommendation. However, traditional CF typically exploits only low-dimensional and linear interactions between users and services and is challenged by the problem of data sparsity in the real world. To address these issues, inspired by deep learning, this article proposes a new deep CF model for service recommendation, named location-aware deep CF (LDCF). This model offers the following innovations: 1) the location features are mapped into high-dimensional dense embedding vectors; 2) the multilayer perceptron (MLP) captures the high-dimensional and nonlinear characteristics; and 3) the similarity adaptive corrector (AC) is first embedded in the output layer to correct the predictive quality of service. Equipped with these, LDCF can not only learn the high-dimensional and nonlinear interactions between users and services but also significantly alleviate the data sparsity problem. Extensive experiments conducted on a real-world Web service dataset indicate that LDCF clearly outperforms nine state-of-the-art service recommendation methods.
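The core LDCF idea of mapping location IDs to dense embeddings and feeding them through an MLP to capture nonlinear user-service interactions can be sketched as an untrained forward pass. This is a minimal illustration with made-up dimensions and random weights, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the paper's real dimensions and training data differ.
n_user_locs, n_svc_locs, emb_dim = 10, 8, 4

# 1) Location IDs -> dense embedding vectors (randomly initialised here;
#    in the real model these are learned).
user_emb = rng.normal(size=(n_user_locs, emb_dim))
svc_emb = rng.normal(size=(n_svc_locs, emb_dim))

# 2) A small MLP on the concatenated embeddings captures nonlinear
#    user-service interactions.
W1 = rng.normal(size=(2 * emb_dim, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=16); b2 = 0.0

def predict_qos(user_loc, svc_loc):
    x = np.concatenate([user_emb[user_loc], svc_emb[svc_loc]])
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
    return float(h @ W2 + b2)          # predicted QoS score (untrained)

pred = predict_qos(3, 5)
```

Training the embeddings and MLP weights jointly against observed QoS values is what lets the model learn high-dimensional, nonlinear interactions that plain CF misses.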

119 citations


Journal ArticleDOI
TL;DR: CNMF is proposed, a covering-based quality prediction method for Web services via neighborhood-aware matrix factorization that significantly outperforms eight existing quality prediction methods, including two state-of-the-art methods that also utilize neighborhood information with MF.
Abstract: The number of Web services on the Internet has been growing rapidly. This has made it increasingly difficult for users to find the right services from a large number of functionally equivalent candidate services. Inspecting every Web service for its quality value is impractical because it is very resource-consuming. Therefore, the problem of quality prediction for Web services has attracted a lot of attention in the past several years, with a focus on the application of the Matrix Factorization (MF) technique. Recently, researchers have started to employ user similarity to improve MF-based prediction methods for Web services. However, none of the existing methods has properly and systematically addressed two of the major issues: 1) retrieving appropriate neighborhood information, i.e., similar users and services; 2) utilizing full neighborhood information, i.e., both users' and services' neighborhood information. In this paper, we propose CNMF, a covering-based quality prediction method for Web services via neighborhood-aware matrix factorization. The novelty of CNMF is twofold. First, it employs a covering-based clustering method to find similar users and services, which does not require the number of clusters and cluster centroids to be prespecified. Second, it utilizes neighborhood information on both users and services to improve the prediction accuracy. The results of experiments conducted on a real-world dataset containing 1,974,675 Web service invocation records demonstrate that CNMF significantly outperforms eight existing quality prediction methods, including two state-of-the-art methods that also utilize neighborhood information with MF.
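The matrix factorization backbone that CNMF builds on can be sketched with plain per-entry SGD on a toy QoS matrix. The covering-based clustering and the neighborhood terms that make CNMF novel are omitted here, and all values, sizes, and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sparse user x service QoS matrix: 0 marks a missing observation.
# Values are made up for illustration.
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [0.0, 1.0, 5.0, 4.0],
])
n_users, n_svcs, k = R.shape[0], R.shape[1], 2
U = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
S = rng.normal(scale=0.1, size=(n_svcs, k))    # service latent factors

lr, reg = 0.02, 0.01
obs = [(i, j) for i in range(n_users) for j in range(n_svcs) if R[i, j] > 0]
for epoch in range(2000):
    for i, j in obs:
        err = R[i, j] - U[i] @ S[j]
        # regularised SGD step on both factor vectors
        U[i] += lr * (err * S[j] - reg * U[i])
        S[j] += lr * (err * U[i] - reg * S[j])

# Reconstruction error on observed entries should now be small;
# U[i] @ S[j] for an unobserved (i, j) is the predicted QoS.
rmse = np.sqrt(np.mean([(R[i, j] - U[i] @ S[j]) ** 2 for i, j in obs]))
```

CNMF's contribution sits on top of this: cluster users and services first (covering-based, so the cluster count need not be prespecified), then bias the factorization with both users' and services' neighborhood information.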

114 citations


Book
12 Jun 2021
TL;DR: The author argues for a disciplined, engineering approach to the creation of business-critical Web-based systems, populated by a set of framework activities that occur for all business-critical WebApp projects, regardless of size or complexity.
Abstract: The author argues for a disciplined, engineering approach to the creation of business-critical Web-based systems. Web engineering is an adaptable, incremental (evolutionary) process populated by a set of framework activities that occur for all business-critical WebApp projects, regardless of size or complexity. The following framework activities might be considered for Web engineering work: formulation, planning, analysis, modeling, page generation and testing, and customer evaluation. These activities are applied iteratively as a Web-based system evolves. Project management for Web engineering is governed by the unique characteristics of WebApp projects. These characteristics precipitate questions whose answers can make or break a project.

102 citations


Journal ArticleDOI
TL;DR: This paper goes through the Alexa.com top 4000 most popular sites to identify precisely 500 websites claiming to provide a REST web service API, and analyzes these 500 APIs for key technical features, degree of compliance with REST architectural principles, and for adherence to best practices.
Abstract: Businesses are increasingly deploying their services on the web, in the form of web applications, SOAP services, message-based services, and, more recently, REST services. Although the movement towards REST is widely recognized, there is not much concrete information regarding the technical features being used in the field, such as typical data formats, how HTTP verbs are being used, or typical URI structures, just to name a few. In this paper, we go through the Alexa.com top 4000 most popular sites to identify precisely 500 websites claiming to provide a REST web service API. We analyze these 500 APIs for key technical features, degree of compliance with REST architectural principles (e.g., resource addressability), and for adherence to best practices (e.g., API versioning). We observed several trends (e.g., widespread JSON support, software-generated documentation), but, at the same time, high diversity in services, including differences in adherence to best practices, with only 0.8 percent of services strictly complying with all REST principles. Our results can help practitioners evolve guidelines and standards for designing higher quality services and also understand deficiencies in currently deployed services. Researchers may also benefit from the identification of key research areas, contributing to the deployment of more reliable services.
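Compliance and best-practice checks of the kind this survey performs can be approximated with simple heuristics. The two rules below (URL versioning, verb-free resource paths) are simplified illustrations of my own, not the paper's actual criteria:

```python
import re

# Heuristic 1: does an API base URL carry a version segment like /v1/, /v2/?
def has_version(url: str) -> bool:
    return re.search(r"/v\d+(/|$)", url) is not None

# Heuristic 2: REST favours noun-based resource paths; a path segment that
# starts with a verb suggests RPC-style naming. Illustrative verb list only.
VERB_PREFIXES = ("get", "create", "delete", "update")

def uses_verb_in_path(path: str) -> bool:
    segments = [s for s in path.split("/") if s]
    return any(s.lower().startswith(VERB_PREFIXES) for s in segments)

assert has_version("https://api.example.com/v2/users")
assert not has_version("https://api.example.com/users")
assert uses_verb_in_path("/getUserById/42")   # RPC-style, not resource-oriented
assert not uses_verb_in_path("/users/42")     # resource-oriented
```

Running rules like these over hundreds of documented APIs is essentially how trends such as "only 0.8 percent strictly comply" become measurable.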

87 citations


Journal ArticleDOI
TL;DR: Given a service composition and a set of candidate services, Q2C first preprocesses the quality correlations among the candidate services and then constructs a quality correlation index graph to enable efficient queries for quality correlations.
Abstract: As enterprises around the globe embrace globalization, strategic alliances among enterprises have become an important means to gain competitive advantages. Enterprises cooperate to improve the quality or lower the prices of their services, which introduces quality correlations, i.e., the quality of a service is associated with that of other services. Existing approaches for service composition have not fully and systematically considered the quality correlations between services. In this paper, we propose a novel approach named Q2C (Query of Quality Correlation) to systematically model quality correlations and enable efficient queries of quality correlations for service compositions. Given a service composition and a set of candidate services, Q2C first preprocesses the quality correlations among the candidate services and then constructs a quality correlation index graph to enable efficient queries for quality correlations. Extensive experiments are conducted on a real-world web service dataset to demonstrate the effectiveness and efficiency of Q2C.

86 citations


Journal ArticleDOI
TL;DR: An overview of internet resources enabling and supporting chemical biology and early drug discovery with a main emphasis on web servers dedicated to virtual ligand screening and small-molecule docking is provided.
Abstract: The interplay between life sciences and advancing technology drives a continuous cycle of chemical data growth; these data are most often stored in open or partially open databases. In parallel, many different types of algorithms are being developed to manipulate these chemical objects and associated bioactivity data. Virtual screening methods are among the most popular computational approaches in pharmaceutical research. Today, user-friendly web-based tools are available to help scientists perform virtual screening experiments. This article provides an overview of internet resources enabling and supporting chemical biology and early drug discovery with a main emphasis on web servers dedicated to virtual ligand screening and small-molecule docking. This survey first introduces some key concepts and then presents recent and easily accessible virtual screening and related target-fishing tools as well as briefly discusses case studies enabled by some of these web services. Notwithstanding further improvements, already available web-based tools not only contribute to the design of bioactive molecules and assist drug repositioning but also help to generate new ideas and explore different hypotheses in a timely fashion while contributing to teaching in the field of drug development.

72 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive review of static, dynamic, and nature-inspired load balancing techniques in cloud environments, addressing data center response time and overall performance.

54 citations


Journal ArticleDOI
TL;DR: This work proposes a novel method to predict QoS values based on factorization machines, which leverages not only the QoS information of users and services but also their neighbors' information, and achieves higher prediction accuracy than other QoS prediction methods.
Abstract: With the prevalence of web services, a large number of similar web services are provided by different providers. To select the optimal service among these candidates, Quality of Service (QoS), representing the non-functional characteristics, plays an important role. To obtain the QoS values of web services, a number of web service QoS prediction methods have been proposed. Collaborative web service QoS prediction is one of the most popular approaches. Based on historical QoS data, collaborative QoS prediction methods employ memory-based collaborative filtering (CF), model-based CF, or their hybrids to predict QoS values. However, these methods usually consider only the QoS information of similar users and services, neglecting the correlation between them. To enhance the prediction accuracy, we propose a novel method to predict QoS values based on factorization machines, which leverages not only the QoS information of users and services but also the users' and services' neighborhood information. To evaluate our approach, we conduct experiments on a large-scale real-world dataset with 1,974,675 web service invocations. The experimental results show that our approach achieves higher prediction accuracy than other QoS prediction methods.
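The factorization machine at the heart of this method scores a feature vector x (which would one-hot encode the user, the service, and their neighborhood features) with a linear term plus pairwise interactions, and the pairwise sum can be computed in O(kn) using a well-known algebraic identity. A minimal sketch with random weights and an invented feature vector, cross-checked against the naive O(n^2) sum:

```python
import numpy as np

# FM prediction: w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j, where the
# pairwise term equals
#   0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
# giving O(k*n) cost instead of O(n^2). Weights here are random, purely
# for illustration; in practice they are learned from QoS records.
rng = np.random.default_rng(1)
n, k = 6, 3
w0 = 0.1
w = rng.normal(size=n)
V = rng.normal(size=(n, k))

def fm_predict(x):
    linear = w0 + w @ x
    s = V.T @ x                    # per-factor sums, shape (k,)
    s2 = (V ** 2).T @ (x ** 2)     # per-factor squared sums, shape (k,)
    return linear + 0.5 * float(np.sum(s ** 2 - s2))

x = np.array([1.0, 0.0, 1.0, 0.0, 0.5, 0.0])

# Cross-check against the naive O(n^2) pairwise sum.
naive = w0 + w @ x + sum(
    (V[i] @ V[j]) * x[i] * x[j] for i in range(n) for j in range(i + 1, n)
)
```

The factored pairwise term is what lets FMs model user-service (and neighbor) correlations even when any individual feature pair is observed rarely, which is exactly the sparsity regime of QoS data.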

52 citations


Journal ArticleDOI
TL;DR: A similarity-maintaining privacy preservation (SPP) strategy is designed, which aims to protect the user's privacy while maintaining the utility of user data, and a location-aware low-rank matrix factorization (LLMF) algorithm is proposed.
Abstract: Web service recommendation plays an important role in building service-oriented systems. QoS-based Web service recommendation has recently gained much attention for providing a promising way to help users find high-quality services. To accurately predict the QoS values of candidate Web services, Web service recommendation systems usually need to collect historical QoS data from users, which can potentially pose a threat to the user's privacy. However, how to simultaneously protect the user's privacy and make an accurate prediction has not been well studied. Taking these two aspects into consideration, we propose a novel QoS prediction approach for Web service recommendation in this paper. Specifically, we first design a similarity-maintaining privacy preservation (SPP) strategy, which aims to protect the user's privacy while maintaining the utility of user data. Then, we propose a location-aware low-rank matrix factorization (LLMF) algorithm, which employs $L_1$-norm low-rank matrix factorization to improve the model's robustness and combines the matrix factorization model with two kinds of location information (continent, longitude and latitude) in the prediction process. Experimental results on two publicly available real-world Web service QoS datasets demonstrate the effectiveness of our privacy-preserving QoS prediction approach.

Journal ArticleDOI
TL;DR: A Location-based Matrix Factorization using a Preference Propagation method (LMF-PP) to address the cold start problem and shows better performance than existing approaches in cold start environments as well as in warm start environments.
Abstract: Many web-based software systems have been developed in the form of composite services. It is important to accurately predict the Quality of Service (QoS) value of atomic web services because the performance of such composite services depends greatly on the performance of the atomic web services adopted. In recent years, collaborative filtering based methods for predicting web service QoS values have been proposed. However, they mainly face a cold start problem: it is difficult to make reliable predictions due to highly sparse historical data and newly introduced users and web services, and existing work only deals with the case of newly introduced users. In this article, we propose a Location-based Matrix Factorization using a Preference Propagation method (LMF-PP) to address the cold start problem. LMF-PP fuses invocation and neighborhood similarity, and the fused similarity is then utilized by preference propagation. LMF-PP is compared with existing approaches on a real-world dataset. Based on the experimental results, LMF-PP shows better performance than existing approaches in cold start environments as well as in warm start environments.

Journal ArticleDOI
TL;DR: This paper develops a privacy preserving protocol to predict missing QoS values and thereby providing Web service recommendations based on past QoS experiences and locations of users that is able to achieve user privacy by means of encrypting the QoS and location as well as to select suitable Web services for users without disclosing any private information.
Abstract: Personalized Web service recommendation based on Quality of Service (QoS) is gaining increasing popularity due to its promising ability to help users find high-quality services. Studies suggest that it is beneficial to use Collaborative Filtering (CF)-based techniques to facilitate Web service recommendations, which can achieve high accuracy in predicting the QoS of unobserved Web services. Besides QoS, the location of users and Web services has been another significant factor in predicting QoS values. The more factors that are available to the service providers, the more accurate the generated predictions can be. However, these factors are privacy-sensitive, and it is therefore risky to disclose them to any third-party service provider. To address this challenge, in this paper we develop a privacy-preserving protocol to predict missing QoS values and thereby provide Web service recommendations based on past QoS experiences and locations of users. Our protocol achieves user privacy by encrypting the QoS and location data and selects suitable Web services for users without disclosing any private information. We conduct extensive experimental analysis on publicly available datasets and show that our method is both secure and practical.

Journal ArticleDOI
TL;DR: A novel hybrid algorithm is proposed to address personalized recommendation for manufacturing service composition (MSC) by comprehensively considering QoS objective attributes and customer preference attributes; it recommends the most suitable solutions for the target customer through the ranking of customer preference attributes.

Journal ArticleDOI
TL;DR: This paper develops a new PSO-based algorithm to provide a set of trade-off solutions for the Web Service Location Allocation Problem and shows that the new algorithm can provide a more diverse range of solutions than three well-known multi-objective optimization algorithms.
Abstract: With the ever-increasing number of functionally similar web services available on the Internet, market competition is becoming intense. Web service providers (WSPs) realize that good Quality of Service (QoS) is a key to business success, and low network latency is a critical measure of good QoS. Because network latency is related to location, a straightforward way to reduce network latency is to allocate services to proper locations. However, the Web Service Location Allocation Problem (WSLAP) is a challenging task since there are multiple, potentially conflicting objectives and the solution search space is combinatorial in nature. In this paper, we consider minimizing the network latency and total cost simultaneously and model the WSLAP as a multi-objective optimization problem. We develop a new PSO-based algorithm to provide a set of trade-off solutions. The results show that the new algorithm provides a more diverse range of solutions than the three well-known multi-objective optimization algorithms compared. Moreover, the new algorithm performs better especially on large problems.
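The "set of trade-off solutions" the PSO returns is a Pareto front over (latency, cost), both to be minimized. A minimal dominance filter, with invented candidate scores, shows what membership in that front means; this is a generic sketch, not the paper's algorithm:

```python
# Each candidate allocation is scored as (network latency, total cost),
# both to be minimised. The numbers are invented for illustration.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points (the trade-off solutions)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

candidates = [(120, 30), (100, 50), (150, 20), (100, 40), (130, 45)]
front = pareto_front(candidates)
# (100, 50) drops out (dominated by (100, 40));
# (130, 45) drops out (dominated by (120, 30)).
```

A multi-objective PSO essentially explores the allocation space while maintaining an archive of exactly such non-dominated points, and "diversity of solutions" measures how well that archive spreads along the front.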

Journal ArticleDOI
TL;DR: An improved PROMETHEE method is applied to the most eligible web services, a Maximizing Deviation Method-based hybrid weight evaluation mechanism is adopted, and the top-k web services matching most closely with the QoS requirements of the end user are selected.
Abstract: Selecting an appropriate web service that fulfills the requirements of the end user is a challenging task. Most existing systems use Quality of Service (QoS) as the predominant parameter for web service selection, without any preprocessing or filtering. These systems consider all candidate web services during the selection process and perform unnecessary processing of web services that are far below the expectations of the end user. In this work, an approach for web service selection based on QoS parameters is proposed. The proposed method starts with prefiltering of candidate web services using a classification technique. An improved PROMETHEE method, which we call PROMETHEE Plus, is applied to the most eligible web services, and a Maximizing Deviation Method-based hybrid weight evaluation mechanism is adopted. The top-k web services matching most closely with the QoS requirements of the end user are selected. Experiments are conducted on a dataset of real-world web services. Experimental results show that our approach performs better in terms of end-user satisfaction and efficiency than existing similar approaches.

Journal ArticleDOI
21 Feb 2021
TL;DR: An architecture is introduced that collects source data and, in a supervised way, performs forecasting of the time series of Wikipedia page views, representing a significant step forward in the field of time series prediction for web traffic forecasting.
Abstract: Evaluating web traffic on a web server is highly critical for web service providers since, without a proper demand forecast, customers could experience lengthy waiting times and abandon the website. However, this is a challenging task since it requires making reliable predictions based on the arbitrary nature of human behavior. We introduce an architecture that collects source data and, in a supervised way, performs forecasting of the time series of page views. Based on the Wikipedia page views dataset proposed in a Kaggle competition in 2017, we created an updated version of it for the years 2018–2020. This dataset is processed, and the features and hidden patterns in the data are obtained for later designing an advanced version of a recurrent neural network called Long Short-Term Memory. This AI model is trained in a distributed manner, following the data-parallelism paradigm and using the Downpour training strategy. Predictions made for the seven dominant languages in the dataset are accurate, with loss and measurement error in reasonable ranges. Despite the fact that the analyzed time series have fairly weak patterns of seasonality and trend, the predictions have been quite good, evidencing that analyzing the hidden patterns and extracting the features before designing the AI model enhances its accuracy. In addition, the improvement in model accuracy brought by distributed training is remarkable. Since the task of predicting web traffic in as precise quantities as possible requires large datasets, we designed the forecasting system to be accurate despite having limited data in the dataset. We tested the proposed model on the new Wikipedia page views dataset we created and obtained a highly accurate prediction; the mean absolute error of the predictions relative to the original series is on average below 30. This represents a significant step forward in the field of time series prediction for web traffic forecasting.
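Before an LSTM can be trained on page-view counts, the raw series is typically framed as supervised (window → next value) pairs. The sketch below uses a synthetic series; this framing step is standard practice for recurrent models rather than a detail confirmed by the paper:

```python
import numpy as np

# Turn a raw page-view series into supervised samples: each input is a
# sliding window of `window` consecutive values, each target is the value
# immediately after that window.
def make_windows(series, window):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array([series[i + window] for i in range(len(series) - window)])
    return X, y

views = np.arange(10, 100, 10)        # synthetic daily views: 10, 20, ..., 90
X, y = make_windows(views, window=3)
# First sample: views on days 0-2 predict day 3.
```

Each row of `X` (optionally reshaped to (samples, timesteps, features)) then feeds the LSTM, and `y` provides the next-day target, which is the supervised setup the abstract refers to.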

Journal ArticleDOI
TL;DR: EvoMaster is applied to eight representational state transfer application programming interfaces to show how the tool can be used to automatically generate test cases that can find several bugs, even when using a naive black-box approach.
Abstract: We apply EvoMaster to eight representational state transfer application programming interfaces; show how the tool can be used to automatically generate test cases that can find several bugs, even when using a naive black-box approach; and discuss challenges that must be taken into account.

Journal ArticleDOI
TL;DR: A multiplex interaction-oriented service recommendation approach, named MISR, which incorporates three types of interactions between services and mashups into a deep neural network, which outperforms several state-of-the-art approaches regarding commonly used evaluation metrics.
Abstract: As service-oriented computing (SOC) technologies gradually mature, developing service-based systems (such as mashups) has become increasingly popular in recent years. Faced with the rapidly increasing number of Web services, recommending appropriate component services for developers on demand is a vital issue in the development of mashups. In particular, since a new mashup to develop contains no component services, it is a new “user” to a service recommender system. To address this new “user” cold-start problem, we propose a multiplex interaction-oriented service recommendation approach, named MISR, which incorporates three types of interactions between services and mashups into a deep neural network. In this article, we utilize the powerful representation learning abilities provided by deep learning to extract hidden structures and features from various types of interactions between mashups and services. Experiments conducted on a real-world dataset from ProgrammableWeb show that MISR outperforms several state-of-the-art approaches regarding commonly used evaluation metrics.

Journal ArticleDOI
TL;DR: This paper obtains the latent topics of all tags as well as the description documents for mashups and APIs based on a novel probabilistic topic model and proposes a novel topic-sensitive approach based on the Factorization Machines for mashup tag recommendation.
Abstract: Tagging systems have been widely used as a major way of managing Web service resources. Many portals such as ProgrammableWeb and BioCatalogue allow users to create manual tags annotating Web services and their compositions (e.g., mashups). This is extremely helpful for managing and retrieving enormous Web service data. In the past few years, many tag recommendation approaches have been proposed for Web services that contain few or no tags. Most of them exploit only the textual content or tag-service matrix information. These approaches can suffer from the data sparsity problem, especially when Web services have only a few tags or their auxiliary textual contents are hard to obtain. In the real world, plenty of relationships are available in recommendation systems, e.g., the composition relationship between services and the annotation relationship between mashups and tags. Such multi-relational data can be utilized as additional features to improve recommendation performance. In this paper, we exploit various types of relationships as features and propose a novel topic-sensitive approach based on Factorization Machines for mashup tag recommendation. Factorization Machines are utilized to model the pairwise interactions between all features and predict adequate tags for mashups. In this approach, we first obtain the latent topics of all tags as well as the description documents for mashups and APIs based on a novel probabilistic topic model. Then, a multi-relational network is constructed by mining various relationships from the Web service data. Various auxiliary information is subsequently extracted from the network to train the Factorization Machines. The proposed model is evaluated on three real-world datasets, and the experimental results show that it outperforms several state-of-the-art methods.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a more efficient neighbor selection process and multi-pheromone distribution method named Enhanced Flying Ant Colony Optimization (EFACO) to solve the WSC problem.
Abstract: Web Service Composition (WSC) can be defined as the problem of consolidating services to meet complex user requirements. These requirements can be represented as a workflow consisting of an abstract task sequence, where each sub-task represents a definition of some user requirement. In this work, we propose a more efficient neighbor selection process and multi-pheromone distribution method, named Enhanced Flying Ant Colony Optimization (EFACO), to solve this problem. The WSC problem poses a challenging issue, in which optimization algorithms search for the best combination of web services to achieve the functionality of the workflow's tasks. We aim to reduce the computational complexity of the Flying Ant Colony Optimization (FACO) algorithm by introducing three different enhancements. We analyze the performance of EFACO against six existing algorithms and present a summary of our conclusions.

Journal ArticleDOI
TL;DR: A web service framework based on SWMM (WEB-SWMM) is designed and implemented, which can provide real-time computing services for urban water management and is applicable to most existing hydrological models.
Abstract: Storm Water Management Model (SWMM), a hydrodynamic rainfall-runoff and urban drainage simulation model, is widely applied in planning, analysis, and design. It is worth mentioning that the hydrological and hydrodynamic simulation functions of SWMM can also provide decision support for real-time urban stormwater management. However, it remains challenging to directly apply traditional SWMM to real-time urban stormwater management based on web technology. Here we designed and implemented a web service framework based on SWMM (WEB-SWMM), which can provide real-time computing services for urban water management. To test the functionality, efficiency, and stability of WEB-SWMM, it was applied to an urban area in China. Test results show that WEB-SWMM could provide real-time computing services stably, quickly, and accurately. In general, the implementation of WEB-SWMM enables traditional SWMM to be quickly and efficiently applied in real-time urban stormwater management. What is more, the web-based hydrological model framework proposed in this paper is also applicable to most existing hydrological models.

Journal ArticleDOI
TL;DR: In this paper, an SVM-based encryption service model is constructed whose key generation derives from the conventional encryption operation mode with some improvements, and optimization techniques are applied to key generation in the two descendant methods of the application model, making it computationally more secure, specifically for the cloud environment.
Abstract: The growth of the internet era has led to a major transformation in how data is stored and applications are accessed. One such trend that promises endurance is cloud computing. Computing resources offered by the cloud include servers, networks, storage, and applications, all delivered as services. With the advent of the cloud, a single application can be delivered as a metered service to numerous users via an Application Programming Interface (API) accessible over the network. The services offered via the cloud include infrastructure, software, platform, database, and web services. The main motivation of this application model is to provide computationally secure key generation to protect data via encryption. Key generation in the cryptographic process falls into three categories in this research work. In the first part, an SVM-based encryption service model is constructed whose key generation derives from the conventional encryption operation mode with some improvements. To make the process more complex, optimization techniques are applied to key generation in the two descendant methods of the application model, which is computationally more secure, specifically for the cloud environment. The results of the security analysis confirm that the proposed application model withstands various attacks, such as Chosen Ciphertext and Chosen Plaintext indistinguishability attacks, for files. For images, it resists statistical and differential attacks well. Comparative analysis evidences the quality and strength of the developed application model compared with existing services.
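The abstract describes optimization-guided key generation only at a high level; one common pattern it suggests is scoring candidate keys with a fitness function and keeping the best. The sketch below assumes Shannon entropy as that fitness; it is an illustration of the pattern, not the paper's actual method.

```python
import math
import secrets
from collections import Counter

def shannon_entropy(key: bytes) -> float:
    """Bits of entropy per byte, used here as the fitness of a candidate key."""
    counts = Counter(key)
    n = len(key)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def generate_key(key_len=32, candidates=64):
    """Pick the highest-entropy key among randomly drawn candidates.

    secrets.token_bytes gives cryptographically strong randomness; the
    entropy-based selection on top is the assumed optimization step.
    """
    pool = [secrets.token_bytes(key_len) for _ in range(candidates)]
    return max(pool, key=shannon_entropy)
```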

Journal ArticleDOI
TL;DR: A fuzzy discrete multi-objective artificial bee colony (FDMOABC) approach is provided to solve the formulated FMOQWSCP, into which a new fuzzy ranking method is integrated to handle solution sorting and a new fuzzy distance measure is used to control and maintain the diversity of FDMOABC’s solutions.
Abstract: The multi-objective quality of service (QoS)-driven web service composition problem (MOQWSCP) aims to find the best combination of atomic web services (i.e., a composite service) that optimizes the QoS criteria, maximizing beneficial QoS parameters such as availability and reliability while minimizing negative ones such as price and response time, subject to the users’ requirements. Due to the dynamic environments in which the elementary services are invoked, some services’ QoS parameters are often ambiguous and uncertain, so it is inappropriate to express them as fixed values. Hence, the QoS parameters are represented by trapezoidal fuzzy numbers, and we formulate MOQWSCP as a fuzzy multi-objective optimization problem (FMOQWSCP). A fuzzy discrete multi-objective artificial bee colony (FDMOABC) approach is provided to solve the formulated FMOQWSCP, into which we integrate a new fuzzy ranking method for solution sorting and a new fuzzy distance measure to control and maintain the diversity of FDMOABC’s solutions. Furthermore, a fuzzy multi-criteria decision-making method (FMCDMM) is provided to determine the best composite service among the Pareto-optimal solutions generated by FDMOABC. Finally, two kinds of comparisons are performed to validate the performance and effectiveness of the FDMOABC and FMCDMM methods. In the former, the combined FDMOABC and FMCDMM methods are compared against the fuzzy single-objective optimization approaches TGA and EFPA, whereas in the latter, a multi-objective optimization comparison is performed among FDMOABC and the fuzzy-extended versions of the NSGA-II and SPEA2 algorithms.
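Trapezoidal fuzzy QoS values and their defuzzification, which fuzzy composition approaches of this kind rely on for comparing solutions, can be sketched as follows. The centroid formula is the standard one for a trapezoid (a, b, c, d); the centroid-based ranking is a simple stand-in for the paper's own fuzzy ranking method.

```python
def centroid(tfn):
    """Centroid (defuzzified value) of a trapezoidal fuzzy number (a, b, c, d)."""
    a, b, c, d = tfn
    num = (d ** 2 + c ** 2 + c * d) - (a ** 2 + b ** 2 + a * b)
    den = 3 * ((c + d) - (a + b))
    # a crisp number (a == b == c == d) has a zero denominator
    return (a + b + c + d) / 4.0 if den == 0 else num / den

def tfn_add(x, y):
    """Component-wise sum: aggregate additive QoS (e.g. response time)."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def rank_by_centroid(services):
    """Order (name, fuzzy-score) candidates by defuzzified score, best first."""
    return sorted(services, key=lambda s: centroid(s[1]), reverse=True)
```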

Journal ArticleDOI
TL;DR: Wang et al., as mentioned in this paper, proposed a secure collaborative filtering service recommendation algorithm that integrates content similarity: a content similarity module extracts semantic similarity information between Mashups and Web services, and the two modules are seamlessly integrated into a deep neural network to accurately and quickly predict Mashups' ratings of Web services.
Abstract: Cyber-Physical Systems (CPS) are secure real-time embedded systems. A CPS integrates the information sensed by physical sensors through high-speed real-time transmission and then carries out powerful information processing to effectively interact with and integrate the physical and information worlds. To improve the quality of service, optimize the existing physical space, and increase security, collaborative filtering algorithms have been widely used in various recommendation models for Internet of Things (IoT) services. However, general collaborative filtering algorithms cannot capture complex interaction information in the sparse Mashup-Web service call matrix, which leads to lower recommendation performance. Based on artificial intelligence technology, this study proposes a secure collaborative filtering service recommendation algorithm that integrates content similarity. A secure collaborative filtering module is used to capture the complex interaction information between Mashups and Web services, while a content similarity module extracts the semantic similarity information between them; the two modules are seamlessly integrated into a deep neural network to accurately and quickly predict Mashups' ratings of Web services. A real data set from an intelligent CPS is captured, and the proposed algorithm is compared with mainstream service recommendation algorithms. Experimental results show that the proposed algorithm not only efficiently completes the Web service recommendation task under sparse data but also shows better accuracy, effectiveness, and privacy. Thus, the proposed method is highly suitable for application in intelligent CPS.
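The fusion of a collaborative signal with a content-similarity signal can be sketched as below. The paper fuses the two modules with a deep neural network; here a fixed blend weight stands in for that learned fusion, and the bag-of-words cosine is an assumed content measure.

```python
import math
from collections import Counter

def cosine_text(a, b):
    """Bag-of-words cosine similarity: the content-similarity signal."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(ca[t] * cb[t] for t in ca)
    den = (math.sqrt(sum(v * v for v in ca.values()))
           * math.sqrt(sum(v * v for v in cb.values())))
    return num / den if den else 0.0

def predict_rating(mashup_vec, service_vec, mashup_desc, service_desc, w=0.5):
    """Blend the interaction signal with the description similarity.

    In the paper a deep network fuses the two modules; the fixed weight
    w stands in for that learned fusion in this sketch.
    """
    cf = sum(x * y for x, y in zip(mashup_vec, service_vec))  # latent-factor score
    return w * cf + (1 - w) * cosine_text(mashup_desc, service_desc)
```

The content term is what keeps predictions meaningful when the call matrix is sparse: a service with few recorded invocations can still score well against a Mashup whose description it matches.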

Journal ArticleDOI
TL;DR: Analysis of performance indices and characteristics shows that the cloud-enabled multimedia service delivery approach offers advantages, promising results, and a better user experience compared to the conventional adaptive HTTP streaming approach.
Abstract: A web service is a software entity that allows machine-to-machine communication, operates as a standalone unit, and interoperates over a network using standard web technologies such as Hypertext transfer pro...

Book ChapterDOI
19 Mar 2021
TL;DR: In this article, the user's intention is captured from the queries entered during searches, and query topics are generated by formalizing the input queries using an LSTM.
Abstract: Web services are software packages designed to facilitate machine-to-machine connectivity over an interoperable network. They are the elements of web applications, and it is possible to publish, search for, and use them on the Web. The use of web services has grown and diversified in recent years as the Internet expands. This paper offers an innovative approach for recommending web services. The user's intention is captured from the queries entered during searches. The search queries are collected as input and preprocessed, and query topics are generated by formalizing the input queries. The formalized queries, along with domain knowledge, are classified using an LSTM. After classification, the semantic similarity of the top 10% of search results is computed using Jaccard and Lin similarity. Web service repositories such as UDDI and WSDL have been incorporated with similar terms in the dataset to recommend web services.
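The Jaccard-based ranking step mentioned in the abstract can be sketched as follows (Lin similarity is omitted since it requires a taxonomy with information-content values; the service names and descriptions are made up).

```python
def jaccard(a, b):
    """Jaccard similarity between the token sets of two text strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(query, services, top_k=3):
    """Rank candidate service descriptions against the user query."""
    scored = sorted(services.items(),
                    key=lambda kv: jaccard(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]
```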

Journal ArticleDOI
TL;DR: The Context-Aware Services Recommendation based on Temporal-Spatial Effectiveness (named CASR-TSE) method is proposed, which significantly outperforms existing approaches and is much more effective than traditional recommendation techniques for personalized Web service recommendation.
Abstract: Recent years have witnessed growing research interest in Context-Aware Recommender Systems (CARS). CARS for Web services provide opportunities for exploring the important roles of temporal and spatial contexts separately. Although many CARS approaches have been investigated in recent years, they do not fully exploit temporal-spatial correlations to make personalized recommendations. In this paper, the Context-Aware Services Recommendation based on Temporal-Spatial Effectiveness (CASR-TSE) method is proposed. We first model the effectiveness of spatial correlations between the user's location and the service's location on user preference expansion before the similarity computation. Second, we present an enhanced temporal decay model that considers the weighted rating effect in the similarity computation to improve prediction accuracy. Finally, we evaluate the CASR-TSE method on a real-world Web services dataset. Experimental results show that the proposed method significantly outperforms existing approaches and is thus much more effective than traditional recommendation techniques for personalized Web service recommendation.
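A temporal decay model of the kind the abstract describes, where older ratings contribute less to the prediction, can be sketched as below; the exponential form and the half-life parameter are assumptions, not the paper's exact model.

```python
import math

def decayed_prediction(ratings, now, half_life=30.0):
    """Weighted average rating where older ratings count less.

    ratings   : list of (rating, timestamp_in_days)
    half_life : days after which a rating's weight halves (assumed)
    """
    lam = math.log(2) / half_life
    weights = [math.exp(-lam * (now - t)) for _, t in ratings]
    total = sum(weights)
    return (sum(w * r for w, (r, _) in zip(weights, ratings)) / total
            if total else 0.0)
```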

Proceedings ArticleDOI
01 Sep 2021
TL;DR: In this article, a robust composition framework for drone delivery services considering changes in the wind patterns in urban areas is proposed, which incorporates the dynamic arrival of drone services at the recharging stations.
Abstract: We propose a novel robust composition framework for drone delivery services considering changes in the wind patterns in urban areas. The proposed framework incorporates the dynamic arrival of drone services at the recharging stations. We propose a Probabilistic Forward Search (PFS) algorithm to select and compose the best drone delivery services under uncertainty. A set of experiments with a real drone dataset is conducted to illustrate the effectiveness and efficiency of the proposed approach.
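A forward search over delivery legs that composes the path with the highest success probability can be sketched as follows. This is a deterministic toy version of the idea, not the authors' PFS algorithm; the per-leg success probabilities stand in for the wind-dependent uncertainty, and the station graph is made up.

```python
def best_composition(graph, src, dst):
    """Forward search for the service path with the highest success probability.

    graph[u] : list of (v, p) edges, where p is the probability that the
    delivery leg u -> v succeeds under the forecast wind (assumed given).
    Returns (path, probability).
    """
    best = ([], 0.0)

    def dfs(node, path, prob):
        nonlocal best
        if node == dst:
            if prob > best[1]:
                best = (path[:], prob)
            return
        for nxt, p in graph.get(node, []):
            if nxt not in path:  # never revisit a recharging station
                path.append(nxt)
                dfs(nxt, path, prob * p)
                path.pop()

    dfs(src, [src], 1.0)
    return best
```

Because leg probabilities multiply along a path, a short path through a risky leg can lose to a longer but safer route, which is exactly the trade-off a robust composition framework must resolve.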

Journal ArticleDOI
TL;DR: This article presents a domain-specific language, called Inter-parameter Dependency Language (IDL), for the specification of dependencies among input parameters in web services, and proposes a mapping to translate an IDL document into a constraint satisfaction problem (CSP), enabling the automated analysis of IDL specifications.
Abstract: Web services often impose inter-parameter dependencies that restrict the way in which two or more input parameters can be combined to form valid calls to the service. Unfortunately, current specification languages for web services like the OpenAPI Specification (OAS) provide no support for the formal description of such dependencies, making it hardly possible to automatically discover and interact with services without human intervention. In this article, we present an approach for the specification and automated analysis of inter-parameter dependencies in web APIs. We first present a domain-specific language, called Inter-parameter Dependency Language (IDL), for the specification of dependencies among input parameters in web services. Then, we propose a mapping to translate an IDL document into a constraint satisfaction problem (CSP), enabling the automated analysis of IDL specifications. Specifically, we present a catalogue of nine analysis operations on IDL documents that allow computing, for example, whether a given request satisfies all the dependencies of the service. Finally, we present a tool suite including an editor, a parser, an OAS extension, and a constraint-programming-aided library supporting IDL specifications and their analyses. Together, these contributions pave the way for a new range of specification-driven applications in areas such as code generation and testing.
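For a single concrete request, checking IDL-style dependencies reduces to evaluating each constraint against the parameter assignment, as sketched below; the two dependencies are modeled loosely on IDL's conditional and ZeroOrOne constructs, and the parameter names are hypothetical. The full approach in the article compiles a whole IDL document into a CSP to answer richer analysis questions than this single-request check.

```python
# Each dependency is a predicate over the request's parameter dict.
DEPENDENCIES = [
    # roughly "IF type == 'hybrid' THEN fuel_policy;" (hypothetical params)
    lambda p: "fuel_policy" in p if p.get("type") == "hybrid" else True,
    # roughly "ZeroOrOne(radius, bbox);" - at most one of the two present
    lambda p: ("radius" in p) + ("bbox" in p) <= 1,
]

def is_valid_request(params):
    """A request is valid iff it satisfies every inter-parameter dependency."""
    return all(dep(params) for dep in DEPENDENCIES)
```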