
Showing papers on "Web service published in 2022"


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors constructed a Web APIs correlation graph that incorporates functional descriptions and compatibility information of Web APIs, and then proposed a correlation graph-based approach for personalized and compatible Web APIs recommendation in mobile app development.
Abstract: Using Web APIs registered in service-sharing communities for mobile app development can not only reduce development time and cost, but also reuse state-of-the-art research outcomes from a broad range of domains, ensuring up-to-date app development and applications. However, the large volume of available APIs in Web communities, together with their differences, makes API selection difficult when compatibility, pre-chosen partial APIs, and the often highly varied expected API functions must all be considered. Accordingly, how to recommend a set of functionally satisfactory and compatibility-optimal APIs, based on the app developer's multiple functional expectations and pre-chosen partial APIs, remains a significant challenge for successful app development. To address this challenge, we first construct a Web API correlation graph that incorporates functional descriptions and compatibility information of Web APIs, and then propose a correlation graph-based approach for personalized and compatible Web API recommendation in mobile app development. Finally, through extensive experiments on a real dataset crawled from Web API websites, we demonstrate the feasibility of our proposed recommendation approach.

47 citations


Posted ContentDOI
18 Mar 2022-bioRxiv
TL;DR: An easy-to-use web service, Hiplot, equipped with comprehensive and interactive biomedical data visualization functions (230+) including basic statistics, multi-omics, regression, clustering, dimensional reduction, meta-analysis, survival analysis, risk modeling, etc., is proposed.
Abstract: Modern web techniques provide an unprecedented opportunity for leveraging complex biomedical data generated in clinical, omics, and mechanistic experiments. Currently, functions for producing publication-ready biomedical data visualizations represent a primary technical hurdle in state-of-the-art omics-based web services, while the demand for visualization-based interactive data mining is ever-growing. Here, we propose an easy-to-use web service, Hiplot (https://hiplot.com.cn), equipped with comprehensive and interactive biomedical data visualization functions (230+) including basic statistics, multi-omics, regression, clustering, dimensional reduction, meta-analysis, survival analysis, risk modeling, etc. We used demo and real datasets to demonstrate the usage workflow and the core functions of Hiplot. It permits users to conveniently and interactively complete specialized visualization tasks that previously could only be done by senior bioinformatics or biostatistics researchers. A modern web client with efficient user interfaces and interaction methods has been implemented based on a custom component library and an extensible plugin system. Versatile output can also be produced in different environments using the cross-platform portable command-line interface (CLI) program, Hctl. A switchable view between the editable data table and the file uploader/path selection facilitates data importing, previewing, and exporting, while the plumber-based response strategy significantly reduces the time cost of generating basic scientific graphics. Diversified layouts, themes/styles, and color palettes on this website allow users to create high-quality, publication-ready graphics. Researchers devoted to both life and data science may benefit from this emerging web service.

46 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of the state-of-the-art collaborative filtering-based approaches for Web service QoS prediction and discuss their features and differences.
Abstract: With the growing number of competing Web services that provide similar functionality, Quality-of-Service (QoS) prediction is becoming increasingly important for various QoS-aware approaches of Web services. Collaborative filtering (CF), which is among the most successful personalized prediction techniques for recommender systems, has been widely applied to Web service QoS prediction. In addition to using conventional CF techniques, a number of studies extend the CF approach by incorporating additional information about services and users, such as location, time, and other contextual information from the service invocations. There are also some studies that address other challenges in QoS prediction, such as adaptability, credibility, privacy preservation, and so on. In this survey, we summarize and analyze the state-of-the-art CF QoS prediction approaches of Web services and discuss their features and differences. We also present several Web service QoS datasets that have been used as benchmarks for evaluating the prediction accuracy and outline some possible future research directions.

22 citations
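The memory-based family surveyed above predicts a missing QoS value as a similarity-weighted average over other users' observations. A minimal user-based sketch (cosine similarity over co-invoked services; real systems typically use Pearson correlation and the WS-DREAM data, and the user/service names here are invented):

```python
from math import sqrt

def cosine_sim(u, v, matrix):
    """Cosine similarity between two users over their co-invoked services."""
    common = [s for s in matrix[u] if s in matrix[v]]
    if not common:
        return 0.0
    num = sum(matrix[u][s] * matrix[v][s] for s in common)
    den = (sqrt(sum(matrix[u][s] ** 2 for s in common))
           * sqrt(sum(matrix[v][s] ** 2 for s in common)))
    return num / den if den else 0.0

def predict_qos(matrix, user, service):
    """Similarity-weighted average of other users' observed QoS values."""
    num = den = 0.0
    for other, qos in matrix.items():
        if other == user or service not in qos:
            continue
        w = cosine_sim(user, other, matrix)
        num += w * qos[service]
        den += abs(w)
    return num / den if den else None
```

Item-based variants and the hybrid/context-aware extensions discussed in the survey follow the same skeleton with different similarity and weighting choices.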


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a recommendation algorithm for a security collaborative filtering service that integrates content similarity, and the two modules are seamlessly integrated into a deep neural network to accurately and quickly predict the rating information of Mashup for the Web services.
Abstract: A cyber–physical system (CPS) is a security-critical real-time embedded system. A CPS integrates the information sensed by physical sensors through high-speed real-time transmission and then performs powerful information processing to effectively interact with and integrate the physical and information worlds. To improve quality of service, optimize the existing physical space, and increase security, collaborative filtering algorithms have been widely used in various recommendation models for Internet of Things (IoT) services. However, general collaborative filtering algorithms cannot capture complex interaction information in the sparse Mashup–Web service call matrix, which leads to lower recommendation performance. Based on artificial intelligence technology, this study proposes a recommendation algorithm for a security collaborative filtering service that integrates content similarity. A security collaborative filtering module is used to capture the complex interaction information between Mashups and Web services. By applying the content similarity module to extract the semantic similarity information between Mashups and Web services, the two modules are seamlessly integrated into a deep neural network to accurately and quickly predict Mashup rating information for Web services. A real dataset from an intelligent CPS is captured and then compared against mainstream service recommendation algorithms. Experimental results show that the proposed algorithm not only efficiently completes the Web service recommendation task under sparse data but also shows better accuracy, effectiveness, and privacy. Thus, the proposed method is highly suitable for intelligent CPS applications.

16 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduced prediction methods and divided them into three main categories: memory-based methods, model-based methods, and Collaborative Filtering (CF) methods combined with other methods.
Abstract: Nowadays, there are many Web services with similar functionality on the Internet. Users consider the Quality of Service (QoS) of these services to select the best one among them. Predicting the QoS values of Web services and recommending the best service to users based on these values is one of the major challenges in the Web service area. Most studies in this field use collaborative filtering-based methods for prediction. This paper introduces these prediction methods and divides them into three main categories: memory-based methods, model-based methods, and Collaborative Filtering (CF) methods combined with other methods. In each category, some of the best-known studies are introduced, and the problems and benefits of each category are reviewed. Finally, we discuss these methods and propose suggestions for future work.

14 citations


Journal ArticleDOI
TL;DR: In this article, a cloud solution is proposed to build an Internet of Things (IoT) platform applied in a greenhouse crop production context, where real-time and historical data, as well as prediction models, can be accessed by means of representational state transfer (RESTful) Web services developed for such a purpose.
Abstract: This article proposes a cloud solution to build an Internet of Things (IoT) platform applied in a greenhouse crop production context. Real-time and historical data, as well as prediction models, can be accessed by means of representational state transfer (RESTful) Web services developed for such a purpose. Forecasting is also provided following a Greenhouse Models as a Service (GMaaS) approach. Currently, our GMaaS tool provides forecasting based on computational models developed for inside climate, crop production, and irrigation processes. Traditionally, such models are hardcoded in applications or embedded in software tools to be used as decision support systems (DSSs). However, using a GMaaS approach, models are available as RESTful services to be used as needed. In addition, the proposed platform allows users to register new IoT devices and their greenhouse data in the FIWARE platform, providing a cloud-scale solution for the case study. RESTful services of the proposed platform are also used by a Web application, allowing users to interact easily with the system.

12 citations
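The GMaaS platform above exposes climate data and model forecasts as RESTful resources. The paper does not list its endpoint paths, so the routes and payloads below are hypothetical; the sketch only illustrates the REST-style dispatch pattern (template paths with `{id}` placeholders mapped to handlers):

```python
# Minimal REST-style dispatcher sketch for a GMaaS-like platform.
# Resource paths and payload fields are invented for illustration.
ROUTES = {}

def route(method, path):
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/greenhouses/{id}/climate")
def climate(params):
    # A real deployment would query stored sensor readings here.
    return {"greenhouse": params["id"], "temperature_c": 24.3, "humidity": 0.61}

@route("GET", "/greenhouses/{id}/forecast")
def forecast(params):
    # A GMaaS endpoint would run a computational model server-side.
    return {"greenhouse": params["id"], "predicted_yield_kg": 118.0}

def handle(method, path):
    """Match a concrete path against registered templates and dispatch."""
    parts = path.strip("/").split("/")
    for (m, template), handler in ROUTES.items():
        tparts = template.strip("/").split("/")
        if m != method or len(tparts) != len(parts):
            continue
        params, ok = {}, True
        for p, t in zip(parts, tparts):
            if t.startswith("{") and t.endswith("}"):
                params[t[1:-1]] = p       # bind path parameter
            elif p != t:
                ok = False
                break
        if ok:
            return handler(params)
    return {"error": 404}
```

In the actual platform this dispatch would sit behind an HTTP server and the FIWARE device registry; the sketch keeps only the routing logic.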


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a novel deep learning based approach called DeepTSQP to perform the task of temporal-aware service QoS prediction by feature integration, which first presents an improved temporal feature representation of users and services by integrating binarization feature and similarity feature.
Abstract: Quality of service (QoS) is widely used to represent the non-functional properties of web services and to differentiate those with the same functionality. How to accurately predict unknown service QoS has become a hot research issue. Although existing research has investigated temporal-aware service QoS prediction, conventional approaches suffer from two limitations. (1) Most of them cannot mine the time-series relationships and the interaction invocation information among users and services well. (2) Even though some sophisticated approaches use recurrent neural networks for temporal service QoS prediction, they mainly focus on learning the user-service temporal relationship and pay less attention to effectively representing implicit features, resulting in low service QoS prediction accuracy. To deal with these challenges, we propose a novel deep learning-based approach called DeepTSQP that performs temporal-aware service QoS prediction by feature integration. In DeepTSQP, we first present an improved temporal feature representation of users and services by integrating binarization features and similarity features. Then, we propose a deep neural network with gated recurrent units (GRU) to learn and mine temporal features among users and services. Finally, the DeepTSQP model can be trained by parameter optimization and applied to predict unknown service QoS. Extensive experiments are conducted on a large-scale real-world temporal QoS dataset, WS-Dream, with 27,392,643 historical QoS invocation records. The results demonstrate that DeepTSQP significantly outperforms state-of-the-art approaches for temporal-aware service QoS prediction across multiple evaluation metrics.

12 citations
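DeepTSQP's recurrent component is built on standard gated recurrent units. For reference, one GRU step computes an update gate z, a reset gate r, and a candidate state, then interpolates between the old and candidate hidden states. A dependency-free sketch of that generic cell (biases omitted; this is the textbook GRU update, not the paper's full feature-integration model):

```python
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def gru_cell(x, h, p):
    """One GRU step. p holds the weight matrices Wz, Uz, Wr, Ur, Wh, Uh.
    z gates how much of the state updates; r gates how much history
    feeds the candidate state; biases are omitted for brevity."""
    z = [sigmoid(v) for v in add(matvec(p["Wz"], x), matvec(p["Uz"], h))]
    r = [sigmoid(v) for v in add(matvec(p["Wr"], x), matvec(p["Ur"], h))]
    rh = [ri * hi for ri, hi in zip(r, h)]
    hc = [tanh(v) for v in add(matvec(p["Wh"], x), matvec(p["Uh"], rh))]
    # Interpolate: new state = (1 - z) * old + z * candidate.
    return [(1 - zi) * hi + zi * hci for zi, hi, hci in zip(z, h, hc)]
```

In DeepTSQP, sequences of the integrated binarization and similarity features would be fed through such cells before the prediction layer.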


Journal ArticleDOI
05 Jan 2022-Symmetry
TL;DR: The proposed model enhances the processes of web service selection and composition by minimizing the number of integrated Web Services, using the Multistage Forward Search (MSF), and uses the Spider Monkey Optimization (SMO) algorithm, which improves the services provided with regards to fundamentals of service composition methods symmetry and variations.
Abstract: Web service composition allows developers to create and deploy applications that take advantage of the capabilities of service-oriented computing. Such applications provide developers with reusability opportunities as well as seamless access to a wide range of services that perform simple and complex tasks to meet clients’ requests in accordance with service-level agreement (SLA) requirements. Web service composition has been addressed as a significant area of research in selecting the right web services that provide the expected quality of service (QoS) and attain the clients’ SLA. The proposed model enhances the processes of web service selection and composition by minimizing the number of integrated Web Services, using Multistage Forward Search (MSF). In addition, the proposed model uses the Spider Monkey Optimization (SMO) algorithm, which improves the services provided with regard to the fundamentals of service composition methods, symmetry, and variations. It achieves that by minimizing the response time of the service compositions, employing a Load Balancer to distribute the workload. It finds the right balance between Virtual Machine (VM) resources, processing capacity, and service composition capabilities. Furthermore, it enhances the resource utilization of Web Services and optimizes resource reusability effectively and efficiently. The experimental results are compared with the composition results of the Smart Multistage Forward Search (SMFS) technique to demonstrate the superiority, robustness, and effectiveness of the proposed model. The experimental results show that the proposed SMO model decreases service composition construction time by 40.4% compared to the composition time required by the SMFS technique. They also show that SMO increases the number of integrated web services in the service composition by 11.7% in comparison with the results of the SMFS technique. In addition, the dynamic behavior of SMO improves the proposed model’s throughput: the average number of requests that the service compositions processed successfully increased by 1.25% compared to the throughput of the SMFS technique. Furthermore, the proposed model decreases the service compositions’ response time by 0.25 s, 0.69 s, and 5.35 s for the Excellent, Good, and Poor classes, respectively, compared to the SMFS response times for the same classes.

11 citations


Journal ArticleDOI
TL;DR: Graph4Web as discussed by the authors uses a relation-aware graph attention network for web service classification: it first parses the web service description sequence into a dependency graph and initializes the embedding vector of each node in the graph by tuning the BERT model.

10 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors presented a model called TE-GSDMM (topic enhanced Gibbs sampling algorithm for the Dirichlet Multinomial Mixture model) to enhance Web service clustering by improving the quality of service representation vectors and integrating service collaboration similarity.

10 citations


Journal ArticleDOI
TL;DR: The Tiny Matchmaking Engine (TME) as discussed by the authors is a matchmaking and reasoning engine for the Web Ontology Language (OWL), designed and implemented with a compact and portable C core.


Journal ArticleDOI
TL;DR: In this paper, the authors present an approach for the specification and automated analysis of inter-parameter dependencies in web APIs using a domain-specific language called Inter-Parameter Dependency Language (IDL).
Abstract: Web services often impose inter-parameter dependencies that restrict the way in which two or more input parameters can be combined to form valid calls to the service. Unfortunately, current specification languages for web services like the OpenAPI Specification (OAS) provide no support for the formal description of such dependencies, which makes it hardly possible to automatically discover and interact with services without human intervention. In this article, we present an approach for the specification and automated analysis of inter-parameter dependencies in web APIs. We first present a domain-specific language, called Inter-parameter Dependency Language (IDL), for the specification of dependencies among input parameters in web services. Then, we propose a mapping to translate an IDL document into a constraint satisfaction problem (CSP), enabling the automated analysis of IDL specifications using standard CSP-based reasoning operations. Specifically, we present a catalogue of seven analysis operations on IDL documents allowing to compute, for example, whether a given request satisfies all the dependencies of the service. Finally, we present a tool suite including an editor, a parser, an OAS extension, a constraint programming-aided library, and a test suite supporting IDL specifications and their analyses. Together, these contributions pave the way for a new range of specification-driven applications in areas such as code generation and testing.
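The analysis operations above reduce dependency checking to constraint satisfaction over which parameters appear in a request. As a much-simplified sketch of that reduction (a presence-only boolean check, not the paper's CSP tooling; the dependency names mirror IDL operator styles such as Requires, Or, OnlyOne, and AllOrNone, and the example parameters are invented):

```python
# Each dependency is (kind, [params]); a request is the set of parameters
# it supplies. This checks only parameter presence, not value constraints.
def satisfies(present, dependencies):
    for kind, params in dependencies:
        flags = [p in present for p in params]
        if kind == "requires" and flags[0] and not flags[1]:
            return False          # IF params[0] is present THEN params[1] must be
        if kind == "or" and not any(flags):
            return False          # at least one of the parameters is required
        if kind == "only_one" and sum(flags) != 1:
            return False          # exactly one of the parameters is allowed
        if kind == "all_or_none" and 0 < sum(flags) < len(flags):
            return False          # the parameters must appear together or not at all
    return True
```

The paper's actual approach translates IDL documents into a full CSP so that a solver can answer richer questions (e.g., whether a dependency set is even satisfiable), which this presence check cannot.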

Proceedings ArticleDOI
27 Jul 2022
TL;DR: The first emergence of the Web was called the static web or Web 1.0, which was read-only as mentioned in this paper, followed by the Social Web or Web 2.0, which was interactive in nature and let users do more than read static pages.
Abstract: The web we use today has seen many iterations over the years since the original concept of the World Wide Web was introduced in the early 1990s. The first emergence of the web was called the static web, or Web 1.0, which was read-only. A further iteration of the web then came along and was called the Social Web, or Web 2.0, which was interactive in nature: users could do more than read static pages. This web was readable and writable, and saw the emergence of numerous social platforms. Web 3.0 offers an unmediated read–write web, or, to put it another way, a decentralized Internet. This paper provides a brief idea of how the journey of the web has so far transitioned from Web 1.0 through Web 2.0 and now onto Web 3.0, and what lies ahead with emerging technologies and Web 3.0.

Journal ArticleDOI
TL;DR: An improved eagle strategy algorithm is proposed to increase performance, directly improving computation time in large-scale DWSC on a cloud-based platform, over both functional and non-functional attributes of services.
Abstract: The Internet of Things (IoT) is expanding and becoming more popular in most industries, which leads to vast growth in cloud computing. The IoT architecture is integrated with cloud computing through web services. Recently, Dynamic Web Service Composition (DWSC) has been implemented to fulfill IoT and business processes. In recent years, the number of cloud services has multiplied, resulting in providers offering similar services with similar functionality but varying Quality of Service (QoS), for instance in the response time of web services; however, existing methods are insufficient for solving large-scale repository issues. Bio-inspired algorithms have shown better performance than the more restricted deterministic algorithms in solving large-scale service composition problems. Thus, an improved eagle strategy algorithm is proposed to increase performance, which directly translates into improved computation time in large-scale DWSC on a cloud-based platform, over both functional and non-functional attributes of services. With the proposed improved bio-inspired method, computation time can be improved, especially for a large-scale IoT repository.

Journal ArticleDOI
TL;DR: In this paper, the authors present a systematic literature review of the state-of-the-art techniques based on web service clustering and present the various mandatory and optional steps of WSC, evaluation measures, and datasets.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a compositional semantics-based service bundle recommendation model (CSBR), which is based on a semantic service package repository, which is constructed by mining the existing mashups.
Abstract: An increasing number of services are being offered, which leads to difficulties in choosing appropriate services during mashup development. Currently, several service recommendation techniques have been developed for mashup creation; however, they are largely limited to suggesting services with similar functionalities. The fundamental problem with these techniques is that they do not consider the large semantic gap between mashup descriptions and service descriptions. In this article, we propose a compositional semantics-based service bundle recommendation model (CSBR) to tackle this problem. CSBR is based on a semantic service package repository, which is constructed by mining existing mashups. Specifically, the reusable service packages, which consist of multiple collaborative services, are annotated with composite semantics rather than their original semantics. Based on the semantic service package repository, CSBR can recommend a bundle of services that cover the functional requirements of the mashup as completely as possible. Extensive experiments are conducted on a real-world dataset, and the results show CSBR achieves significant performance improvements in both precision and recall over state-of-the-art methods.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a novel solution by taking into account the contextual (more specifically, location) information of both services and users, which includes two key steps: (a) hybrid filtering, and (b) hierarchical prediction mechanism.
Abstract: With the proliferation of Internet-of-Things and continuous growth in the number of web services at the Internet-scale, the service recommendation is becoming a challenge nowadays. One of the prime aspects influencing the service recommendation is the Quality-of-Service (QoS) parameter, which depicts the performance of a web service. In general, the service provider furnishes the value of the QoS parameters before service deployment. However, in reality, the QoS values of service vary across different users, time, locations, etc. Therefore, estimating the QoS value of service before its execution is an important task, and thus, the QoS prediction has gained significant research attention. Multiple approaches are available in the literature for predicting service QoS. However, these approaches are yet to reach the desired accuracy level. In this article, we study the QoS prediction problem across different users, and propose a novel solution by taking into account the contextual (more specifically, location) information of both services and users. Our proposal includes two key steps: (a) hybrid filtering, and (b) hierarchical prediction mechanism. On the one hand, the hybrid filtering aims to obtain a set of similar users and services, given a target user and a service. On the other hand, the goal of the hierarchical prediction mechanism is to estimate the QoS value accurately by leveraging hierarchical neural-regression. We evaluated our framework on the publicly available WS-DREAM datasets. The experimental results show the outperformance of our framework over the major state-of-the-art approaches.

Journal ArticleDOI
TL;DR: In this article, a granule distribution-aware SVM (support vector machine) model was proposed for service recommendation, which takes advantage of granular computing to identify similar users, refine the training service set, and decrease the influence of unfaithful ratings.
Abstract: With the wide application of service technologies in various fields, the number of services is increasing dramatically, so a given function is often provided by many services with different QoS (quality of service). Due to a lack of professional knowledge, users often face great difficulty in finding the right services they really need. Hence, accurate and efficient service recommendation is not just an effective way of service advertising, but also an important means of improving user experience. Most traditional service recommendation methods are based on predictions of QoS values. However, because of the dynamic nature of the Internet, it is hard to guarantee that the predicted values are consistent with the actual values. This article proposes a granule distribution-aware SVM (support vector machine) model for service recommendation, namely GDSVM4SR. It takes advantage of granular computing to identify similar users, refine the training service set, and decrease the influence of unfaithful ratings. GDSVM4SR then trains an SVM separating hyperplane to rank unknown services and generate recommendations, thereby avoiding the prediction of QoS values. Experimental results show that the proposed GDSVM4SR outperforms several state-of-the-art methods in terms of both the efficiency and the precision of recommendation.

Journal ArticleDOI
TL;DR: In this article, the authors proposed an open standard called the Decentralized Open Web Cryptographic Standard (DOWCS) and a reference implementation for decentralized protection of sensitive data, taking OAuth and PGP as reference models.

Proceedings ArticleDOI
01 May 2022
TL;DR: This paper conducts the first black-box study measuring the extent of ReDoS vulnerabilities in live web services: identifying a service's regex-based input sanitization in its HTML forms or its API, finding vulnerable regexes among these regexes, crafting ReDoS probes, and pinpointing vulnerabilities.
Abstract: Web services use server-side input sanitization to guard against harmful input. Some web services publish their sanitization logic to make their client interface more usable, e.g., allowing clients to debug invalid requests locally. However, this usability practice poses a security risk. Specifically, services may share the regexes they use to sanitize input strings - and regex-based denial of service (ReDoS) is an emerging threat. Although prominent service outages caused by ReDoS have spurred interest in this topic, we know little about the degree to which live web services are vulnerable to ReDoS. In this paper, we conduct the first black-box study measuring the extent of ReDoS vulnerabilities in live web services. We apply the Consistent Sanitization Assumption: that client-side sanitization logic, including regexes, is consistent with the sanitization logic on the server-side. We identify a service's regex-based input sanitization in its HTML forms or its API, find vulnerable regexes among these regexes, craft ReDoS probes, and pinpoint vulnerabilities. We analyzed the HTML forms of 1,000 services and the APIs of 475 services. Of these, 355 services publish regexes; 17 services publish unsafe regexes; and 6 services are vulnerable to ReDoS through their APIs (6 domains; 15 subdomains). Both Microsoft and Amazon Web Services patched their web services as a result of our disclosure. Since these vulnerabilities were from API specifications, not HTML forms, we proposed a ReDoS defense for a popular API validation library, and our patch has been merged. To summarize: in client-visible sanitization logic, some web services advertise ReDoS vulnerabilities in plain sight. Our results motivate short-term patches and long-term fundamental solutions. "Make measurable what cannot be measured." - Galileo Galilei
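The vulnerability class above comes from ambiguous regexes whose nested quantifiers force backtracking engines to try exponentially many match decompositions. A textbook example (not drawn from the paper's dataset) and a linear-time rewrite of the same language:

```python
import re

# Classic ReDoS-prone pattern: the nested quantifiers in (a+)+ let the
# engine split a run of 'a's in exponentially many ways. Matching it
# against a near-miss like "a" * 40 + "!" triggers catastrophic
# backtracking in backtracking engines such as Python's re - do not run that.
vulnerable = re.compile(r"^(a+)+$")

# Equivalent safe pattern: same accepted strings, no ambiguity, linear time.
safe = re.compile(r"^a+$")

def accepts(pattern, s):
    """True if the whole string matches the pattern."""
    return pattern.fullmatch(s) is not None
```

Both patterns accept exactly the nonempty runs of 'a'; only their rejection behavior on near-misses differs in cost, which is why the paper's probes use long almost-matching inputs.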


Journal ArticleDOI
TL;DR: In this paper, a new hybridization of the firefly optimization algorithm with a fuzzy logic based web service composition model (F3L-WSCM) in a cloud environment for location awareness is presented.
Abstract: Recent advancements in cloud computing (CC) technologies mean that many distinct web services are now developed and hosted in cloud data centres. Web service composition currently receives considerable research attention due to its significance in real-time applications. Quality of Service (QoS) aware service composition concerns the selection of candidate services that maximize overall QoS. But these models fail to handle the uncertainty of QoS: the resulting QoS of the composite service as experienced by clients becomes unstable, and end-users risk failing compositions. On the other hand, trip planning is an essential technique supporting digital map services. It aims to determine a set of location-based services (LBS) that cover all client-intended activities specified in the query. But available web service composition solutions do not consider the complicated spatio-temporal features. To resolve this issue, this study develops a new hybridization of the firefly optimization algorithm with a fuzzy logic based web service composition model (F3L-WSCM) in a cloud environment for location awareness. The presented F3L-WSCM model involves a discovery module which enables the client to provide a query related to trip planning, such as flight booking, hotels, car rentals, etc. At the next stage, the firefly algorithm is applied to generate composition plans while minimizing their number. Then, fuzzy subtractive clustering (FSC) selects the best composition plan from the available composite plans. Besides, the presented F3L-WSCM model involves four input QoS parameters, namely service cost, service availability, service response time, and user rating. An extensive experimental analysis on the CloudSim tool exhibits the superior performance of the presented F3L-WSCM model in terms of accuracy, execution time, and efficiency.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a dynamic random testing (DRT) technique for web services, which is an improvement over the widely practiced random testing and partition testing (PT) approaches.
Abstract: In recent years, service oriented architecture (SOA) has been increasingly adopted to develop distributed applications in the context of the Internet. To develop reliable SOA-based applications, an important issue is how to ensure the quality of web services. In this article, we propose a dynamic random testing (DRT) technique for web services, which is an improvement over the widely-practiced random testing (RT) and partition testing (PT) approaches. We examine key issues when adapting DRT to the context of SOA, including a framework, guidelines for parameter settings, and a prototype for such an adaptation. Empirical studies are reported where DRT is used to test three real-life web services, and mutation analysis is employed to measure the effectiveness. Our experimental results show that, compared with the three baseline techniques, RT, Adaptive Testing (AT) and Random Partition Testing (RPT), DRT demonstrates higher fault-detection effectiveness with a lower test case selection overhead. Furthermore, the theoretical guidelines of parameter setting for DRT are confirmed to be effective. The proposed DRT and the prototype provide an effective and efficient approach for testing web services.
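The core of DRT is that a partition's selection probability is adjusted online as testing reveals (or fails to reveal) faults, so effort drifts toward fault-prone partitions. A toy sketch of one such adjustment step (the step size `epsilon` and the uniform redistribution rule are illustrative choices, not the paper's exact scheme or its guideline values):

```python
# One DRT-style probability update: raise the chosen partition's selection
# probability by epsilon when it revealed a fault, lower it otherwise, and
# spread the difference over the other partitions so the total stays 1.
def drt_step(probs, chosen, revealed_fault, epsilon=0.05):
    probs = dict(probs)                      # leave the caller's dict untouched
    others = [p for p in probs if p != chosen]
    delta = epsilon if revealed_fault else -epsilon
    delta = max(delta, -probs[chosen])       # never push a probability below 0
    probs[chosen] += delta
    for p in others:                         # redistribute to keep sum == 1
        probs[p] -= delta / len(others)
    return probs
```

A full DRT loop would draw the next test case from the partition distribution, run it, and apply this update after each verdict; a production version would also clamp the redistributed probabilities at zero.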

Journal ArticleDOI
TL;DR: This research produces a RESTful web services application for Geprek Chicken Dinner with access-token and API-key features as its security system, developed with extreme programming methods.
Abstract: The culinary industry in Indonesia is experiencing very rapid growth, and improving the quality of service in a business increases the profit it can obtain. Web services are the right technology to connect various platforms. Geprek Chicken Dinner's management uses two different applications for its business processes, which hampers those processes and costs time and energy, so the business processes are not optimal. The purpose of this research is therefore the design and implementation of RESTful web services at Geprek Chicken Dinner. The research produces a RESTful web services application with access-token and API-key features as its security system, built with the extreme programming development method. Black-box testing involving auditors and IT experts reached successful conclusions across various types of tests. The application is expected to serve as a base for all processed data and as a bridge between the applications used by management, so that those applications are integrated with each other.
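The access-token plus API-key scheme above can be sketched as a two-step guard: a client exchanges its API key for a short-lived token, and subsequent requests are authorized by that token. This is an invented minimal illustration (the client names, key storage, and flow are assumptions, and a production system would use a vetted framework such as an OAuth2/JWT middleware):

```python
import hashlib
import hmac
import secrets

# Illustrative server-side state: stored hash of each client's API key,
# and an in-memory table of issued access tokens. All values are invented.
API_KEYS = {"pos-app": hashlib.sha256(b"demo-secret").hexdigest()}
SESSIONS = {}

def issue_token(client_id, api_key):
    """Return an access token if the presented API key hashes to the stored value."""
    digest = hashlib.sha256(api_key.encode()).hexdigest()
    # compare_digest avoids leaking where the comparison diverges (timing attack).
    if not hmac.compare_digest(API_KEYS.get(client_id, ""), digest):
        return None
    token = secrets.token_hex(16)
    SESSIONS[token] = client_id
    return token

def authorize(token):
    """Resolve a bearer token back to its client, or None if unknown."""
    return SESSIONS.get(token)
```

A real deployment would also expire tokens and persist sessions; the sketch shows only the key-to-token exchange and the per-request check.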

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed to cluster Web services by utilizing both description documents and the structural information from the service relationship network, which can improve the quality and efficiency of service discovery and management within a service repository.
Abstract: Clustering Web services can improve the quality and efficiency of service discovery and management within a service repository. Nowadays, Web services frequently interact (e.g., composition relation and tag sharing relation) with each other to form a complex and heterogeneous service relationship network. The rich network relations inherently reflect either positive or negative clustering association between Web services, which can be a strong supplement to service semantics for characterizing functional affinities between Web services. In this paper, we propose to cluster Web services by utilizing both description documents and the structural information from the service relationship network. We first learn the content semantic information from service description documents based on the widely used Doc2vec model, and meanwhile, learn the structural semantic information from the service relationship network based on a network representation learning algorithm. Then, we propose to pretrain the content and structural semantic information to obtain the most relevant and unified features through training a service classification model with partially labeled data. Finally, a spectral clustering algorithm is utilized for Web services clustering based on the above unified features with preserved content and structural semantics. Therefore, the proposed services clustering approach takes advantage of both service content semantic and service network structure semantic based similarity between services. Extensive experiments are conducted on a real-world dataset from ProgrammableWeb, composed of 12919 Web API services. Experimental results demonstrate that our approach yields an improvement of 4.78% in precision and 5.4% in recall over the state-of-the-art method.
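Fusing content and structural embeddings into a single affinity matrix, the input that spectral clustering consumes, might look like this (a simplified sketch; the weighting parameter `alpha` and cosine affinity are assumptions, not the paper's exact pipeline):

```python
import math

def combine(content_vec, struct_vec, alpha=0.5):
    """Concatenate a service's content embedding (e.g. from Doc2vec) and its
    structural embedding (e.g. from network representation learning),
    weighting each part by alpha / (1 - alpha)."""
    return [alpha * x for x in content_vec] + [(1 - alpha) * x for x in struct_vec]

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 for a zero vector)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def affinity_matrix(service_vectors):
    """Pairwise similarities over all services; spectral clustering then
    operates on the graph Laplacian of this matrix."""
    return [[cosine(a, b) for b in service_vectors] for a in service_vectors]
```

In practice the pretraining step described in the paper would replace plain concatenation with features learned by the partially supervised classification model.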

Journal ArticleDOI
24 Nov 2022
TL;DR: In this paper, the web service community concept from a web service social network is used to substantially reduce the search space when dynamically composing web services based on functional and non-functional properties and personal preferences.
Abstract: Dynamic composition or orchestration remains one of the major challenges of web service technology. This article presents an innovative approach to the dynamic composition of web services based on functional and non-functional properties and personal preferences. The approach uses social networks of web services to support interaction between web services, making service selection and composition more closely aligned with user preferences. We use the web service community concept from the web service social network to significantly reduce the search space. This community was built with the direct participation of web service providers.

Proceedings ArticleDOI
25 Apr 2022
TL;DR: OmniCluster combines a one-dimensional convolutional autoencoder (1D-CAE), which extracts the main features of system instances, with a simple, novel, yet effective three-step feature selection strategy to accurately and efficiently cluster system instances for large-scale Web services.
Abstract: System instance clustering is crucial for large-scale Web services because it can significantly reduce the training overhead of anomaly detection methods. However, the vast number of system instances with massive time points, redundant metrics, and noise bring significant challenges. We propose OmniCluster to accurately and efficiently cluster system instances for large-scale Web services. It combines a one-dimensional convolutional autoencoder (1D-CAE), which extracts the main features of system instances, with a simple, novel, yet effective three-step feature selection strategy. We evaluated OmniCluster using real-world data collected from a top-tier content service provider providing services for one billion+ monthly active users (MAU), proving that OmniCluster achieves high accuracy (NMI=0.9160) and reduces the training overhead of five anomaly detection models by 95.01% on average.
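The abstract does not spell out OmniCluster's three-step feature selection; one plausible sketch of such a strategy (drop constant metrics, drop redundant highly correlated metrics, rank the rest by variance; all three steps are assumptions, not the paper's actual algorithm) is:

```python
import math
import statistics

def select_features(series_by_metric, var_floor=1e-6, corr_cap=0.95):
    """Hypothetical three-step metric selection for system instances."""
    # Step 1: remove constant (zero-variance) metrics.
    kept = {m: s for m, s in series_by_metric.items()
            if statistics.pvariance(s) > var_floor}

    def corr(a, b):
        # Pearson correlation between two equal-length series.
        ma, mb = statistics.fmean(a), statistics.fmean(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                        sum((y - mb) ** 2 for y in b))
        return num / den if den else 0.0

    # Step 2: remove metrics highly correlated with one already selected.
    selected = []
    for m, s in kept.items():
        if all(abs(corr(s, series_by_metric[o])) < corr_cap for o in selected):
            selected.append(m)

    # Step 3: order the survivors by variance, most informative first.
    selected.sort(key=lambda m: statistics.pvariance(series_by_metric[m]),
                  reverse=True)
    return selected
```

Pruning redundant metrics before the 1D-CAE is what keeps both the autoencoder and the downstream clustering cheap at the scale of billions of time points.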

Journal ArticleDOI
TL;DR: In this paper , a new heuristic trust-aware web service composition approach (CWS_SMA) is proposed to avoid disclosure of sensitive information by selecting services where their mutual trust is high while preserving a good quality of service.
Abstract: In the next few years, the number of devices connected to the web will increase dramatically. This has encouraged the development of complex and intelligent service-based applications using new paradigms and technologies such as edge computing and cloud computing. These applications are mainly built through the interaction of heterogeneous web services. Answering end-users’ queries demands that these services collect, integrate and communicate sensitive data, which makes data vulnerability a serious issue. This issue is a critical problem in automatic service composition, since users are frequently concerned about the security and control of their sensitive data. To address this problem, our main idea is that disclosure of sensitive information can be avoided by selecting services whose mutual trust is high while preserving as good a quality of service (QoS) as possible. A new Heuristic Trust-aware Web Services Composition Approach (CWS_SMA) is proposed. CWS_SMA aims to: (i) compute an optimal trust- and QoS-aware web service composition based on a mathematical coordinate system, and (ii) improve the composition response time using a set of cooperative intelligent agents. Finally, the simulation results show the effectiveness of the proposed solution.
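A coordinate-system-style aggregation of trust and QoS could be sketched like this (the distance-from-ideal formulation and the weight `w_trust` are assumptions for illustration; the paper's exact formula is not given in the abstract):

```python
import math

def composite_score(trust, qos, w_trust=0.6):
    """Score a candidate service by its weighted distance from the ideal
    point (trust=1, qos=1) in a 2-D coordinate system; higher is better."""
    dt, dq = 1.0 - trust, 1.0 - qos
    return 1.0 - math.sqrt(w_trust * dt * dt + (1.0 - w_trust) * dq * dq)

def compose(tasks):
    """Greedy composition: for each task in the workflow pick the candidate
    service with the best combined trust/QoS score."""
    plan = {}
    for task, candidates in tasks.items():
        plan[task] = max(candidates,
                         key=lambda c: composite_score(c["trust"], c["qos"]))
    return plan
```

With `w_trust > 0.5` the selection leans toward high mutual trust, matching the paper's goal of limiting sensitive-data disclosure while keeping QoS acceptable; the cooperative agents in CWS_SMA would parallelize this search rather than run it greedily.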

Proceedings ArticleDOI
22 Sep 2022
TL;DR: A model describing a web site's security state is built, together with a method for estimating how secure the modeled site is and how likely it is to become a victim of compromise.
Abstract: Web sites and services are probably the most used digital channels today, from ordinary web sites to cloud services that enable many aspects of our digital lives. Due to the popularity of the web, it is also a very common target of cyber attacks, which typically focus either on the web application itself or on the underlying server infrastructure. Regarding the highest level of the stack - the web application - there are many available frameworks and content management systems (CMS) for rapid web development, from those more oriented to developers (e.g. Spring, Django) to those that focus on end users (e.g. Wordpress, Joomla). A typical problem with using a framework or a CMS is the need for constant attention to its security, which is addressed by regular patching of the systems. Going a bit lower toward the web server, one can observe security-related features that might or might not be implemented on the server, such as header security (e.g. cookie-related flags, forced encryption, etc.). The state of all the mentioned parameters can be obtained by web crawlers that browse the web and collect specific information about web applications, sites and the servers that run them. In this paper, we propose a model for estimating the possibility of web compromise based on historical crawler-collected data. Given the large amounts of data that can be gathered from web sites and, especially, indications of compromise of particular web sites, we can determine what factors might lead to a compromise in the near future. In this sense, we propose a method for analyzing web site data with respect to known compromises from historical data. We build a model that describes a web site's security state and use the method to estimate how secure the modeled site is and how likely it is to become a victim of compromise.
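A toy risk score built from crawler-observed features might look as follows (the feature names, weights, and bias are invented for illustration; the paper derives its model from historical compromise data rather than hand-set weights):

```python
import math

# Hypothetical per-finding weights; in the paper's setting these would be
# fitted against historical records of compromised sites.
RISK_WEIGHTS = {
    "missing_hsts": 0.15,
    "missing_csp": 0.20,
    "insecure_cookies": 0.10,
    "outdated_cms": 0.40,
    "no_https": 0.30,
}

def compromise_risk(site_features):
    """Logistic-style risk score in (0, 1) from crawler findings.
    site_features maps finding name -> True/False as observed by the crawler."""
    z = sum(w for f, w in RISK_WEIGHTS.items() if site_features.get(f))
    return 1.0 / (1.0 + math.exp(-(z - 0.5)))   # 0.5 acts as a bias term
```

A site running an outdated CMS with no CSP and no HTTPS scores noticeably higher than one with a clean crawl, which is exactly the kind of ranking a defender would use to prioritize patching.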