
Showing papers on "Web service published in 2019"


Journal ArticleDOI
TL;DR: The latest improvements made to the frameworks which enhance the interconnectivity between public EMBL-EBI resources and ultimately enhance biological data discoverability, accessibility, interoperability and reusability are described.
Abstract: The EMBL-EBI provides free access to popular bioinformatics sequence analysis applications as well as to a full-featured text search engine with powerful cross-referencing and data retrieval capabilities. Access to these services is provided via user-friendly web interfaces and via established RESTful and SOAP Web Services APIs (https://www.ebi.ac.uk/seqdb/confluence/display/JDSAT/EMBL-EBI+Web+Services+APIs+-+Data+Retrieval). Both systems have been developed with the same core principles that allow them to integrate an ever-increasing volume of biological data, making them an integral part of many popular data resources provided at the EMBL-EBI. Here, we describe the latest improvements made to the frameworks which enhance the interconnectivity between public EMBL-EBI resources and ultimately enhance biological data discoverability, accessibility, interoperability and reusability.
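As a rough illustration of the RESTful access mode mentioned above, the minimal Python sketch below queries an EMBL-EBI search endpoint with the requests library; the endpoint path, domain name and query parameters are assumptions for illustration and should be checked against the current API documentation.

# Minimal sketch of calling an EMBL-EBI RESTful endpoint (illustrative only;
# the exact path and parameters should be verified against the API docs).
import requests

BASE = "https://www.ebi.ac.uk/ebisearch/ws/rest"   # EBI Search REST root (assumed)

def search(domain: str, query: str, size: int = 10) -> dict:
    """Run a text search against one EBI Search domain and return JSON results."""
    resp = requests.get(
        f"{BASE}/{domain}",
        params={"query": query, "size": size, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    hits = search("uniprot", "p53 AND organism:9606")
    print(hits.get("hitCount"), "entries matched")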

3,529 citations


Journal ArticleDOI
TL;DR: A comprehensive review of BIM and IoT integration research is conducted to identify common emerging application areas and common design patterns for tackling BIM-IoT device integration, together with an examination of current limitations and predictions of future research directions.

418 citations


Journal ArticleDOI
TL;DR: The full-text results in PTC significantly increase biomedical concept coverage, and it is anticipated that this expansion will both enhance existing downstream applications and enable new use cases.
Abstract: PubTator Central (https://www.ncbi.nlm.nih.gov/research/pubtator/) is a web service for viewing and retrieving bioconcept annotations in full text biomedical articles. PubTator Central (PTC) provides automated annotations from state-of-the-art text mining systems for genes/proteins, genetic variants, diseases, chemicals, species and cell lines, all available for immediate download. PTC annotates PubMed (29 million abstracts) and the PMC Text Mining subset (3 million full text articles). The new PTC web interface allows users to build full text document collections and visualize concept annotations in each document. Annotations are downloadable in multiple formats (XML, JSON and tab delimited) via the online interface, a RESTful web service and bulk FTP. Improved concept identification systems and a new disambiguation module based on deep learning increase annotation accuracy, and the new server-side architecture is significantly faster. PTC is synchronized with PubMed and PubMed Central, with new articles added daily. The original PubTator service has served annotated abstracts for ∼300 million requests, enabling third-party research in use cases such as biocuration support, gene prioritization, genetic disease analysis, and literature-based knowledge discovery. We demonstrate the full text results in PTC significantly increase biomedical concept coverage and anticipate this expansion will both enhance existing downstream applications and enable new use cases.
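For readers who want to script against the RESTful interface described above, a minimal sketch follows; the export URL and parameter names are assumptions based on the service's documented patterns and should be verified before use.

# Sketch: retrieve PubTator Central annotations for a few PMIDs in BioC JSON.
# The endpoint path and parameter names are assumed for illustration.
import requests

PTC_EXPORT = "https://www.ncbi.nlm.nih.gov/research/pubtator-api/publications/export/biocjson"

def fetch_annotations(pmids):
    """Download bioconcept annotations for the given PubMed IDs."""
    resp = requests.get(PTC_EXPORT, params={"pmids": ",".join(pmids)}, timeout=60)
    resp.raise_for_status()
    return resp.text  # one BioC JSON document per article

if __name__ == "__main__":
    print(fetch_annotations(["29446767", "25359968"])[:500])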

251 citations


Journal ArticleDOI
TL;DR: The paper demonstrates that the approach can provide a viable accountability solution for the online service industry and avoids the low throughput and resource-intensive pitfalls associated with Bitcoin's "Proof-of-Work" (PoW) mining.
Abstract: Incorporating accountability mechanisms in online services requires effective trust management and an immutable, traceable source of truth for transaction evidence. The emergence of the blockchain technology brings in high hopes for fulfilling most of those requirements. However, a major challenge is to find a proper consensus protocol that is applicable to the crowdsourcing services in particular and online services in general. Building upon the idea of using blockchain as the underlying technology to enable tracing transactions for service contracts and dispute arbitration, this paper proposes a novel consensus protocol that is suitable for the crowdsourcing as well as the general online service industry. The new consensus protocol is called “Proof-of-Trust” (PoT) consensus; it selects transaction validators based on the service participants’ trust values while leveraging RAFT leader election and Shamir's secret sharing algorithms. The PoT protocol avoids the low throughput and resource-intensive pitfalls associated with Bitcoin's “Proof-of-Work” (PoW) mining, while addressing the scalability issue associated with the traditional Paxos-based and Byzantine Fault Tolerance (BFT)-based algorithms. In addition, it addresses the unfaithful behaviors that cannot be dealt with in the traditional BFT algorithms. The paper demonstrates that our approach can provide a viable accountability solution for the online service industry.
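The abstract does not spell out the selection algorithm, so the sketch below is one hedged interpretation of trust-weighted validator selection; the function name, quorum size and trust threshold are illustrative, and the RAFT election and Shamir secret sharing steps are omitted.

# Hypothetical sketch of trust-weighted validator selection for a
# "Proof-of-Trust"-style protocol. Thresholds and quorum size are made up;
# leader election (RAFT) and Shamir secret sharing are not shown.
import random

def select_validators(trust_scores, quorum=4, min_trust=0.5, seed=None):
    """Pick `quorum` distinct validators, with probability proportional to trust."""
    rng = random.Random(seed)
    eligible = {n: t for n, t in trust_scores.items() if t >= min_trust}
    if len(eligible) < quorum:
        raise ValueError("not enough trusted participants to form a quorum")
    chosen = []
    pool = dict(eligible)
    for _ in range(quorum):
        nodes, weights = zip(*pool.items())
        pick = rng.choices(nodes, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]          # sample without replacement
    return chosen

if __name__ == "__main__":
    trust = {"alice": 0.9, "bob": 0.7, "carol": 0.95, "dave": 0.4, "erin": 0.8, "frank": 0.6}
    print(select_validators(trust, quorum=3, seed=42))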

167 citations


Book
23 Oct 2019
TL;DR: A book on context-aware pervasive systems, covering their structure and elements, context-aware mobile services, artifacts, software agents, addressing and communication, sensor networks, security, mirror-world models, declarative approaches and design patterns for constructing such systems, and a look at a future with aware systems.
Abstract (table of contents):
WHAT IS CONTEXT-AWARE BEHAVIOR? - Current Computing Trends: From the Virtual to the Physical; Context, Context Awareness, and Situations; When Systems Become Context Aware; An Overview of This Book; References
THE STRUCTURE AND ELEMENTS OF CONTEXT-AWARE PERVASIVE SYSTEMS - Analogies; The Elements of a Context-Aware Pervasive System; An Abstract Architecture; Infrastructures, Middleware, and Toolkits; Issues of Security, Privacy, and Efficiency; Summary; References
CONTEXT-AWARE MOBILE SERVICES - The Rise of Mobile Services; Context for Mobile Device Users; Location-Based Services; Ambient Services; From Ambient Services to Place-Based E-Communities; Enhancing Context-Aware Mobile Services With Mobile Code and Policy: The MHS Example; Enhancing Context-Aware Mobile Services With Multiagent Technology: The Example of Proximity-Based Reverse Auctions; Summary and Further Developments; Acknowledgment; References
CONTEXT-AWARE ARTIFACTS - Aware Objects; Architectural Design Space for a Context-Aware Artifact; Context-Aware Mobile Phones: An Illustration; Summary; References
CONTEXT-AWARE MOBILE SOFTWARE AGENTS FOR INTERACTION WITH WEB SERVICES IN MOBILE ENVIRONMENTS - Agents: Mobile and Intelligent; Scenarios; A Brief Review of Agent Platforms for Ubiquitous Computing; CALMA Architecture; Prototype Implementation and Evaluation; Summary; Acknowledgments; References
CONTEXT-AWARE ADDRESSING AND COMMUNICATION FOR PEOPLE, THINGS, AND SOFTWARE AGENTS - Context-Aware Communication for People; Context-Aware Addressing and Commanding for Objects; Context-Aware Communication for Software Agents; Summary and Conclusion; References
CONTEXT-AWARE SENSOR NETWORKS - Context-Aware Sensors: The Concept; A Framework for Context-Aware Sensors; Implementation and Application Scenario; Summary; Acknowledgment; References
CONTEXT-AWARE SECURITY - Traditional Security Issues and Models; Context-Aware Security Systems; From Context-Aware Security to Context-Aware Safety; Summary; References
CONTEXT AWARENESS AND MIRROR-WORLD MODELS - Gelernter's Mirror Worlds; Nexus; Virtual Worlds, Virtual Environments; Digital Cities; Aware Spaces: Smart Environments and Smart Spaces; Mirror Worlds: Context and Ontologies; Summary; References
CONSTRUCTING CONTEXT-AWARE PERVASIVE SYSTEMS: DECLARATIVE APPROACHES AND DESIGN PATTERNS - Representing Situations; Five Other Ways to Represent a Meeting; Metaprogramming With Situation Programs: Examples; Another Declarative Approach; Toward Design Patterns for Context-Aware Applications: Situation Patterns; Summary; Acknowledgment; References
A FUTURE WITH AWARE SYSTEMS - The Emerging Future: Taking Awareness for Granted; Scalability and Usability; Final Words
INDEX

139 citations


Journal ArticleDOI
TL;DR: A fully automated white-box test-generation approach for RESTful web services is proposed, in which test cases are evolved and rewarded by code coverage and fault-finding metrics; implemented in the open-source EvoMaster tool, it found 80 real bugs in five open-source services.
Abstract: RESTful APIs are widespread in industry, especially in enterprise applications developed with a microservice architecture. A RESTful web service will provide data via an API over the network using HTTP, possibly interacting with databases and other web services. Testing a RESTful API poses challenges, because inputs/outputs are sequences of HTTP requests/responses to a remote server. Many approaches in the literature do black-box testing, because often the tested API is a remote service whose code is not available. In this article, we consider testing from the point of view of the developers, who have full access to the code that they are writing. Therefore, we propose a fully automated white-box testing approach, where test cases are automatically generated using an evolutionary algorithm. Tests are rewarded based on code coverage and fault-finding metrics. However, REST is not a protocol but rather a set of guidelines on how to design resources accessed over HTTP endpoints. For example, there are guidelines on how related resources should be structured with hierarchical URIs and how the different HTTP verbs should be used to represent well-defined actions on those resources. Test-case generation for RESTful APIs that only rely on white-box information of the source code might not be able to identify how to create prerequisite resources needed before being able to test some of the REST endpoints. Smart sampling techniques that exploit the knowledge of best practices in RESTful API design are needed to generate tests with predefined structures to speed up the search. We implemented our technique in a tool called EvoMaster, which is open source. Experiments on five open-source, yet non-trivial, RESTful services show that our novel technique automatically found 80 real bugs in those applications. However, obtained code coverage is lower than the one achieved by the manually written test suites already existing in those services. Research directions on how to further improve such an approach are therefore discussed, such as the handling of SQL databases.
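The sketch below is a much-simplified stand-in for the search-based idea, not EvoMaster's actual white-box algorithm: it fires randomized HTTP calls and keeps those that exercise a status code not seen before on an endpoint, a crude black-box proxy for the coverage- and fault-based rewards described above; the endpoint paths and parameters are hypothetical.

# Very simplified search-based REST test sketch (illustrative, not EvoMaster):
# random GET calls are "rewarded" when they trigger a status code not seen
# before on that endpoint, a crude stand-in for coverage/fault-finding fitness.
import random
import requests

def fuzz_endpoint(base_url, paths, iterations=50, seed=0):
    rng = random.Random(seed)
    seen = set()          # (path, status) pairs observed so far
    interesting = []      # calls that produced something new
    for _ in range(iterations):
        path = rng.choice(paths)
        params = {"id": rng.randint(-1, 10_000), "q": rng.choice(["", "x", "'--"])}
        try:
            status = requests.get(base_url + path, params=params, timeout=5).status_code
        except requests.RequestException:
            status = None
        key = (path, status)
        if key not in seen:            # novelty acts as the reward signal
            seen.add(key)
            interesting.append((path, params, status))
    return interesting

if __name__ == "__main__":
    for call in fuzz_endpoint("http://localhost:8080", ["/api/items", "/api/users"]):
        print(call)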

111 citations


Proceedings ArticleDOI
04 Apr 2019
TL;DR: A comprehensive review of the state of the art of Service Mesh is presented, the related challenges and its adoption are discussed, and the opportunities for future research in this subject are highlighted.
Abstract: While the technology development towards microservices can significantly improve the speed and agility of software service delivery, it also raises the operational complexity associated with modern applications. This has led to the emergence of Service Mesh, a promising approach to mitigate this situation by introducing a dedicated infrastructure layer over microservices without imposing modification on the service implementations. Aiming to inspire more practical research work in this emerging area, in this paper we present a comprehensive review of the state of the art of Service Mesh and discuss the related challenges and its adoption. Finally, we highlight the opportunities for future research in this subject.

101 citations


Journal ArticleDOI
TL;DR: A processing framework is proposed that seeks to optimize the searching efficiency of typed resources in terms of IoT data, information and knowledge inside an integrated architecture, and the framework includes Data Graph, Information Graph and Knowledge Graph.
Abstract: Web services are middleware designed to support the interoperation between different software systems and devices over the Web. Today, we encounter a variety of situations in which services deployed on the Internet of things (IoT), such as wireless sensor networks, ZigBee networks, and mobile edge computing frameworks, have become a widely used infrastructure that is increasingly flexible, intelligent and automated. This infrastructure supports multimedia applications, E-commerce transactions, business collaborations and information processing. However, how to manage these services has been a popular topic in IoT research. Existing research covers numerous resource models, based on sensors or human interactions. In the everything-as-a-service paradigm, the things made available as services include products, processes, resource management and security provision. To cope with the challenge of how to manage these services, we present an extension of the Data, Information, Knowledge and Wisdom architecture as a resource expression model to construct a systematic approach to modeling both entity and relationship elements. The entity elements are formalized from a fully typed, multiple-related dimensions perspective to obtain a whole frequency-value-based representation of entities in the real world. A relationship model is extended and applied to define resource models based on relationships defined from a semantics perspective that is based on our proposed existence-level reasoning. Then, a processing framework is proposed that seeks to optimize the searching efficiency of typed resources in terms of IoT data, information and knowledge inside an integrated architecture, and the framework includes Data Graph, Information Graph and Knowledge Graph. We concentrate on improving performance in accessing and processing resources and providing resource security protection by utilizing the cost difference of both type conversions of resources and traversing on resources. Finally, an application scenario is simulated to illustrate the usage of the proposed framework. This scenario shows the feasibility and effectiveness of our method, considering the conversion, traversing and storage costs. Our method can help improve the optimization of services and scheduling resources of multimedia systems.

90 citations


Journal ArticleDOI
TL;DR: An integrated QoS prediction approach which unifies the modeling of multi-dimensional QoS data via multi-linear-algebra based concepts of tensor and enables efficient Web service recommendation for mobile clients via tensor decomposition and reconstruction optimization algorithms is proposed.
Abstract: Advances in mobile Internet technology have enabled the clients of Web services to be able to keep their service sessions alive while they are on the move. Since the services consumed by a mobile client may be different over time due to client location changes, a multi-dimensional spatiotemporal model is necessary for analyzing the service consumption relations. Moreover, competitive Web service recommenders for the mobile clients must be able to predict unknown quality-of-service (QoS) values well by taking into account the target client's service requesting time and location, e.g., performing the prediction via a set of multi-dimensional QoS measures. Most contemporary QoS prediction methods exploit the QoS characteristics for one specific dimension, e.g., time or location, and do not exploit the structural relationships among the multi-dimensional QoS data. This paper proposes an integrated QoS prediction approach which unifies the modeling of multi-dimensional QoS data via multi-linear-algebra based concepts of tensor and enables efficient Web service recommendation for mobile clients via tensor decomposition and reconstruction optimization algorithms. In light of the unavailability of measured multi-dimensional QoS datasets in the public domain, this paper also presents a transformational approach to creating a credible multi-dimensional QoS dataset from a measured taxi usage dataset which contains high dimensional time and space information. Comparative experimental evaluation results show that the proposed QoS prediction approach can result in much better accuracy in recommending Web services than several other representative ones.
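A minimal sketch of the tensor idea follows: a rank-R CP factorization of a (user, service, time) QoS tensor fitted by stochastic gradient descent on observed entries only. The rank, learning rate and loss are illustrative choices, not the paper's actual decomposition and reconstruction optimization algorithms.

# Sketch: rank-R CP factorization of a (user, service, time) QoS tensor,
# fitted by SGD on observed entries only. Hyper-parameters are illustrative;
# this is not the paper's actual optimization algorithm.
import numpy as np

def cp_qos_predict(observed, shape, rank=4, lr=0.01, reg=0.05, epochs=200, seed=0):
    """observed: list of (u, s, t, qos_value); returns factor matrices U, S, T."""
    rng = np.random.default_rng(seed)
    n_u, n_s, n_t = shape
    U = 0.1 * rng.standard_normal((n_u, rank))
    S = 0.1 * rng.standard_normal((n_s, rank))
    T = 0.1 * rng.standard_normal((n_t, rank))
    for _ in range(epochs):
        for u, s, t, y in observed:
            pred = np.sum(U[u] * S[s] * T[t])
            err = pred - y
            gu, gs, gt = err * S[s] * T[t], err * U[u] * T[t], err * U[u] * S[s]
            U[u] -= lr * (gu + reg * U[u])
            S[s] -= lr * (gs + reg * S[s])
            T[t] -= lr * (gt + reg * T[t])
    return U, S, T

if __name__ == "__main__":
    data = [(0, 0, 0, 1.2), (0, 1, 1, 0.4), (1, 0, 1, 0.9), (2, 1, 0, 0.5)]
    U, S, T = cp_qos_predict(data, shape=(3, 2, 2))
    print("predicted QoS for (user 2, service 0, time 1):", float(np.sum(U[2] * S[0] * T[1])))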

89 citations


Journal ArticleDOI
TL;DR: To facilitate interoperability among breeding applications, the public plant Breeding Application Programming Interface (BrAPI) is presented, a standardized web service API specification recognized as critical to a number of important large breeding system initiatives as a foundational technology.
Abstract: Motivation: Modern genomic breeding methods rely heavily on very large amounts of phenotyping and genotyping data, presenting new challenges in effective data management and integration. Recently, the size and complexity of datasets have increased significantly, with the result that data are often stored on multiple systems. As analyses of interest increasingly require aggregation of datasets from diverse sources, data exchange between disparate systems becomes a challenge. Results: To facilitate interoperability among breeding applications, we present the public plant Breeding Application Programming Interface (BrAPI). BrAPI is a standardized web service API specification. The development of BrAPI is a collaborative, community-based initiative involving a growing global community of over a hundred participants representing several dozen institutions and companies. Development of such a standard is recognized as a foundational technology critical to a number of important large breeding system initiatives. The focus of the first version of the API is on providing services for connecting systems and retrieving basic breeding data including germplasm, study, observation, and marker data. A number of BrAPI-enabled applications, termed BrAPPs, have been written that take advantage of the emerging support for BrAPI by many databases.
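A hedged example of consuming a BrAPI endpoint is sketched below; the base URL is a placeholder, and the /brapi/v1/germplasm path and the metadata-plus-result.data response layout follow common BrAPI conventions that may differ between specification versions and servers.

# Sketch: list germplasm records from a BrAPI-compliant server.
# The base URL is a placeholder; the path and response layout follow common
# BrAPI conventions but should be checked against the server's BrAPI version.
import requests

def list_germplasm(base_url, page_size=10):
    resp = requests.get(f"{base_url}/brapi/v1/germplasm",
                        params={"pageSize": page_size}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("result", {}).get("data", [])

if __name__ == "__main__":
    for g in list_germplasm("https://example-breeding-db.org"):
        print(g.get("germplasmDbId"), g.get("germplasmName"))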

85 citations


Journal ArticleDOI
TL;DR: A novel trust assessment framework for the security and reputation of cloud services is proposed that can efficiently and effectively assess the trustworthiness of a cloud service while outperforming other trust assessment methods.
Abstract: The Internet of Things (IoT) provides a new paradigm for the development of heterogeneous and distributed systems, and it has increasingly become a ubiquitous computing service platform. However, due to the lack of sufficient computing and storage resources dedicated to the processing and storage of huge volumes of the IoT data, it tends to adopt a cloud-based architecture to address the issues of resource constraints. Hence, a series of challenging security and trust concerns have arisen in the cloud-based IoT context. To this end, a novel trust assessment framework for the security and reputation of cloud services is proposed. This framework enables the trust evaluation of cloud services in order to ensure the security of the cloud-based IoT context via integrating security- and reputation-based trust assessment methods. The security-based trust assessment method employs the cloud-specific security metrics to evaluate the security of a cloud service. Furthermore, the feedback ratings on the quality of cloud service are exploited in the reputation-based trust assessment method in order to evaluate the reputation of a cloud service. The experiments conducted using a synthesized dataset of security metrics and a real-world web service dataset show that our proposed trust assessment framework can efficiently and effectively assess the trustworthiness of a cloud service while outperforming other trust assessment methods.

Proceedings ArticleDOI
08 Jul 2019
TL;DR: This work considers the edge user allocation problem as an online decision-making and evolvable process and develops a mobility-aware and migration-enabled approach, named MobMig, for allocating users in real time, which achieves a higher user coverage rate and fewer reallocations than traditional approaches.
Abstract: The rapid development of mobile communication technologies prompts the emergence of mobile edge computing (MEC). As the key technology toward 5th generation (5G) wireless networks, it allows mobile users to offload their computational tasks to nearby servers deployed in base stations to alleviate the shortage of mobile resources. Nevertheless, various challenges, especially the edge-user-allocation problem, are yet to be properly addressed. Traditional studies consider this problem as a static global optimization problem where user positions are considered to be time-invariant and user-mobility-related information is not fully exploited. In reality, however, edge users usually have high mobility and time-varying positions, which result in user reallocations among different base stations and degrade user-perceived quality-of-service (QoS). To overcome the above limitations, we consider the edge user allocation problem as an online decision-making and evolvable process and develop a mobility-aware and migration-enabled approach, named MobMig, for allocating users in real time. Experiments based on a real-world MEC dataset clearly demonstrate that our approach achieves a higher user coverage rate and fewer reallocations than traditional approaches.
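A toy version of the mobility-aware allocation idea is sketched below; the coverage radius, capacities and the stickiness rule are illustrative assumptions, not the MobMig algorithm itself: each user keeps its current server while still covered and is otherwise moved to the nearest server with spare capacity.

# Toy mobility-aware edge user allocation (illustrative, not the MobMig algorithm):
# keep a user's current server while it is still in range and has capacity,
# otherwise reassign to the nearest server with spare capacity.
import math

def allocate(users, servers, current, radius=1.0):
    """users: {uid: (x, y)}; servers: {sid: {"pos": (x, y), "cap": int}};
    current: {uid: sid or None}. Returns a new allocation dict (sid or None per user)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    load = {sid: 0 for sid in servers}
    new_alloc = {}
    for uid, pos in users.items():
        sid = current.get(uid)
        if sid and dist(pos, servers[sid]["pos"]) <= radius and load[sid] < servers[sid]["cap"]:
            new_alloc[uid] = sid            # avoid an unnecessary reallocation
        else:
            candidates = [s for s in servers
                          if dist(pos, servers[s]["pos"]) <= radius and load[s] < servers[s]["cap"]]
            new_alloc[uid] = min(candidates, key=lambda s: dist(pos, servers[s]["pos"])) if candidates else None
        if new_alloc[uid]:
            load[new_alloc[uid]] += 1
    return new_alloc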

Journal ArticleDOI
Gang Huang, Xuanzhe Liu, Yun Ma, Xuan Lu, Ying Zhang, Yingfei Xiong
TL;DR: This position article describes an Internetware-oriented approach to designing, developing, and deploying situational mobile Web apps, by synthesizing the resources and services of mobile and cloud using a novel Service-Model-View-Controller software model.
Abstract: Mobile Web applications (a.k.a., Web apps) stand for an important trend for next-generation Internet-based software. Currently popular mobile Web apps need to be adapted to various and ever-changing contexts and personalized user requirements. Based on our over-decade research experiences and practice on the Internetware paradigm, this position article describes an Internetware-oriented approach to designing, developing, and deploying situational mobile Web apps, by synthesizing the resources and services of mobile and cloud. Guided by a novel Service-Model-View-Controller (SMVC) software model, a mobile Web app is organized into a well-defined structure that facilitates adaptation including online/offline data access, computation offloading, user interface optimization, hybrid composition, etc. We provide efficient runtime support spanning mobile and cloud to make mobile Web apps more flexibly adaptive. The proof-of-concept evaluation demonstrates that our approach can benefit end-users with optimized user experience of mobile Web apps.

Journal ArticleDOI
TL;DR: A study of container orchestration platforms that proposes a taxonomy of the mechanisms they use to address scalability, fault tolerance and availability, efficient resource utilization, and request throughput maximization, and applies it to state-of-the-art systems to identify open research challenges.
Abstract: Containers, enabling lightweight environment and performance isolation, fast and flexible deployment, and fine-grained resource sharing, have gained popularity in better application management and deployment in addition to hardware virtualization. They are being widely used by organizations to deploy their increasingly diverse workloads derived from modern-day applications such as web services, big data, and internet of things in either proprietary clusters or private and public cloud data centers. This has led to the emergence of container orchestration platforms, which are designed to manage the deployment of containerized applications in large-scale clusters. These systems are capable of running hundreds of thousands of jobs across thousands of machines. To do so efficiently, they must address several important challenges including scalability, fault tolerance and availability, efficient resource utilization, and request throughput maximization among others. This paper studies these management systems and proposes a taxonomy that identifies different mechanisms that can be used to meet the aforementioned challenges. The proposed classification is then applied to various state-of-the-art systems leading to the identification of open research challenges and gaps in the literature intended as future directions for researchers.

Journal ArticleDOI
TL;DR: A detailed security proof shows that the proposed protocol is provably secure under a random oracle model based on the difficulty of the ring learning with errors problem, and the informal security analysis and experimental implementation show that the protocol is practical for real-world mobile client–server environments.
Abstract: The rapid advances of wireless communication technologies along with the popularity of mobile devices are enabling users to access various web services anywhere and anytime. Due to the openness of wireless communications, security becomes a vital issue. To provide secure communication, many anonymous authentication protocols in mobile client–server environments based on classical mathematical hard assumptions (i.e., discrete logarithm problem or integer factorization problem) have been presented in last two decades. However, both of the two assumptions can be solved by postquantum computers in polynomial time, which means these protocols are never secure in the postquantum era. To mitigate such types of attacks, we propose an ideal lattice-based anonymous authentication protocol for mobile client–server environments. A detailed security proof shows that our proposed protocol is provably secure under a random oracle model based on the difficulty of the ring learning with errors problem. Furthermore, the informal security analysis and experimental implementation show that our proposed protocol is practical for real-world mobile client–server environments.

Journal ArticleDOI
TL;DR: A new GMQL‐based system with enhanced accessibility, portability, scalability and performance in genomic data management, based on the Genomic Data Model and the GenoMetric Query Language is presented.
Abstract: Motivation: We previously proposed a paradigm shift in genomic data management, based on the Genomic Data Model (GDM) for mediating existing data formats and on the GenoMetric Query Language (GMQL) for supporting, at a high level of abstraction, data extraction and the most common data-driven computations required by tertiary data analysis of Next Generation Sequencing datasets. Here, we present a new GMQL-based system with enhanced accessibility, portability, scalability and performance. Results: The new system has a well-designed modular architecture featuring: (i) an intermediate representation supporting many different implementations (including Spark, Flink and SciDB); (ii) a high-level technology-independent repository abstraction, supporting different repository technologies (e.g., local file system, Hadoop File System, database or others); (iii) several system interfaces, including a user-friendly Web-based interface, a Web Service interface, and a programmatic interface for the Python language. Biological use case examples, using public ENCODE, Roadmap Epigenomics and TCGA datasets, demonstrate the relevance of our work. Availability and implementation: The GMQL system is freely available for non-commercial use as an open source project at: http://www.bioinformatics.deib.polimi.it/GMQLsystem/. Supplementary information: Supplementary data are available at Bioinformatics online.

Journal ArticleDOI
TL;DR: This paper introduces a novel automated approach, called SerFinder, to recommend service sets for automatic mashup creation; the task is formulated as a multi-objective combinatorial problem, and the non-dominated sorting genetic algorithm (NSGA-II) is used as a search method to extract an optimal set of services for creating a given mashup.

Proceedings ArticleDOI
08 Jul 2019
TL;DR: Spock is proposed, a new scalable and elastic control system that exploits both VMs and serverless functions to reduce cost and ensure SLO for elastic web services and yields significant cost savings.
Abstract: We are witnessing the emergence of elastic web services which are hosted in public cloud infrastructures. For reasons of cost-effectiveness, it is crucial for the elasticity of these web services to match the dynamically-evolving user demand. Traditional approaches employ clusters of virtual machines (VMs) to dynamically scale resources based on application demand. However, they still face challenges such as higher cost due to over-provisioning or incur service level objective (SLO) violations due to under-provisioning. Motivated by this observation, we propose Spock, a new scalable and elastic control system that exploits both VMs and serverless functions to reduce cost and ensure SLO for elastic web services. We show that under two different scaling policies, Spock reduces SLO violations of queries by up to 74% when compared to VM-based resource procurement schemes. Further, Spock yields significant cost savings, by up to 33% compared to traditional approaches which use only VMs.
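One way to read the VM-plus-serverless idea is sketched below; the capacity figures, prices and policy are invented for illustration and are not Spock's actual controller: a baseline of VMs covers the stable part of predicted demand, and serverless invocations absorb the remainder.

# Illustrative hybrid scaling policy (not Spock's actual controller):
# VMs cover the stable baseline of the predicted request rate, and any
# excess demand is absorbed by serverless function invocations.
from dataclasses import dataclass

VM_CAPACITY_RPS = 200        # requests/s one VM can serve (assumed)
VM_COST_PER_HOUR = 0.10      # assumed prices, for illustration only
FN_COST_PER_REQ = 0.0000004

@dataclass
class Plan:
    vms: int
    serverless_rps: float
    hourly_cost: float

def plan_capacity(predicted_rps: float, baseline_fraction: float = 0.8) -> Plan:
    baseline = predicted_rps * baseline_fraction
    vms = max(1, int(baseline // VM_CAPACITY_RPS))
    overflow = max(0.0, predicted_rps - vms * VM_CAPACITY_RPS)
    cost = vms * VM_COST_PER_HOUR + overflow * 3600 * FN_COST_PER_REQ
    return Plan(vms=vms, serverless_rps=overflow, hourly_cost=round(cost, 4))

if __name__ == "__main__":
    print(plan_capacity(950.0))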

Proceedings ArticleDOI
08 Jul 2019
TL;DR: A novel system named Microscaler is presented to automatically identify the services that need to scale and to scale them to meet the service level agreement (SLA) at an optimal cost for microservice systems, converging to the optimal service scale satisfying the SLA requirements.
Abstract: Recently, the microservice has become a popular architecture for constructing cloud native systems due to its agility. In cloud native systems, autoscaling is a core enabling technique to adapt to workload changes by scaling out/in. However, it becomes a challenging problem in a microservice system, since such a system usually comprises a large number of different microservices with complex interactions. When bursty and unpredictable workloads arrive, it is difficult to pinpoint the services that need to scale and to evaluate how much resource they need. In this paper, we present a novel system named Microscaler to automatically identify the scaling-needed services and scale them to meet the service level agreement (SLA) with an optimal cost for microservice systems. Microscaler collects quality of service (QoS) metrics with the help of the service-mesh-enabled infrastructure. Then, it determines the under-provisioned or over-provisioned services with a novel criterion named service power. By combining an online learning approach and a step-by-step heuristic approach, Microscaler can achieve the optimal service scale satisfying the SLA requirements. The experimental evaluations in a microservice benchmark show that Microscaler converges to the optimal service scale faster than several state-of-the-art methods.
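The sketch below illustrates one simplified reading of the scaling decision; the "service power" formula and the thresholds here are assumptions rather than the paper's definition: each service's observed latency is compared against its SLA to decide whether to add or remove replicas.

# Simplified per-service autoscaling decision (illustrative; the real
# Microscaler "service power" criterion and thresholds differ).
def scaling_decisions(metrics, lower=0.8, upper=1.2):
    """metrics: {service: {"sla_ms": float, "p95_ms": float, "replicas": int}}.
    Returns {service: desired_replicas}."""
    desired = {}
    for svc, m in metrics.items():
        power = m["sla_ms"] / max(m["p95_ms"], 1e-6)   # >1 means headroom, <1 means SLA at risk
        replicas = m["replicas"]
        if power < lower:                              # under-provisioned: scale out
            replicas += 1
        elif power > upper and replicas > 1:           # over-provisioned: scale in
            replicas -= 1
        desired[svc] = replicas
    return desired

if __name__ == "__main__":
    print(scaling_decisions({
        "cart":    {"sla_ms": 200, "p95_ms": 310, "replicas": 2},
        "catalog": {"sla_ms": 200, "p95_ms": 90,  "replicas": 4},
    }))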

Journal ArticleDOI
TL;DR: A tool that enables users to classify scientific literature via text mining-based classification of article abstracts in a web interface, enabling curators and researchers to take advantage of the vast amounts of data and information in the published literature.
Abstract: Scientific data and research results are being published at an unprecedented rate. Many database curators and researchers utilize data and information from the primary literature to populate databases, form hypotheses, or as the basis for analyses or validation of results. These efforts largely rely on manual literature surveys for collection of these data, and while querying the vast amounts of literature using keywords is enabled by repositories such as PubMed, filtering relevant articles from such query results can be a non-trivial and highly time consuming task. We here present a tool that enables users to perform classification of scientific literature by text mining-based classification of article abstracts. BioReader (Biomedical Research Article Distiller) is trained by uploading article corpora for two training categories - e.g. one positive and one negative for content of interest - as well as one corpus of abstracts to be classified and/or a search string to query PubMed for articles. The corpora are submitted as lists of PubMed IDs and the abstracts are automatically downloaded from PubMed, preprocessed, and the unclassified corpus is classified using the best performing classification algorithm out of ten implemented algorithms. BioReader supports data and information collection by implementing text mining-based classification of primary biomedical literature in a web interface, thus enabling curators and researchers to take advantage of the vast amounts of data and information in the published literature. BioReader outperforms existing tools with similar functionalities and expands the features used for mining literature in database curation efforts. The tool is freely available as a web service at http://www.cbs.dtu.dk/services/BioReader
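A minimal sketch of the underlying classification step is shown below using scikit-learn; the TF-IDF plus multinomial naive Bayes pipeline is only a stand-in for whichever of the ten implemented algorithms BioReader actually selects, and the abstracts are assumed to have already been downloaded as plain text.

# Sketch of text mining-based abstract classification in the spirit of BioReader:
# TF-IDF features plus multinomial naive Bayes (one of many possible classifiers;
# BioReader picks the best of ten algorithms). Abstracts are assumed to be
# already downloaded as plain-text strings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def classify_abstracts(positive, negative, unclassified):
    texts = positive + negative
    labels = [1] * len(positive) + [0] * len(negative)
    model = make_pipeline(TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
                          MultinomialNB())
    model.fit(texts, labels)
    return model.predict(unclassified)

if __name__ == "__main__":
    pos = ["crispr knockout screen identifies essential genes", "gene editing in human cells"]
    neg = ["survey of cloud pricing models", "a study of highway traffic flow"]
    print(classify_abstracts(pos, neg, ["genome-wide crispr screening of cancer cell lines"]))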

Journal ArticleDOI
TL;DR: A new service composition scheme based on Deep Reinforcement Learning (DRL) is proposed for adaptive and large-scale service composition; it is better suited to partially observable service environments, making it work well in real-world settings.
Abstract: In a service-oriented system, simple services are combined to form value-added services to meet users’ complex requirements. As a result, service composition has become a common practice in service computing. With the rapid development of web service technology, a massive number of web services with the same functionality but different non-functional attributes (e.g., QoS) are emerging. The increasingly complex user requirements and the large number of services make it significantly challenging to select the optimal services from numerous candidates to achieve an optimal composition. Meanwhile, web services accessible via computer networks are inherently dynamic and the environment of service composition is also complex and unstable. Thus, service composition solutions need to be adaptable to the dynamic environment. To address these key challenges, we propose a new service composition scheme based on Deep Reinforcement Learning (DRL) for adaptive and large-scale service composition. The proposed approach is more suitable for the partially observable service environment, making it work better for real-world settings. A recurrent neural network is adopted to improve reinforcement learning, which can predict the objective function and enhance the ability to express and generalize. In addition, we employ a heuristic behavior selection strategy, in which the state set is divided into hidden and fully observable state sets, so that a targeted selection strategy is applied when facing different types of states. The experimental results justify the effectiveness, efficiency, scalability, and adaptability of our method, showing clear advantages in composition results and efficiency.

Journal ArticleDOI
TL;DR: A lightweight context-aware IoT service architecture named LISA is proposed to support IoT push services efficiently; it reduces the information delivered to the user by selecting only the most relevant items.

Journal ArticleDOI
TL;DR: This work proposes a linear programming approach to the web service composition problem, called ‘LP-WSC’, to select the most efficient service per request in a geographically distributed cloud environment and improve quality-of-service criteria.
Abstract: In recent years, cloud computing has emerged as one of the most popular technologies for accessing and delivering enterprise applications as services to end users over the Internet. Since different enterprises may offer web services with various capabilities, these web services can be combined with one another to provide the complete functionality of a large software application and meet users’ requests. Service composition, the NP-hard optimization problem of combining distributed and heterogeneous web services, is therefore a challenging issue. In this work, we propose a linear programming approach to the web service composition problem, called ‘LP-WSC’, to select the most efficient service per request in a geographically distributed cloud environment and improve the quality-of-service criteria. Finally, we evaluate the effectiveness of our approach under three scenarios with a varying number of atomic services per set. The experimental results indicate that the proposed approach significantly reduces the cost of selecting and composing services and also increases the availability of services and the reliability of the servers compared with other approaches.
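The sketch below shows the general shape of such a formulation with scipy: a relaxed assignment LP that minimizes cost subject to a response-time budget. The variables, constraints and numbers are illustrative and not the actual LP-WSC model.

# Illustrative LP relaxation for service selection (not the actual LP-WSC model):
# choose one candidate service per abstract task, minimizing total cost while
# keeping the summed response time under a budget.
import numpy as np
from scipy.optimize import linprog

# candidates[task] = list of (cost, response_time_ms)
candidates = [
    [(3.0, 120), (5.0, 60)],             # task 0
    [(2.0, 200), (4.0, 90), (6.0, 40)],  # task 1
]
TIME_BUDGET_MS = 180

costs, times, task_of = [], [], []
for t, options in enumerate(candidates):
    for c, rt in options:
        costs.append(c); times.append(rt); task_of.append(t)

n = len(costs)
# equality constraints: the selection weights for each task sum to 1 (relaxed assignment)
A_eq = np.zeros((len(candidates), n))
for j, t in enumerate(task_of):
    A_eq[t, j] = 1.0
b_eq = np.ones(len(candidates))
# inequality constraint: total (relaxed) response time stays within the budget
res = linprog(c=costs, A_ub=[times], b_ub=[TIME_BUDGET_MS],
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n, method="highs")
print("status:", res.message)
print("selection weights:", np.round(res.x, 2))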

Journal ArticleDOI
TL;DR: The comprehensive agent composition framework can be integrated into the J-Park Simulator (JPS) knowledge graph, for the automatic creation of a composite agent that simulates the dispersion of the emissions of a power plant within a selected spatial area.

Journal ArticleDOI
TL;DR: This paper introduces differential privacy, a rigorous and provable privacy model, into the process of collaborative QoS prediction, and presents DPS, a method that disguises a user's observed QoS values by applying differential privacy to the user’s QoS data directly.
Abstract: Collaborative Web services QoS prediction has proved to be an important tool to estimate accurately personalized QoS experienced by individual users, which is beneficial for a variety of operations in the service ecosystem, such as service selection, composition and recommendation. While a number of achievements have been attained on the study of improving the accuracy of collaborative QoS prediction, little work has been done for protecting user privacy in this process. In this paper, we propose a privacy-preserving collaborative QoS prediction framework which can protect the private data of users while retaining the ability of generating accurate QoS prediction. We introduce differential privacy, a rigorous and provable privacy model, into the process of collaborative QoS prediction. We first present DPS, a method that disguises a user’s observed QoS values by applying differential privacy to the user’s QoS data directly. We show how to integrate DPS with two representative collaborative QoS prediction approaches. To improve the utility of the disguised QoS data, we present DPA, another QoS disguising method which first aggregates a user’s QoS data before adding noise to achieve differential privacy. We evaluate the proposed methods by conducting extensive experiments on a real world Web services QoS dataset. Experimental results show our approach is feasible in practice.
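A minimal sketch of the two disguising steps as described (direct noise versus aggregate-then-noise) is given below using the Laplace mechanism; the epsilon and sensitivity values are illustrative, and the integration with a collaborative predictor is omitted.

# Sketch of the two QoS-disguising ideas with the Laplace mechanism:
# DPS adds noise to each observed QoS value directly, DPA aggregates a user's
# values first and then adds noise. Epsilon and sensitivity are illustrative.
import numpy as np

def dps(qos_values, epsilon=1.0, sensitivity=1.0, seed=None):
    rng = np.random.default_rng(seed)
    q = np.asarray(qos_values, dtype=float)
    return q + rng.laplace(0.0, sensitivity / epsilon, size=q.shape)

def dpa(qos_values, epsilon=1.0, sensitivity=1.0, seed=None):
    rng = np.random.default_rng(seed)
    agg = float(np.mean(qos_values))                   # aggregate first
    return agg + rng.laplace(0.0, sensitivity / epsilon)

if __name__ == "__main__":
    observed = [0.42, 0.39, 0.47, 0.40]                # e.g. response times in seconds
    print("DPS (per-value noise):", np.round(dps(observed, seed=1), 3))
    print("DPA (noisy aggregate):", round(dpa(observed, seed=1), 3))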

12 Dec 2019
TL;DR: JSON-LD is defined as a JSON-based format to serialize Linked Data, designed to integrate easily into deployed systems that already use JSON and to provide a smooth upgrade path from JSON to JSON-LD.
Abstract: JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.
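A tiny illustration of the format follows, built and serialized from Python; the vocabulary terms come from schema.org, and the structure (an @context mapping plus @id and @type) reflects common JSON-LD usage rather than anything specific to this specification text.

# Minimal JSON-LD document built as a plain Python dict and serialized with json.
# The @context maps short terms to IRIs (schema.org here); @id and @type make
# the node referenceable as Linked Data.
import json

doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "https://example.org/people/alice",
    "@type": "http://schema.org/Person",
    "name": "Alice",
    "homepage": "https://alice.example.org/",
}

print(json.dumps(doc, indent=2))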

Proceedings ArticleDOI
01 Dec 2019
TL;DR: Comprehend Medical is a stateless, Health Insurance Portability and Accountability Act (HIPAA) eligible Named Entity Recognition (NER) and Relationship Extraction (RE) service launched under Amazon Web Services (AWS) and trained using state-of-the-art deep learning models.
Abstract: Comprehend Medical is a stateless and Health Insurance Portability and Accountability Act (HIPAA) eligible Named Entity Recognition (NER) and Relationship Extraction (RE) service launched under Amazon Web Services (AWS), trained using state-of-the-art deep learning models. Contrary to many existing open source tools, Comprehend Medical is scalable and does not require a steep learning curve, dependencies, pipeline configurations, or installations. Currently, Comprehend Medical performs NER in five medical categories: Anatomy, Medical Condition, Medications, Protected Health Information (PHI) and Treatment, Test and Procedure (TTP). Additionally, the service provides relationship extraction for the detected entities as well as contextual information such as negation and temporality in the form of traits. Comprehend Medical provides two Application Programming Interfaces (API): 1) the NERe API, which returns all the extracted named entities, their traits and the relationships between them, and 2) the PHId API, which returns just the protected health information contained in the text. Furthermore, Comprehend Medical is accessible through the AWS Console and the Java and Python Software Development Kits (SDK), making it easier for both non-developers and developers to use.
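For orientation, a short boto3 sketch of the two kinds of calls described above is given; it assumes AWS credentials and a region are already configured, and the exact response fields should be checked against the service documentation.

# Sketch: calling Comprehend Medical from Python with boto3 (assumes AWS
# credentials/region are configured). detect_entities_v2 corresponds to the
# NERe-style extraction and detect_phi to the PHI-only extraction.
import boto3

client = boto3.client("comprehendmedical")
note = "Patient reports chest pain. Prescribed 81 mg aspirin daily. DOB 03/12/1952."

entities = client.detect_entities_v2(Text=note)
for ent in entities["Entities"]:
    print(ent["Category"], ent["Type"], ent["Text"], round(ent["Score"], 2))

phi = client.detect_phi(Text=note)
print("PHI found:", [e["Text"] for e in phi["Entities"]])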

01 Jan 2019
TL;DR: It is concluded that the materialization of the IoT concept would not be possible without recent technological advances and that the construction of solutions based on IoT concepts is currently feasible, both technically and financially.
Abstract: Although the idea of connecting multiple objects to a common network so that data can be exchanged between them arose in the 1990s, then receiving the name of Internet of Things, recent technological advances have boosted public interest in the subject. In this context, we sought to offer a comprehensive view of this topic through a study of the reference bibliography, exploring theoretical aspects of the nature of IoT, such as the state of the art, the different concepts and definitions that analyze it from different perspectives, the technologies that allow its materialization and the practical applications that arise with this materialization, such as Industry 4.0, the Smart Home, the Smart Car and applications in healthcare. Next, a practical application prototype capable of remotely monitoring a tank based on the principles of cloud computing was developed through the use of a gateway connected to a web service, in order to demonstrate the feasibility of implementing an IoT application in the current industrial scenario. Finally, it is concluded that the materialization of the IoT concept would not be possible without recent technological advances and that the construction of solutions based on IoT concepts is currently feasible, both technically and financially.
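A bare-bones sketch of the gateway-to-web-service step described above is shown below; the endpoint URL and payload fields are hypothetical placeholders rather than the prototype's actual interface.

# Sketch: a gateway pushing a tank-level reading to a cloud web service.
# The endpoint URL and payload fields are hypothetical.
import time
import requests

ENDPOINT = "https://example-iot-backend.org/api/tank-readings"   # placeholder

def push_reading(tank_id: str, level_percent: float) -> int:
    payload = {"tank_id": tank_id, "level_percent": level_percent, "timestamp": int(time.time())}
    resp = requests.post(ENDPOINT, json=payload, timeout=10)
    return resp.status_code

if __name__ == "__main__":
    print(push_reading("tank-01", 63.5))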

Journal ArticleDOI
TL;DR: The integration of Dynamo BIM and web service APIs might be useful for site assessments in the early design stage or even earlier, with implemented use cases including assessments of Access to Quality Transit and Diverse Uses in LEED v4.

Journal ArticleDOI
08 Mar 2019 - Sensors
TL;DR: This research quantitatively evaluates the data quality of a non-conventional, low-cost and fully open system that produces data of appropriate quality for natural resource and risk management.
Abstract: In low-income and developing countries, inadequate weather monitoring systems adversely affect the capacity for managing natural resources and related risks. Low-cost and IoT devices, combined with the wide diffusion of mobile connectivity and open technologies, offer a possible solution to this problem. This research quantitatively evaluates the data quality of a non-conventional, low-cost and fully open system. The proposed novel solution was tested for a duration of 8 months, and the collected observations were compared with a nearby authoritative weather station. The experimental weather station is based on Arduino and transmits data through the 2G General Packet Radio Service (GPRS) to istSOS, a software package for setting up a web service to collect, share and manage observations from sensor networks using the Sensor Observation Service (SOS) standard of the Open Geospatial Consortium (OGC). The results demonstrated that this accessible solution produces data of appropriate quality for natural resource and risk management.