
Showing papers on "Web service" published in 2014


Journal ArticleDOI
TL;DR: The study shows that the research work benefits greatly from such an IIS, not only in IoT-supported data collection but also in Web services and applications based on cloud computing and e-Science platforms, and that the effectiveness of monitoring processes and decision-making is markedly improved.
Abstract: Climate change and environmental monitoring and management have received much attention recently, and an integrated information system (IIS) is considered highly valuable. This paper introduces a novel IIS that combines Internet of Things (IoT), Cloud Computing, Geoinformatics [remote sensing (RS), geographical information system (GIS), and global positioning system (GPS)], and e-Science for environmental monitoring and management, with a case study on regional climate change and its ecological effects. Multi-sensors and Web services were used to collect data and other information for the perception layer; both public networks and private networks were used to access and transport mass data and other information in the network layer. The key technologies and tools include real-time operational database (RODB); extraction-transformation-loading (ETL); on-line analytical processing (OLAP) and relational OLAP (ROLAP); naming, addressing, and profile server (NAPS); application gateway (AG); application software for different platforms and tasks (APPs); IoT application infrastructure (IoT-AI); GIS and e-Science platforms; and representational state transfer/Java database connectivity (RESTful/JDBC). Application Program Interfaces (APIs) were implemented in the middleware layer of the IIS. The application layer provides the functions of storing, organizing, processing, and sharing of data and other information, as well as the functions of applications in environmental monitoring and management. The results from the case study show that there is a visible increasing trend in air temperature in Xinjiang over the last 50 years (1962-2011) and an apparent increasing trend in precipitation since the early 1980s. Furthermore, the correlation between ecological indicators [gross primary production (GPP), net primary production (NPP), and leaf area index (LAI)] and meteorological elements (air temperature and precipitation) shows that water resource availability is the decisive factor for the terrestrial ecosystem in the area. The study shows that the research work benefits greatly from such an IIS, not only in IoT-supported data collection but also in Web services and applications based on cloud computing and e-Science platforms, and that the effectiveness of monitoring processes and decision-making is markedly improved. This paper provides a prototype IIS for environmental monitoring and management, and it also offers a new paradigm for future research and practice, especially in the era of big data and IoT.

443 citations


Journal ArticleDOI
TL;DR: By dividing the research into four main groups based on their problem-solving approaches and identifying the investigated quality of service parameters, intended objectives, and development environments, beneficial results and statistics are obtained that can contribute to future research.
Abstract: The increasing tendency of network service users to use cloud computing encourages web service vendors to supply services that have different functional and nonfunctional (quality of service) features and provide them in a service pool. Based on supply and demand rules and because of the exuberant growth of the services that are offered, cloud service brokers face tough competition against each other in providing quality of service enhancements. Such competition leads to a difficult and complicated process of service selection and composition when supplying composite services in the cloud, which should be considered an NP-hard problem. How to select appropriate services from the service pool, overcome composition restrictions, determine the importance of different quality of service parameters, focus on the dynamic characteristics of the problem, and address rapid changes in the properties of the services and network appear to be among the most important issues that must be investigated and addressed. In this paper, utilizing a systematic literature review, important questions about the research performed to address the above-mentioned problem have been extracted and put forth. Then, by dividing the research into four main groups based on their problem-solving approaches and identifying the investigated quality of service parameters, intended objectives, and development environments, beneficial results and statistics are obtained that can contribute to future research.

367 citations


Journal ArticleDOI
TL;DR: This survey provides a structured and comprehensive overview of the literature on Web Data Extraction, grouping applications into two main classes, the Enterprise level and the Social Web level; the latter makes it possible to gather large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users.
Abstract: Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed for a given domain in other domains.

364 citations


Journal ArticleDOI
TL;DR: This work investigates the QoS of real-world web services through several large-scale evaluations and publicly releases comprehensive web service QoS data sets online for validating various QoS-based techniques and models.
Abstract: Quality of service (QoS) is widely employed for describing nonfunctional characteristics of web services. Although QoS of web services has been investigated intensively in the field of service computing, there is a lack of real-world web service QoS data sets for validating various QoS-based techniques and models. To investigate QoS of real-world web services and to provide reusable research data sets for future research, we conduct several large-scale evaluations on real-world web services. First, addresses of 21,358 web services are obtained from the Internet. Then, three large-scale real-world evaluations are conducted. In our evaluations, more than 30 million real-world web service invocations are conducted on web services in more than 80 countries by users from more than 30 countries. Detailed evaluation results are presented in this paper and comprehensive web service QoS data sets are publicly released online.

352 citations


Proceedings ArticleDOI
06 Mar 2014
TL;DR: An innovative Internet of Things (IoT) architecture that allows real time interaction between mobile clients and smart/legacy things (sensors and actuators) via a wireless gateway is proposed.
Abstract: This paper proposes an innovative Internet of Things (IoT) architecture that allows real-time interaction between mobile clients and smart/legacy things (sensors and actuators) via a wireless gateway. The novel services provided are: (i) dynamic discovery of M2M devices and endpoints by the clients, (ii) management of connections to non-smart things over Modbus, (iii) association of metadata with sensor and actuator measurements using the Sensor Markup Language (SenML) representation and (iv) extension of the current capabilities of SenML to support actuator control from mobile clients. These clients are equipped with an end-user application that initiates the discovery phase to learn about the devices and endpoints (sensors and actuators) connected to the wireless gateway. The user can then select desired sensors to receive and display sensor metadata and control actuators from the mobile device. Prototypes of the mobile application and the wireless gateway have been implemented to validate the entire architecture. The gateway is implemented using RESTful web services and currently runs on Google App Engine. Two real-life scenarios that can be implemented using the architecture are discussed. Finally, overall contributions and directions for future research are summarized.
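To make the SenML-based representation concrete, here is a minimal sketch of a client posting a sensor reading and an actuator state to a gateway as a SenML-style JSON payload. The endpoint URL, device names, and exact field layout are illustrative assumptions, not details from the paper.

```python
import json
import requests

# A SenML-style payload carrying one sensor reading and one actuator state.
# Field names follow the flat SenML JSON layout (bn/n/v/u); the endpoint,
# device names and unit are illustrative assumptions, not taken from the paper.
payload = [
    {"bn": "urn:dev:gateway-01/", "n": "room1/temperature", "v": 22.5, "u": "Cel"},
    {"n": "room1/fan/state", "v": 1},  # actuator state exposed next to sensor data
]

resp = requests.post(
    "http://gateway.example.local/senml",            # hypothetical gateway endpoint
    data=json.dumps(payload),
    headers={"Content-Type": "application/senml+json"},
    timeout=10,
)
print(resp.status_code)
```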

221 citations


Journal ArticleDOI
TL;DR: Molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing, is introduced and details about how the grid system efficiently delivers high-quality phylogenetic results are provided.
Abstract: We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a GARLI 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The GARLI web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the GARLI web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. (GARLI, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.)

203 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel collaborative filtering-based Web service recommender system to help users select services with optimal Quality-of-Service (QoS) performance, and achieves considerable improvement on the recommendation accuracy.
Abstract: Web services are integrated software components for the support of interoperable machine-to-machine interaction over a network. Web services have been widely employed for building service-oriented applications in both industry and academia in recent years. The number of publicly available Web services is steadily increasing on the Internet. However, this proliferation makes it hard for a user to select a proper Web service among a large number of service candidates. An inappropriate service selection may cause many problems (e.g., ill-suited performance) to the resulting applications. In this paper, we propose a novel collaborative filtering-based Web service recommender system to help users select services with optimal Quality-of-Service (QoS) performance. Our recommender system employs location information and QoS values to cluster users and services, and makes personalized service recommendations for users based on the clustering results. Compared with existing service recommendation methods, our approach achieves considerable improvement in recommendation accuracy. Comprehensive experiments are conducted involving more than 1.5 million QoS records of real-world Web services to demonstrate the effectiveness of our approach.
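A minimal sketch of the location-plus-QoS clustering idea: group observed QoS values by a coarse user-location cluster and predict a missing value from the cluster average, falling back to the global average. The record layout, country-level granularity, and fallback rule are assumptions for illustration; the paper's clustering and prediction steps are more elaborate.

```python
from collections import defaultdict
import statistics

# Toy invocation records: (user_id, user_country, service_id, response_time_ms).
# The schema and the country-level clustering granularity are assumptions; the
# paper clusters on both location information and QoS values.
records = [
    ("u1", "DE", "s1", 120.0), ("u2", "DE", "s1", 130.0),
    ("u3", "US", "s1", 480.0), ("u4", "US", "s2", 90.0),
]

# Group observed QoS values by (user-location cluster, service).
cluster_qos = defaultdict(list)
for user, country, service, rt in records:
    cluster_qos[(country, service)].append(rt)

def predict(country, service):
    """Predict QoS for a user in `country` calling `service` from cluster averages."""
    values = cluster_qos.get((country, service))
    if values:
        return statistics.mean(values)
    # Fall back to the global average when the cluster has no observation.
    return statistics.mean(rt for *_, rt in records)

print(predict("DE", "s1"))  # ~125.0: estimate for a new German user of s1
print(predict("FR", "s2"))  # no cluster data: falls back to the global average
```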

187 citations


Book
28 Apr 2014
TL;DR: Flask, as presented in this book, is a Python-based microframework that gives developers full creative control of their web applications; rather than imposing development guidelines, it leaves the choice of extensions up to the developer.
Abstract: Take full creative control of your web applications with Flask, the Python-based microframework. With this hands-on book, you'll learn Flask from the ground up by developing a complete social blogging application step-by-step. Author Miguel Grinberg walks you through the framework's core functionality, and shows you how to extend applications with advanced web techniques such as database migration and web service communication. Rather than impose development guidelines as other frameworks do, Flask leaves the business of extensions up to you. If you have Python experience, this book shows you how to take advantage of that creative freedom. Learn Flask's basic application structure and write an example app; work with must-have components: templates, databases, web forms, and email support; use packages and modules to structure a large application that scales; implement user authentication, roles, and profiles; build a blogging feature by reusing templates, paginating item lists, and working with rich text; use a Flask-based RESTful API to expose app functionality to smartphones, tablets, and other third-party clients; learn how to run unit tests and enhance application performance; and explore options for deploying your web app to a production server.
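For flavor, here is a minimal generic Flask sketch (not code from the book) with one HTML route and one JSON endpoint of the kind the RESTful API chapters build on.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the database layer the book builds with extensions.
posts = [{"id": 1, "title": "Hello, Flask"}]

@app.route("/")
def index():
    # A real application would render a Jinja2 template here (render_template).
    return "<h1>My blog</h1>"

@app.route("/api/posts")
def list_posts():
    # Minimal RESTful JSON endpoint of the kind exposed to third-party clients.
    return jsonify(posts=posts)

if __name__ == "__main__":
    app.run(debug=True)
```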

176 citations


Journal ArticleDOI
TL;DR: A social network-based service recommendation method with trust enhancement, known as RelevantTrustWalker, uses matrix factorization to assess the degree of trust between users in a social network and an extended random walk algorithm to obtain recommendation results.
Abstract: Given the increasing applications of service computing and cloud computing, a large number of Web services are deployed on the Internet, triggering research on Web service recommendation. Beyond service QoS, the use of user feedback is becoming a trend in service recommendation. As in traditional recommender systems, sparsity, cold start, and trustworthiness are major issues that challenge similarity-based approaches to service recommendation. Meanwhile, with the prevalence of social networks, people now interact actively with various services and other users, making a huge volume of data available, such as service information, user-service ratings, interaction logs, and user relationships. This work is therefore motivated by the question of how to combine the trust relationships in social networks with user feedback for service recommendation. In this paper, we propose a social network-based service recommendation method with trust enhancement known as RelevantTrustWalker. First, a matrix factorization method is utilized to assess the degree of trust between users in the social network. Next, an extended random walk algorithm is proposed to obtain recommendation results. To evaluate the accuracy of the algorithm, experiments on a real-world dataset are conducted, and the experimental results indicate that both the quality of the recommendations and the speed of the method are improved compared with existing algorithms.
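The random-walk half of the method can be sketched as follows: walks start at the target user, hop to trusted neighbours with probability proportional to trust weights (which the paper estimates via matrix factorization), and collect ratings for the target service. The toy data, restart probability, and simple averaging rule are assumptions for illustration, not the paper's exact algorithm.

```python
import random

# Trust weights between users (normally estimated by matrix factorization).
trust = {"alice": {"bob": 0.8, "carol": 0.2}, "bob": {"carol": 1.0}, "carol": {}}
# Known user -> service ratings.
ratings = {"bob": {"svc1": 4.0}, "carol": {"svc1": 2.0}}

def trust_walk(user, service, restart=0.15, walks=2000):
    """Average the ratings reached by trust-based random walks (illustrative sketch)."""
    collected = []
    for _ in range(walks):
        current = user
        while True:
            if service in ratings.get(current, {}):
                collected.append(ratings[current][service])
                break
            neighbours = trust.get(current)
            if not neighbours or random.random() < restart:
                break  # stop the walk: no trusted neighbour or restart triggered
            users, weights = zip(*neighbours.items())
            current = random.choices(users, weights=weights)[0]
    return sum(collected) / len(collected) if collected else None

print(round(trust_walk("alice", "svc1"), 2))  # closer to bob's rating than carol's
```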

159 citations


Proceedings ArticleDOI
01 Sep 2014
TL;DR: An application called 'ECG Android App' is built that provides the end user with visualization of their electrocardiogram (ECG) waves and background data logging, supported by an infrastructure consisting of various technologies: an IOIO microcontroller, signal processing, communication protocols, secure and efficient mechanisms for large file transfer, a database management system, and a centralized cloud.
Abstract: The focus of this paper is building an Android-based mobile application for the healthcare domain that uses the ideas of the Internet of Things (IoT) and cloud computing. We have built an application called 'ECG Android App' which provides the end user with visualization of their electrocardiogram (ECG) waves and data logging functionality in the background. The logged data can be uploaded to the user's private centralized cloud or a specific medical cloud, which keeps a record of all the monitored data and from which it can be retrieved for analysis by medical personnel. Though the idea of building a medical application using IoT and cloud techniques is not totally new, there is a lack of empirical studies on building such a system. This paper reviews the fundamental concepts of IoT. Further, the paper presents an infrastructure for the healthcare domain, which consists of various technologies: an IOIO microcontroller, signal processing, communication protocols, secure and efficient mechanisms for large file transfer, a database management system, and the centralized cloud. The paper emphasizes the system and software architecture and design, which are essential to IoT- and cloud-based medical applications. The infrastructure presented in the paper can also be applied to other healthcare domains. It concludes with recommendations and possible extensions of the solution in the healthcare domain.

156 citations


Proceedings ArticleDOI
01 Aug 2014
TL;DR: DKPro Core is a broad-coverage component collection that integrates a wide range of third-party NLP tools and makes them interoperable; it supports sharing pipelines with other researchers, embedding NLP pipelines in applications, and use on high-performance computing clusters.
Abstract: Due to the diversity of natural language processing (NLP) tools and resources, combining them into processing pipelines is an important issue, and sharing these pipelines with others remains a problem. We present DKPro Core, a broad-coverage component collection integrating a wide range of third-party NLP tools and making them interoperable. Contrary to other recent endeavors that rely heavily on web services, our collection consists only of portable components distributed via a repository, making it particularly interesting with respect to sharing pipelines with other researchers, embedding NLP pipelines in applications, and use on high-performance computing clusters. Our collection is augmented by a novel concept for automatically selecting and acquiring resources required by the components at runtime from a repository. Based on these contributions, we demonstrate a way to describe a pipeline such that all required software and resources can be automatically obtained, making it easy to share with others, e.g. in order to reproduce results or as examples in teaching, documentation, or publications.

Journal ArticleDOI
TL;DR: This work analyses the Semantic Web of Things (SWoT), presenting its different levels to offer an IoT convergence, and analyses the trends for capillary networks and for cellular networks with standards such as IPSO, ZigBee, OMA, and the oneM2M initiative.
Abstract: The Internet of Things (IoT) is being applied in stovepipe solutions, since it presents a semantic description limited to a specific domain. The IoT needs to be pushed towards a more open, interoperable and collaborative form. The first step has been the Web of Things (WoT). The WoT evolves the IoT with a common stack based on web services. But even when homogeneous access is reached through web protocols, a common understanding is not yet acquired. For this purpose, the Semantic Web of Things (SWoT) is proposed for the integration of the Semantic Web with the WoT. This work analyses the SWoT, presenting its different levels to offer an IoT convergence. Specifically, we analyse the trends for capillary networks and for cellular networks with standards such as IPSO, ZigBee, OMA, and the oneM2M initiative. This work also analyses the impact of semantic annotations/metadata on the performance of the resources.

Journal ArticleDOI
TL;DR: A solution based on a semantically rich variability model supports the dynamic adaptation of service compositions; its possible configurations are verified at design time using Constraint Programming.

Journal ArticleDOI
TL;DR: This paper proposes a hybrid Web service tag recommendation strategy, named WSTRec, which employs tag co-occurrence, tag mining, and semantic relevance measurement for tag recommendation.
Abstract: Clustering Web services would greatly boost the ability of Web service search engines to retrieve relevant services. The performance of traditional Web Service Description Language (WSDL)-based Web service clustering is unsatisfactory because it relies on a single data source. Recently, Web service search engines such as Seekda! allow users to manually annotate Web services using tags, which describe the functions of Web services or provide additional contextual and semantic information. In this paper, we cluster Web services by utilizing both WSDL documents and tags. To handle the clustering performance limitation caused by uneven tag distribution and noisy tags, we propose a hybrid Web service tag recommendation strategy, named WSTRec, which employs tag co-occurrence, tag mining, and semantic relevance measurement for tag recommendation. Extensive experiments are conducted based on our real-world dataset, which consists of 15,968 Web services. The experimental results demonstrate the effectiveness of the proposed service clustering and tag recommendation strategies. Specifically, compared with traditional WSDL-based Web service clustering approaches, the proposed approach produces gains in both precision and recall of up to 14% in most cases.
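The tag co-occurrence part of such a strategy can be sketched simply: recommend tags that frequently co-occur with the tags a service already has. The toy data and scoring below are assumptions; WSTRec additionally uses tag mining and semantic relevance measurement, which are omitted here.

```python
from collections import Counter
from itertools import combinations

# Toy tag sets of already-annotated services (illustrative data).
tagged_services = [
    {"rest", "payment", "finance"},
    {"rest", "payment"},
    {"soap", "weather"},
]

# Count how often each ordered pair of tags appears together across services.
cooccur = Counter()
for tags in tagged_services:
    for a, b in combinations(sorted(tags), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def recommend(existing_tags, top_k=3):
    """Score candidate tags by co-occurrence with the service's existing tags."""
    scores = Counter()
    for tag in existing_tags:
        for (a, b), count in cooccur.items():
            if a == tag and b not in existing_tags:
                scores[b] += count
    return [tag for tag, _ in scores.most_common(top_k)]

print(recommend({"payment"}))  # -> ['rest', 'finance']
```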

Journal ArticleDOI
TL;DR: A novel method to efficiently provide better Web-page recommendation through semantic-enhancement by integrating the domain and Web usage knowledge of a website is proposed.
Abstract: Web-page recommendation plays an important role in intelligent Web systems. Useful knowledge discovery from Web usage data and satisfactory knowledge representation for effective Web-page recommendations are crucial and challenging. This paper proposes a novel method to efficiently provide better Web-page recommendation through semantic enhancement by integrating the domain and Web usage knowledge of a website. Two new models are proposed to represent the domain knowledge. The first model uses an ontology to represent the domain knowledge. The second model uses an automatically generated semantic network to represent domain terms, Web-pages, and the relations between them. Another new model, the conceptual prediction model, is proposed to automatically generate a semantic network of the semantic Web usage knowledge, which is the integration of domain knowledge and Web usage knowledge. A number of effective queries have been developed to query these knowledge bases. Based on these queries, a set of recommendation strategies have been proposed to generate Web-page candidates. The recommendation results have been compared with the results obtained from an advanced existing Web Usage Mining (WUM) method. The experimental results demonstrate that the proposed method produces significantly higher performance than the WUM method.

Proceedings ArticleDOI
07 Apr 2014
TL;DR: A Temporal QoS-aware Web Service Recommendation Framework is presented to predict missing QoS values under various temporal contexts, and a Non-negative Tensor Factorization (NTF) algorithm is proposed that is able to deal with the triadic relations of the user-service-time model.
Abstract: With the rapid growth of Web services in the past decade, the issue of QoS-aware Web service recommendation is becoming more and more critical. Since collecting Web service QoS information requires much time and effort, and is sometimes even impractical, service QoS values are often missing. There has been some work on predicting missing QoS values using traditional collaborative filtering methods based on a static user-service model. However, QoS values are highly related to the invocation context (e.g., QoS values vary at different times). By considering time as a third, dynamic context dimension, a Temporal QoS-aware Web Service Recommendation Framework is presented to predict missing QoS values under various temporal contexts. Further, we formalize this problem as a generalized tensor factorization model and propose a Non-negative Tensor Factorization (NTF) algorithm which is able to deal with the triadic relations of the user-service-time model. Extensive experiments are conducted based on our real-world Web service QoS dataset collected on PlanetLab, which comprises service invocation response-time and throughput values from 343 users on 5,817 Web services at 32 time periods. The comprehensive experimental analysis shows that our approach achieves better prediction accuracy than other approaches.
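The user-service-time model predicts a missing QoS value as an inner product of three latent factor vectors. The sketch below shows only that prediction step, with random non-negative factors as placeholders; the paper learns the factors with its NTF algorithm, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_services, n_times, rank = 5, 8, 4, 3

# Non-negative latent factors (in the paper these are learned by NTF;
# here they are random placeholders just to show the prediction model).
U = rng.random((n_users, rank))
S = rng.random((n_services, rank))
T = rng.random((n_times, rank))

def predict(u, s, t):
    """Predicted QoS value: sum over rank of U[u,r] * S[s,r] * T[t,r]."""
    return float(np.sum(U[u] * S[s] * T[t]))

# Reconstruct the full user-service-time tensor with one einsum call.
Q_hat = np.einsum("ur,sr,tr->ust", U, S, T)
assert np.isclose(Q_hat[1, 2, 3], predict(1, 2, 3))
print(Q_hat.shape)  # (5, 8, 4)
```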

Journal ArticleDOI
TL;DR: This work describes how UCSF Chimera, a program for the interactive visualization and analysis of molecular structures and related data, is enhanced through the addition of several web services, and illustrates their use with an example workflow that interleaves these services with interactive manipulation of molecular sequences and structures.
Abstract: Integrating access to web services with desktop applications allows for an expanded set of application features, including performing computationally intensive tasks and convenient searches of databases. We describe how we have enhanced UCSF Chimera (http://www.rbvi.ucsf.edu/chimera/), a program for the interactive visualization and analysis of molecular structures and related data, through the addition of several web services (http://www.rbvi.ucsf.edu/chimera/docs/webservices.html). By streamlining access to web services, including the entire job submission, monitoring and retrieval process, Chimera makes it simpler for users to focus on their science projects rather than data manipulation. Chimera uses Opal, a toolkit for wrapping scientific applications as web services, to provide scalable and transparent access to several popular software packages. We illustrate Chimera's use of web services with an example workflow that interleaves use of these services with interactive manipulation of molecular sequences and structures, and we provide an example Python program to demonstrate how easily Opal-based web services can be accessed from within an application. Web server availability: http://webservices.rbvi.ucsf.edu/opal2/dashboard?command=serviceList.

Journal ArticleDOI
TL;DR: This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities, and describes the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to support gene set enrichment analysis.
Abstract: Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in a position to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is on showing how straightforward it is today to set up a data scraping pipeline, with minimal programming effort, and answer a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well known in clinical microbiology and similar domains, do not yet offer programmatic interfaces. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to support gene set enrichment analysis.
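A minimal scraping pipeline of the kind the review describes typically combines an HTTP client with an HTML parser. The sketch below uses requests and BeautifulSoup against a placeholder URL and a hypothetical CSS selector; neither refers to the data sources discussed in the article, and a real pipeline should respect the target site's terms of use.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder target; a real pipeline would point at the Web database of interest.
URL = "https://example.org/"

response = requests.get(URL, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# The CSS selector is hypothetical: adapt it to the structure of the scraped page.
rows = soup.select("table#results tr")
records = []
for row in rows:
    cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
    if cells:
        records.append(cells)

print(f"Extracted {len(records)} rows")
```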

Journal ArticleDOI
TL;DR: This paper presents a building automation system adopting the SOA paradigm with devices implemented using the Devices Profile for Web Services (DPWS), in which context information is collected, processed, and sent to a composition engine that coordinates appropriate devices/services based on the context, composition plan, and predefined policy rules.
Abstract: Service-oriented architecture (SOA) is realized by independent, standardized, and self-describing units known as services. This architecture has been widely used and verified for automatic, dynamic, and self-configuring distributed systems such as building automation. This paper presents a building automation system adopting the SOA paradigm with devices implemented using the Devices Profile for Web Services (DPWS), in which context information is collected, processed, and sent to a composition engine to coordinate appropriate devices/services based on the context, composition plan, and predefined policy rules. A six-phase composition process is proposed to carry out the task. In addition, two other components are designed to support the composition process: a building ontology as a schema for representing semantic data, and a composition plan description language to describe context-based composite services in the form of composition plans. A prototype consisting of a DPWSim simulator and SamBAS is developed to illustrate and test the proposed idea. Comparative analysis and experimental results demonstrate the feasibility and scalability of the system.

Journal ArticleDOI
TL;DR: A model (or methodology) for software component reuse that facilitates the semi-automatic reuse of web services in a cloud computing environment, leading to business process composition.
Abstract: This paper proposes a novel model for the automatic construction of business processes called IPCASCI (Intelligent business Processes Composition based on multi-Agent systems, Semantics and Cloud Integration). The software development industry requires the agile construction of new products able to adapt to the emerging needs of a changing market. In this context, we present a method of software component reuse as a model (or methodology) that facilitates the semi-automatic reuse of web services in a cloud computing environment, leading to business process composition. The proposal is based on web service technology, including: (i) automatic discovery of web services; (ii) semantic description of web services; (iii) automatic composition of existing web services to generate new ones; (iv) automatic invocation of web services. We have implemented the proposal as a tool and applied it to a real case study. The evaluation of the case study and its results provide evidence of the reliability of IPCASCI.

Journal ArticleDOI
TL;DR: A semantically-enhanced platform is presented that assists in discovering the cloud services that best match user needs and outperforms state-of-the-art solutions in similarly broad domains.
Abstract: Cloud computing is a technological paradigm that permits computing services to be offered over the Internet. This new service model is closely related to previous well-known distributed computing initiatives such as Web services and grid computing. In the current socio-economic climate, the affordability of cloud computing has made it one of the most popular recent innovations. This has led to the availability of more and more cloud services, as a consequence of which it is becoming increasingly difficult for service consumers to find and access those cloud services that fulfil their requirements. In this paper, we present a semantically-enhanced platform that will assist in the process of discovering the cloud services that best match user needs. This fully-fledged system encompasses two basic functions: the creation of a repository with the semantic description of cloud services and the search for services that accomplish the required expectations. The cloud service's semantic repository is generated by means of an automatic tool that first annotates the cloud service descriptions with semantic content and then creates a semantic vector for each service. The comprehensive evaluation of the tool in the ICT domain has led to very promising results that outperform state-of-the-art solutions in similarly broad domains.
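The matching step of such a platform can be sketched as building a vector per service description and ranking services by cosine similarity against the user's query vector. Plain TF-IDF vectors below stand in for the semantically annotated vectors the described tool actually builds; the data and this substitution are assumptions made purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Textual descriptions of cloud services (toy data; the platform would use
# semantically annotated descriptions rather than raw text).
services = {
    "svcA": "object storage with REST API and encryption at rest",
    "svcB": "managed relational database service with automatic backups",
    "svcC": "virtual machines with GPU acceleration for machine learning",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(services.values())

def search(query, top_k=2):
    """Rank services by cosine similarity between the query and service vectors."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, matrix).ravel()
    ranked = sorted(zip(services.keys(), scores), key=lambda x: -x[1])
    return ranked[:top_k]

print(search("encrypted file storage over REST"))
```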

Journal ArticleDOI
TL;DR: InterMine is a biological data warehousing system providing extensive automatically generated and configurable RESTful web services that underpin the web interface and can be re-used in many other applications.
Abstract: InterMine (www.intermine.org) is a biological data warehousing system providing extensive automatically generated and configurable RESTful web services that underpin the web interface and can be re-used in many other applications: to find and filter data; to export it in a flexible and structured way; to upload, use, manipulate and analyze lists; to provide services for flexible retrieval of sequence segments; and to support other statistical and analysis tools. Here we describe these features and discuss how they can be used separately or in combination to support integrative and comparative analysis.
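As a sketch of how these web services can be consumed programmatically, the snippet below uses the InterMine Python client against a public mine. The package name, mine URL, class name, view paths, and constraint are assumptions to adapt to the mine and client version in use.

```python
# Querying an InterMine instance through its web services with the InterMine
# Python client ("intermine" package). The mine URL, class name, view paths and
# constraint below are illustrative assumptions; adapt them to a specific mine.
from intermine.webservice import Service

service = Service("https://www.flymine.org/flymine/service")

query = service.new_query("Gene")
query.add_view("Gene.primaryIdentifier", "Gene.symbol", "Gene.organism.name")
query.add_constraint("Gene.symbol", "=", "zen")

# Print the first few result rows returned by the RESTful query service.
for i, row in enumerate(query.rows()):
    if i >= 5:
        break
    print(row["Gene.primaryIdentifier"], row["Gene.symbol"])
```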

Proceedings ArticleDOI
29 Dec 2014
TL;DR: Based on an analysis of the most common web services, the paper defines the security needs and proposes a federated model for designing an architecture for the secure exchange of services in the IoT paradigm, an approach intended to go beyond conventional security solutions by deploying a federated architecture for dynamic prevention, detection, diagnosis, isolation, and countermeasures against cyber attacks.
Abstract: The Internet of Things (IoT) refers to the capability to connect, communicate with and remotely manage a large number of networked, automated devices via the Internet. IoT is becoming part of daily life and aims to extend pervasive communication and networking anytime, anywhere, with any device. In this context, security requirements and architectures must be properly formulated and implemented in order to enforce security policies throughout their life-cycle. This paper provides a survey and analysis of security in the area of IoT, introducing an approach intended to go beyond conventional security solutions and deploy a federated architecture for dynamic prevention, detection, diagnosis, isolation, and countermeasures against cyber attacks. Based on an analysis of the most common web services, the paper defines the security needs and proposes a federated model for designing an architecture for the secure exchange of services in the IoT paradigm.

Journal ArticleDOI
TL;DR: The results of a structured literature survey of Semantic Web technologies in DSS are presented, together with the results of interviews with DSS practitioners, to provide an overview of current research as well as open research areas, trends and new directions.
Abstract: The Semantic Web shares many goals with Decision Support Systems (DSS), e.g., being able to precisely interpret information, in order to deliver relevant, reliable and accurate information to a user when and where it is needed. DSS have in addition more specific goals, since the information need is targeted towards making a particular decision, e.g., making a plan or reacting to a certain situation. When surveying DSS literature, we discover applications ranging from Business Intelligence, via general purpose social networking and collaboration support, Information Retrieval and Knowledge Management, to situation awareness, emergency management, and simulation systems. The unifying element is primarily the purpose of the systems, and their focus on information management and provision, rather than the specific technologies they employ to reach these goals. Semantic Web technologies have been used in DSS during the past decade to solve a number of different tasks, such as information integration and sharing, web service annotation and discovery, and knowledge representation and reasoning. In this survey article, we present the results of a structured literature survey of Semantic Web technologies in DSS, together with the results of interviews with DSS researchers and developers both in industry and research organizations outside the university. The literature survey has been conducted using a structured method, where papers are selected from the publisher databases of some of the most prominent conferences and journals in both fields (Semantic Web and DSS), based on sets of relevant keywords representing the intersection of the two fields. Our main contribution is to analyze the landscape of semantic technologies in DSS, and provide an overview of current research as well as open research areas, trends and new directions. An added value is the conclusions drawn from interviews with DSS practitioners, which give an additional perspective on the potential of Semantic Web technologies in this field; including scenarios for DSS, and requirements for Semantic Web technologies that may attempt to support those scenarios.

Posted Content
TL;DR: XRay is developed, the first fine-grained, robust, and scalable personal data tracking system for the Web, which achieves high precision and recall by correlating data from a surprisingly small number of extra accounts.
Abstract: Today's Web services - such as Google, Amazon, and Facebook - leverage user data for varied purposes, including personalizing recommendations, targeting advertisements, and adjusting prices. At present, users have little insight into how their data is being used. Hence, they cannot make informed choices about the services they choose. To increase transparency, we developed XRay, the first fine-grained, robust, and scalable personal data tracking system for the Web. XRay predicts which data in an arbitrary Web account (such as emails, searches, or viewed products) is being used to target which outputs (such as ads, recommended products, or prices). XRay's core functions are service agnostic and easy to instantiate for new services, and they can track data within and across services. To make predictions independent of the audited service, XRay relies on the following insight: by comparing outputs from different accounts with similar, but not identical, subsets of data, one can pinpoint targeting through correlation. We show both theoretically, and through experiments on Gmail, Amazon, and YouTube, that XRay achieves high precision and recall by correlating data from a surprisingly small number of extra accounts.
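XRay's core insight, that targeting reveals itself as correlation between data subsets and outputs across shadow accounts, can be illustrated with a toy differential-correlation sketch. The scoring rule below is a deliberate simplification of XRay's actual correlation model, and the data is invented for illustration.

```python
# Toy differential correlation: several shadow accounts hold different subsets
# of the user's data items; an output (e.g. an ad) is attributed to the data
# item whose presence best predicts where the output appears.
accounts = {
    "acct1": {"data": {"email_flights", "email_shoes"}, "saw_ad": True},
    "acct2": {"data": {"email_flights"},                "saw_ad": True},
    "acct3": {"data": {"email_shoes"},                  "saw_ad": False},
    "acct4": {"data": set(),                            "saw_ad": False},
}

def association_score(item):
    """Fraction of accounts where the presence of `item` matches the presence of the ad."""
    matches = sum(
        1 for acct in accounts.values() if (item in acct["data"]) == acct["saw_ad"]
    )
    return matches / len(accounts)

candidates = set().union(*(a["data"] for a in accounts.values()))
best = max(candidates, key=association_score)
print(best, association_score(best))  # email_flights explains the ad placement best
```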

Journal ArticleDOI
TL;DR: A dedicated modeling language and an application are presented, showing, first, how the modeling process can be eased and, second, how the semantic gap between modeling logic and the domain can be reduced by means of vertical multiformalism modeling.

Journal ArticleDOI
TL;DR: This paper proposes a novel collaborative location-based regularization framework (Colbar) to address the problem of personalized QoS prediction; experiments show that Colbar outperforms other state-of-the-art approaches in prediction accuracy under various criteria.

Proceedings ArticleDOI
27 Jun 2014
TL;DR: This paper designs a location-based hierarchical matrix factorization (HMF) method to perform personalized QoS prediction, whereby effective service recommendation can be made and results show that the HMF method achieves higher prediction accuracy than the state-of-the-art methods.
Abstract: Web service recommendation is of great importance when users face a large number of functionally equivalent candidate services. To recommend Web services that best fit a user's need, QoS values that characterize the non-functional properties of those candidate services are in demand. But in reality, the QoS information of Web services is not easy to obtain, because only limited historical invocation records exist. To tackle this challenge, a number of QoS prediction methods have been proposed in recent literature, but they still fall short in prediction accuracy. In this paper, we design a location-based hierarchical matrix factorization (HMF) method to perform personalized QoS prediction, whereby effective service recommendations can be made. We cluster users and services into several user-service groups based on their location information, each of which contains a small set of users and services. To better characterize the QoS data, our HMF model is trained in a hierarchical way by using the global QoS matrix as well as several location-based local QoS matrices generated from user-service clusters. Then the missing QoS values can be predicted by compactly combining the results from local matrix factorization and global matrix factorization. Comprehensive experiments are conducted on a real-world Web service QoS dataset with 1,974,675 real Web service invocation records. The experimental results show that our HMF method achieves higher prediction accuracy than the state-of-the-art methods.
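The combining step can be sketched as blending a global low-rank prediction with a local one computed on the user's location cluster. In the sketch below, truncated SVD stands in for the trained matrix factorization, and the cluster membership and blend weight are assumed values; the paper's hierarchical training procedure is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def low_rank_predict(matrix, rank=2):
    """Rank-`rank` reconstruction via truncated SVD (stand-in for trained MF)."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# Global QoS matrix (users x services) and one location-based local block.
global_qos = rng.random((8, 6))
local_users = [0, 1, 2]          # users in the same location cluster (assumed)
local_qos = global_qos[local_users, :]

global_pred = low_rank_predict(global_qos)
local_pred = low_rank_predict(local_qos)

alpha = 0.5  # blend weight between local and global predictions (assumed)
user, service = 1, 4
combined = alpha * local_pred[local_users.index(user), service] \
           + (1 - alpha) * global_pred[user, service]
print(round(float(combined), 3))
```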

Book ChapterDOI
01 Jan 2014
TL;DR: A comprehensive review of the existing proposals for service selection, and a comparative analysis of the optimization and automated negotiation-based approaches are provided.
Abstract: Web service composition (WSC) offers a range of solutions for rapid creation of complex applications in advanced service-oriented systems by facilitating the composition of already existing concrete web services. One critical challenge in WSC is the dynamic selection of concrete services to be bound to the abstract composite service. In this paper, we provide a comprehensive review of the existing proposals for service selection, and a comparative analysis of the optimization and automated negotiation-based approaches.

Journal ArticleDOI
TL;DR: This article describes recent developments of Europe PMC, the leading database for life science literature, which now offers RESTful web services to access both articles and grants, powerful search tools such as citation-count sort order and data citation features, a service to add publications to your ORCID, a variety of export formats.
Abstract: This article describes recent developments of Europe PMC (http://europepmc.org), the leading database for life science literature. Formerly known as UKPMC, the service was rebranded in November 2012 as Europe PMC to reflect the scope of the funding agencies that support it. Several new developments have enriched Europe PMC considerably since then. Europe PMC now offers RESTful web services to access both articles and grants, powerful search tools such as citation-count sort order and data citation features, a service to add publications to your ORCID, a variety of export formats, and an External Links service that enables any related resource to be linked from Europe PMC content.
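The RESTful search service can be exercised with a single HTTP call; the sketch below queries the public search endpoint and prints a few hits. The endpoint path, parameter names, and JSON layout are based on the public Europe PMC REST documentation and should be verified against the current API before relying on them.

```python
import requests

# Europe PMC RESTful search (endpoint and parameters per the public docs;
# verify against the current API before relying on them).
BASE = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"
params = {"query": "web service", "format": "json", "pageSize": 5}

resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()

for hit in resp.json().get("resultList", {}).get("result", []):
    print(hit.get("id"), "-", hit.get("title"))
```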