
Showing papers on "Web service" published in 2010


Proceedings ArticleDOI
26 Apr 2010
TL;DR: This work model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks.
Abstract: Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5% click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce.

2,467 citations
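The contextual bandit algorithm this paper proposes is LinUCB, which keeps a per-article ridge-regression model and adds an exploration bonus to the predicted click-through rate. Below is a minimal Python sketch of the disjoint variant; the feature dimension and demo data are invented.

```python
import numpy as np

# Minimal disjoint LinUCB sketch: one ridge-regression model per arm
# (article); pick the arm whose upper confidence bound on expected
# reward (a click) is highest.

class LinUCB:
    def __init__(self, n_arms: int, n_features: int, alpha: float = 1.0):
        self.alpha = alpha                                      # exploration width
        self.A = [np.eye(n_features) for _ in range(n_arms)]    # X^T X + I per arm
        self.b = [np.zeros(n_features) for _ in range(n_arms)]  # X^T y per arm

    def select(self, x: np.ndarray) -> int:
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                                   # ridge estimate
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=3, n_features=4)
x = rng.normal(size=4)              # context: user/article features
arm = bandit.select(x)
bandit.update(arm, x, reward=1.0)   # reward 1 = click, 0 = no click
```

The paper's offline evaluator replays traffic logged under a uniformly random policy and scores the learner only on events where its choice matches the logged article, which is what makes the evaluation unbiased.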


Journal ArticleDOI
TL;DR: A new framework aimed at both novice and expert users is presented that exposes novel methods of obtaining annotations and visualizing sequence analysis results through one uniform and consistent interface.
Abstract: The EMBL-EBI provides access to various mainstream sequence analysis applications. These include sequence similarity search services such as BLAST, FASTA, InterProScan and multiple sequence alignment tools such as ClustalW, T-Coffee and MUSCLE. Through the sequence similarity search services, users can search mainstream sequence databases such as EMBL-Bank and UniProt, and more than 2000 completed genomes and proteomes. We present here a new framework aimed at both novice and expert users that exposes novel methods of obtaining annotations and visualizing sequence analysis results through one uniform and consistent interface. These services are available over the web and via Web Services interfaces for users who require systematic access or want to interface with customized pipelines and workflows using common programming languages. The framework features novel result visualizations and integration of domain and functional predictions for protein database searches. It is available at http://www.ebi.ac.uk/Tools/sss for sequence similarity searches and at http://www.ebi.ac.uk/Tools/msa for multiple sequence alignments.

1,768 citations
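For the kind of programmatic access the abstract describes, EBI's job-dispatcher services can be driven over HTTP. The sketch below submits a protein BLAST job through the REST interface; the endpoint, parameter names and status strings follow EBI's current public documentation as best I recall and should be verified there (the 2010 services also offered SOAP bindings).

```python
import time
import requests

# Base URL and parameter names assumed from EBI's REST docs; verify before use.
BASE = "https://www.ebi.ac.uk/Tools/services/rest/ncbiblast"

def blast_uniprot(sequence: str, email: str) -> str:
    # Submit the job; the service returns a job identifier as plain text.
    job_id = requests.post(f"{BASE}/run", data={
        "email": email,          # required by EBI's usage policy
        "program": "blastp",
        "stype": "protein",
        "database": "uniprotkb",
        "sequence": sequence,
    }).text
    # Poll until the job finishes (status strings assumed: QUEUED/RUNNING/FINISHED).
    while requests.get(f"{BASE}/status/{job_id}").text in ("QUEUED", "RUNNING"):
        time.sleep(5)
    # Fetch the plain-text result report.
    return requests.get(f"{BASE}/result/{job_id}/out").text
```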


Journal ArticleDOI
TL;DR: Galaxy is a software system that provides informatics support through a framework that gives experimentalists simple interfaces to powerful tools, while automatically managing the computational details.
Abstract: High-throughput data production has revolutionized molecular biology. However, massive increases in data generation capacity require analysis approaches that are more sophisticated, and often very computationally intensive. Thus, making sense of high-throughput data requires informatics support. Galaxy (http://galaxyproject.org) is a software system that provides this support through a framework that gives experimentalists simple interfaces to powerful tools, while automatically managing the computational details. Galaxy is distributed both as a publicly available Web service, which provides tools for the analysis of genomic, comparative genomic, and functional genomic data, and as a downloadable package that can be deployed in individual laboratories. Either way, it allows experimentalists without informatics or programming expertise to perform complex large-scale analysis with just a Web browser.

1,501 citations


Journal ArticleDOI
TL;DR: Cytoscape Web is a web-based network visualization tool, modeled after Cytoscape, which is open source, interactive, customizable and easily integrated into web sites.
Abstract: Summary: Cytoscape Web is a web-based network visualization tool, modeled after Cytoscape, which is open source, interactive, customizable and easily integrated into web sites. Multiple file exchange formats can be used to load data into Cytoscape Web, including GraphML, XGMML and SIF. Availability and Implementation: Cytoscape Web is implemented in Flex/ActionScript with a JavaScript API and is freely available at http://cytoscapeweb.cytoscape.org/. Contact: gary.bader@utoronto.ca. Supplementary information: Supplementary data are available at Bioinformatics online.

687 citations


Journal ArticleDOI
TL;DR: A process and a suitable system architecture are proposed that enable developers and business process designers to dynamically query, select, and use running instances of real-world services (i.e., services running on physical devices) or even deploy new ones on-demand, all in the context of composite, real-world business applications.
Abstract: The increasing usage of smart embedded devices in business blurs the line between the virtual and real worlds. This creates new opportunities to build applications that better integrate real-time state of the physical world, and hence, provides enterprise services that are highly dynamic, more diverse, and efficient. Service-Oriented Architecture (SOA) approaches traditionally used to couple functionality of heavyweight corporate IT systems are becoming applicable to embedded real-world devices, i.e., objects of the physical world that feature embedded processing and communication. In such infrastructures, composed of large numbers of networked, resource-limited devices, the discovery of services and on-demand provisioning of missing functionality is a significant challenge. We propose a process and a suitable system architecture that enable developers and business process designers to dynamically query, select, and use running instances of real-world services (i.e., services running on physical devices) or even deploy new ones on-demand, all in the context of composite, real-world business applications.

637 citations


Book
01 Jan 2010
TL;DR: This book presents the principles of Service-Oriented Computing, covering the basics of web services and enterprise architectures, service description, engagement, collaboration, service selection, and the engineering of service-oriented applications.
Abstract: Contents: About the Authors. Preface. Note to the Reader. Acknowledgments. Figures. Tables. Listings. I Basics: 1. Computing with Services; 2. Basic Standards for Web Services; 3. Programming Web Services; 4. Enterprise Architectures; 5. Principles of Service-Oriented Computing. II Description: 6. Modeling and Representation; 7. Resource Description Framework; 8. Web Ontology Language; 9. Ontology Management. III Engagement: 10. Execution Models; 11. Transaction Concepts; 12. Coordination Frameworks for Web Services; 13. Process Specifications; 14. Formal Specification and Enactment. IV Collaboration: 15. Agents; 16. Multiagent Systems; 17. Organizations; 18. Communication. V Solutions: 19. Semantic Service Solutions; 20. Social Service Selection; 21. Economic Service Selection. VI Engineering: 22. Building SOC Applications; 23. Service Management; 24. Security. VII Directions: 25. Challenges and Extensions. VIII Appendices: A. XML and XML Schema; B. URI, URN, URL and UUID; C. XML Namespace Abbreviations. Glossary. Bibliography. Index.

630 citations


Proceedings ArticleDOI
20 Apr 2010
TL;DR: This paper investigates three possible distributed solutions proposed for load balancing: approaches inspired by Honeybee Foraging Behaviour, Biased Random Sampling and Active Clustering.
Abstract: The anticipated uptake of Cloud computing, built on well-established research in Web Services, networks, utility computing, distributed computing and virtualisation, will bring many advantages in cost, flexibility and availability for service users. These benefits are expected to further drive the demand for Cloud services, increasing both the Cloud's customer base and the scale of Cloud installations. This has implications for many technical issues in Service Oriented Architectures and Internet of Services (IoS)-type applications, including fault tolerance, high availability and scalability. Central to these issues is the establishment of effective load balancing techniques. It is clear that the scale and complexity of these systems make centralized assignment of jobs to specific servers infeasible, requiring an effective distributed solution. This paper investigates three possible distributed solutions proposed for load balancing: approaches inspired by Honeybee Foraging Behaviour, Biased Random Sampling and Active Clustering.

510 citations
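As a flavour of the first of these techniques, here is a toy Python sketch of honeybee-inspired dispatching: servers advertise a profitability score (inversely related to load), most dispatch decisions follow the adverts, and a small scouting probability preserves exploration. This is a loose illustration of the idea, not the authors' algorithm; all names and parameters are invented.

```python
import random

# Toy honeybee-foraging-inspired dispatch: servers advertise profitability
# (inverse of load), foragers mostly follow the adverts, and occasional
# scouts explore randomly, keeping load spread across the cluster.

class Server:
    def __init__(self, name: str):
        self.name, self.load = name, 0

    def profitability(self) -> float:
        return 1.0 / (1 + self.load)

def dispatch(servers, scout_prob=0.1):
    """Choose a server for one job and update its load."""
    if random.random() < scout_prob:
        chosen = random.choice(servers)                  # scout: random exploration
    else:
        weights = [s.profitability() for s in servers]   # forager: follow adverts
        chosen = random.choices(servers, weights=weights)[0]
    chosen.load += 1
    return chosen

servers = [Server("s1"), Server("s2"), Server("s3")]
for _ in range(30):
    dispatch(servers)
print({s.name: s.load for s in servers})  # loads stay roughly balanced
```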


Proceedings ArticleDOI
30 Dec 2010
TL;DR: This paper describes the Web of Things architecture and best practices based on the RESTful principles that have already contributed to the popular success, scalability, and modularity of the traditional Web, and discusses several prototypes designed in accordance with these principles.
Abstract: Many efforts are centered around creating large-scale networks of “smart things” found in the physical world (e.g., wireless sensor and actuator networks, embedded devices, tagged objects). Rather than exposing real-world data and functionality through proprietary and tightly-coupled systems, we propose to make them an integral part of the Web. As a result, smart things become easier to build upon. Popular Web languages (e.g., HTML, Python, JavaScript, PHP) can be used to easily build applications involving smart things, and users can leverage well-known Web mechanisms (e.g., browsing, searching, bookmarking, caching, linking) to interact with and share these devices. In this paper, we begin by describing the Web of Things architecture and best practices based on the RESTful principles that have already contributed to the popular success, scalability, and modularity of the traditional Web. We then discuss several prototypes designed in accordance with these principles to connect environmental sensor nodes and an energy monitoring system to the World Wide Web. We finally show how Web-enabled smart things can be used in lightweight ad-hoc applications called “physical mashups”.

492 citations
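The core pattern is simply to give each device and sensor a URL and a standard representation. A minimal sketch in Python with Flask (my choice of framework; the paper's prototypes are not tied to it), with invented sensor names and readings:

```python
from flask import Flask, jsonify

# A smart thing exposed as web resources: every sensor gets its own URL and
# a JSON representation, so browsers, scripts and "physical mashups" can use
# it like any other web resource. Requires Flask 2.0+ for @app.get.

app = Flask(__name__)
SENSORS = {"temperature": 21.5, "humidity": 0.47}

@app.get("/sensors")
def list_sensors():
    return jsonify(sorted(SENSORS))        # discoverable index of resources

@app.get("/sensors/<name>")
def read_sensor(name):
    if name not in SENSORS:
        return jsonify(error="no such sensor"), 404
    return jsonify(name=name, value=SENSORS[name])

if __name__ == "__main__":
    app.run(port=8080)
```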


Proceedings ArticleDOI
26 Sep 2010
TL;DR: A range of different profiling and recommendation strategies are evaluated, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.
Abstract: Recently the world of the web has become more social and more real-time. Facebook and Twitter are perhaps the exemplars of a new generation of social, real-time web services and we believe these types of service provide a fertile ground for recommender systems research. In this paper we focus on one of the key features of the social web, namely the creation of relationships between users. Like recent research, we view this as an important recommendation problem -- for a given user, U_T, which other users might be recommended as followers/followees -- but unlike other researchers we attempt to harness the real-time web as the basis for profiling and recommendation. To this end we evaluate a range of different profiling and recommendation strategies, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.

486 citations
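One of the simplest profiling strategies of the kind the paper evaluates is to represent each user by the text of their own tweets and rank candidate followees by profile similarity. A minimal sketch with TF-IDF and cosine similarity; the users and tweet text are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Content-based followee recommendation: profile each user by the
# concatenation of their tweets, then rank candidates by similarity
# to the target user's profile.

profiles = {
    "alice": "recommender systems collaborative filtering evaluation",
    "bob":   "football transfer news premier league",
    "carol": "machine learning recommender systems research",
}

def recommend_followees(target, profiles, k=2):
    users = list(profiles)
    tfidf = TfidfVectorizer().fit_transform(profiles[u] for u in users)
    sims = cosine_similarity(tfidf[users.index(target)], tfidf).ravel()
    ranked = sorted(zip(users, sims), key=lambda p: -p[1])
    return [u for u, _ in ranked if u != target][:k]

print(recommend_followees("alice", profiles))  # carol ranks first: shared terms
```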


Proceedings ArticleDOI
26 Apr 2010
TL;DR: This paper proposes an approach based on the notion of skyline to effectively and efficiently select services for composition, reducing the number of candidate services to be considered, and discusses how a provider can improve its service to become more competitive and increase its potential of being included in composite applications.
Abstract: Web service composition enables seamless and dynamic integration of business applications on the web. The performance of the composed application is determined by the performance of the involved web services. Therefore, non-functional, quality of service aspects are crucial for selecting the web services to take part in the composition. Identifying the best candidate web services from a set of functionally-equivalent services is a multi-criteria decision making problem. The selected services should optimize the overall QoS of the composed application, while satisfying all the constraints specified by the client on individual QoS parameters. In this paper, we propose an approach based on the notion of skyline to effectively and efficiently select services for composition, reducing the number of candidate services to be considered. We also discuss how a provider can improve its service to become more competitive and increase its potential of being included in composite applications. We evaluate our approach experimentally using both real and synthetically generated datasets.

479 citations
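The skyline here is the standard Pareto-optimal subset: a service survives if no other candidate is at least as good in every QoS dimension and strictly better in one. A minimal sketch, with invented QoS values and all attributes oriented so that lower is better:

```python
# Skyline of candidate services: keep only services not dominated across
# all QoS dimensions. Lower is better for every attribute here (invert
# attributes such as throughput where higher is better).

def dominates(a, b):
    """a dominates b if it is no worse in all dimensions and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    return [s for s in services
            if not any(dominates(t["qos"], s["qos"]) for t in services if t is not s)]

candidates = [
    {"name": "s1", "qos": (120, 0.05, 0.02)},  # (latency ms, price, failure rate)
    {"name": "s2", "qos": (300, 0.01, 0.02)},
    {"name": "s3", "qos": (350, 0.06, 0.03)},  # dominated by s1 in all dimensions
]
print([s["name"] for s in skyline(candidates)])  # ['s1', 's2']
```

Pruning dominated candidates first shrinks the search space of the downstream composition problem without discarding any service that could appear in an optimal composition.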


Proceedings Article
23 Aug 2010
TL;DR: LTP (Language Technology Platform) is an integrated Chinese processing platform which includes a suite of high performance natural language processing modules and relevant corpora that achieved good results in some relevant evaluations, such as CoNLL and SemEval.
Abstract: LTP (Language Technology Platform) is an integrated Chinese processing platform which includes a suite of high performance natural language processing (NLP) modules and relevant corpora. Especially for the syntactic and semantic parsing modules, we achieved good results in some relevant evaluations, such as CoNLL and SemEval. Based on XML internal data representation, users can easily use these modules and corpora by invoking DLL (Dynamic Link Library) or Web service APIs (Application Program Interface), and view the processing results directly by the visualization tool.

Proceedings ArticleDOI
16 May 2010
TL;DR: It is found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search, suggesting the scope of the problem seems industry-wide.
Abstract: With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees' web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.
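The attack model is easy to reproduce in miniature: encryption hides payload content but not its size, so when a low-entropy input (say, the first letters of an illness typed into a search box) produces responses with distinctive sizes, an eavesdropper who has profiled the candidate inputs can invert the mapping. All sizes below are invented for illustration:

```python
# Toy reconstruction of the side channel: if typing a candidate word into a
# web form yields a characteristic sequence of (encrypted) response sizes,
# one per keystroke, an eavesdropper who has profiled the candidates offline
# can recover the input without breaking the encryption.

observed = [612, 1184, 744]      # sniffed ciphertext sizes, one per keystroke

fingerprints = {                 # built offline by typing each candidate word
    "asthma": [612, 1184, 744],
    "anemia": [612, 1184, 802],
    "angina": [598, 1240, 744],
}

leaked = [word for word, sizes in fingerprints.items() if sizes == observed]
print(leaked)  # ['asthma']: low-entropy input plus distinct sizes gives a leak
```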

Journal ArticleDOI
01 Sep 2010
TL;DR: This paper proposes new machine learning techniques to annotate table cells with entities that they likely mention, table columns with types from which entities are drawn for cells in the column, and relations that pairs of table columns seek to express, and a new graphical model for making all these labeling decisions for each table simultaneously.
Abstract: Tables are a universal idiom to present relational data. Billions of tables on Web pages express entity references, attributes and relationships. This representation of relational world knowledge is usually considerably better than completely unstructured, free-format text. At the same time, unlike manually-created knowledge bases, relational information mined from "organic" Web tables need not be constrained by availability of precious editorial time. Unfortunately, in the absence of any formal, uniform schema imposed on Web tables, Web search cannot take advantage of these high-quality sources of relational information. In this paper we propose new machine learning techniques to annotate table cells with entities that they likely mention, table columns with types from which entities are drawn for cells in the column, and relations that pairs of table columns seek to express. We propose a new graphical model for making all these labeling decisions for each table simultaneously, rather than make separate local decisions for entities, types and relations. Experiments using the YAGO catalog, DBpedia, tables from Wikipedia, and over 25 million HTML tables from a 500 million page Web crawl uniformly show the superiority of our approach. We also evaluate the impact of better annotations on a prototype relational Web search tool. We demonstrate clear benefits of our annotations beyond indexing tables in a purely textual manner.
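The paper's contribution is a joint graphical model over cell, column and relation labels; as a much-simplified stand-in, here is the kind of local decision it improves on, labeling a column with the majority type of catalog entities matched in its cells. The toy catalog is invented:

```python
from collections import Counter

# Local (non-joint) column-type annotation: link each cell to catalog
# entities by exact string match, then label the column with the majority
# type among linked entities. The real system labels cells, column types
# and column-pair relations jointly.

catalog = {
    "Paris": {"City"}, "Berlin": {"City"}, "Madrid": {"City"},
    "France": {"Country"}, "Germany": {"Country"},
}

def annotate_column(cells):
    votes = Counter(t for c in cells for t in catalog.get(c, ()))
    return votes.most_common(1)[0][0] if votes else None

print(annotate_column(["Paris", "Berlin", "Madrid", "France"]))  # City
```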

Journal ArticleDOI
TL;DR: One of the major findings of this study is that although lower perceived risk may lead to a favorable perception of web service quality, it does not necessarily translate into customer satisfaction or positive behavioral intentions.


Proceedings ArticleDOI
05 Jul 2010
TL;DR: This work conducts several large-scale evaluations of the quality-of-service (QoS) performance of real-world Web services and provides reusable research datasets for promoting the research of QoS-driven Web services.
Abstract: Quality-of-Service (QoS) is widely employed for describing non-functional characteristics of Web services. Although QoS of Web services has been investigated in a lot of previous works, there is a lack of real-world Web service QoS datasets for validating new QoS based techniques and models of Web services. To study the performance of real-world Web services as well as provide reusable research datasets for promoting the research of QoS-driven Web services, we conduct several large-scale evaluations on real-world Web services. Firstly, addresses of 21,358 Web services are obtained from the Internet. Then, invocation failure probability performance of 150 Web services is assessed by 100 distributed service users. After that, response time and throughput performance of 5,825 Web services are evaluated by 339 distributed service users. Detailed experimental results are presented in this paper and comprehensive Web service QoS datasets are publicly released for future research.
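The client-side measurements behind such datasets are straightforward to reproduce in outline: invoke an endpoint repeatedly and record response times and failures. A sketch over plain HTTP; the URL is a placeholder, and note that the released datasets were collected by invoking SOAP services from 339 distributed users and define throughput as data transferred per second rather than requests per second:

```python
import time
import requests

def probe(url: str, n: int = 10, timeout: float = 5.0) -> dict:
    """Invoke `url` n times, recording response time and failure probability."""
    times, failures = [], 0
    for _ in range(n):
        start = time.perf_counter()
        try:
            requests.get(url, timeout=timeout).raise_for_status()
            times.append(time.perf_counter() - start)
        except requests.RequestException:
            failures += 1
    return {
        "mean_response_time_s": sum(times) / len(times) if times else None,
        "failure_probability": failures / n,
    }

# Example (placeholder URL):
# print(probe("http://example.com/service"))
```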

Journal ArticleDOI
TL;DR: This paper addresses the issue of selecting and composing Web services not only according to their functional requirements but also to their transactional properties and QoS characteristics, proposing a selection algorithm that satisfies the user's preferences, expressed as weights over QoS criteria and as risk levels that semantically define the transactional requirements.
Abstract: Web Services are the most famous implementation of service-oriented architectures, one that has brought some challenging research issues. One of these is composition, i.e., the capability to recursively construct a composite Web service as a workflow of other existing Web services, which are developed by different organizations and offer diverse functionalities (e.g., ticket purchase, payment), transactional properties (e.g., compensatable or not), and Quality of Service (QoS) values (e.g., execution price, success rate). Selecting, for each activity of the workflow, a Web service that meets the user's requirements is still an important challenge. Indeed, the selection of one Web service among a set of them that fulfill some functionalities is a critical task, generally depending on a combined evaluation of QoS. However, conventional QoS-aware composition approaches do not consider transactional constraints during the composition process. This paper addresses the issue of selecting and composing Web services not only according to their functional requirements but also to their transactional properties and QoS characteristics. We propose a selection algorithm that satisfies the user's preferences, expressed as weights over QoS criteria and as risk levels that semantically define the transactional requirements. Proofs and experimental results are presented.
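A skeletal version of such a selection step: filter candidates by whether their transactional property is acceptable at the user's risk level, then maximize a weighted QoS score. The transactional vocabulary (compensatable, pivot, retriable) follows the standard transactional-composition literature, but the risk-level mapping, weights and service data here are invented:

```python
# Transaction-aware, QoS-aware selection sketch: discard candidates whose
# transactional property violates the chosen risk level, then score the
# remainder by a weighted combination of QoS values.

RISK_ALLOWED = {
    0: {"compensatable"},                        # risk 0: execution must be undoable
    1: {"compensatable", "pivot", "retriable"},  # risk 1: completion may be uncertain
}

def select(candidates, weights, risk_level):
    eligible = [c for c in candidates
                if c["transactional"] in RISK_ALLOWED[risk_level]]
    def score(c):
        # higher success rate is better; lower price is better
        return weights["success"] * c["success"] - weights["price"] * c["price"]
    return max(eligible, key=score, default=None)

services = [
    {"name": "pay_a", "transactional": "pivot",         "success": 0.99, "price": 0.8},
    {"name": "pay_b", "transactional": "compensatable", "success": 0.97, "price": 0.5},
]
print(select(services, {"success": 1.0, "price": 0.5}, risk_level=0)["name"])  # pay_b
```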

Journal ArticleDOI
TL;DR: myExperiment is an online research environment that supports the social sharing of bioinformatics workflows, i.e., procedures consisting of a series of computational tasks using web services, spanning data retrieval, integration and analysis through to the visualization of results.
Abstract: myExperiment (http://www.myexperiment.org) is an online research environment that supports the social sharing of bioinformatics workflows. These workflows are procedures consisting of a series of computational tasks using web services, performed on data from its retrieval, integration and analysis through to the visualisation of the results. As a public repository of workflows, myExperiment allows anybody to discover those that are relevant to their research, which can then be reused and repurposed to their specific requirements. Conversely, developers can submit their workflows to myExperiment and enable them to be shared in a secure manner. Since its release in 2007, myExperiment has grown to over 3500 registered users and contains more than 900 workflows. The social aspect of the sharing of these workflows is facilitated by registered users forming virtual communities bound together by a common interest or research project. Contributors of workflows can build their reputation within these communities by receiving feedback and credit from individuals who reuse their work. Further documentation about myExperiment, including its REST web service, is available from http://wiki.myexperiment.org. Feedback and requests for support can be sent to bugs@myexperiment.org.

Journal ArticleDOI
TL;DR: A novel vision-based approach that is Web-page-programming-language-independent is proposed that primarily utilizes the visual features on the deep Web pages to implement deep Web data extraction, including data record extraction and data item extraction.
Abstract: Deep Web contents are accessed by queries submitted to Web databases and the returned data records are enwrapped in dynamically generated Web pages (they will be called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem due to the underlying intricate structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent limitations because they are Web-page-programming-language-dependent. As the popular two-dimensional media, the contents on Web pages are always displayed regularly for users to browse. This motivates us to seek a different way for deep Web data extraction to overcome the limitations of previous works by utilizing some interesting common visual features on the deep Web pages. In this paper, a novel vision-based approach that is Web-page-programming-language-independent is proposed. This approach primarily utilizes the visual features on the deep Web pages to implement deep Web data extraction, including data record extraction and data item extraction. We also propose a new evaluation measure revision to capture the amount of human effort needed to produce perfect extraction. Our experiments on a large set of Web databases show that the proposed vision-based approach is highly effective for deep Web data extraction.

Journal ArticleDOI
TL;DR: This article proposes the application of the Resources Via Web Services (RVWS) framework to offer a higher-level abstraction of clouds, in the form of a new technology that enables service publication, discovery and selection based on dynamic attributes expressing the current state and characteristics of cloud services and resources.

Proceedings ArticleDOI
01 May 2010
TL;DR: A collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations, is proposed.
Abstract: Service-oriented architecture (SOA) is becoming a major software framework for building complex distributed systems. Reliability of the service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet. Designing effective and accurate reliability prediction approaches for the service-oriented systems has become an important research issue. In this paper, we propose a collaborative reliability prediction approach, which employs the past failure data of other similar users to predict the Web service reliability for the current user, without requiring real-world Web service invocations. We also present a user-collaborative failure data sharing mechanism and a reliability composition model for the service-oriented systems. Large-scale real-world experiments are conducted and the experimental results show that our collaborative reliability prediction approach obtains better reliability prediction accuracy than other approaches.
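In outline, the approach is memory-based collaborative filtering over failure observations: weight other users by how similarly services behaved for them, then take a weighted average of their observed failure probabilities for the service that is unknown to the current user. A toy sketch with invented data; the paper's actual similarity measure and reliability composition model are more elaborate:

```python
import math

# Collaborative reliability prediction sketch: predict the current user's
# failure probability for a service from users with similar observed
# failure behaviour, without issuing new invocations.

# Failure probabilities observed per user for services s1..s3 (None = unknown).
obs = {
    "u1": {"s1": 0.01, "s2": 0.20, "s3": 0.02},
    "u2": {"s1": 0.02, "s2": 0.18, "s3": 0.03},
    "u3": {"s1": 0.30, "s2": 0.01, "s3": 0.25},
    "me": {"s1": 0.02, "s2": 0.19, "s3": None},
}

def similarity(a, b):
    common = [s for s in a if a[s] is not None and b.get(s) is not None]
    if not common:
        return 0.0
    dist = math.sqrt(sum((a[s] - b[s]) ** 2 for s in common))
    return 1.0 / (1.0 + dist)

def predict(user, service):
    num = den = 0.0
    for other, vals in obs.items():
        if other == user or vals.get(service) is None:
            continue
        w = similarity(obs[user], vals)
        num += w * vals[service]
        den += w
    return num / den if den else None

print(round(predict("me", "s3"), 3))  # ~0.087, weighted toward similar users u1, u2
```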

Patent
29 Nov 2010
TL;DR: In this paper, mobile devices enabled to support resolution-independent scalable display of Internet (Web) content to allow Web pages to be scaled (zoomed) and panned for better viewing on smaller screen sizes.
Abstract: Mobile devices enabled to support resolution-independent scalable display of Internet (Web) content to allow Web pages to be scaled (zoomed) and panned for better viewing on smaller screen sizes. The mobile devices employ software-based processing of original Web content, including HTML-based content, XML, cascade style sheets, etc., to generate scalable content. The scalable content and/or data derived therefrom are then employed to enable the Web content to be rapidly rendered, zoomed, and panned. Display lists may also be employed to provide further enhancements in rendering speed. Context zooms, including tap-based zooms on columns, images, and paragraphs, are also enabled.

Proceedings ArticleDOI
05 Jul 2010
TL;DR: Experimental results demonstrate that apart from being highly scalable, RegionKNN provides a considerable improvement in recommendation accuracy compared with other well-known collaborative filtering algorithms.
Abstract: Several approaches to web service selection and recommendation via collaborative filtering have been studied, but seldom have these studies considered the difference between web service recommendation and the product recommendation used in e-commerce sites. In this paper, we present RegionKNN, a novel hybrid collaborative filtering algorithm that is designed for large-scale web service recommendation. Different from other approaches, this method employs the characteristics of QoS by building an efficient region model. Based on this model, web service recommendations will be generated quickly by using a modified memory-based collaborative filtering algorithm. Experimental results demonstrate that apart from being highly scalable, RegionKNN provides a considerable improvement in recommendation accuracy compared with other well-known collaborative filtering algorithms.

Journal ArticleDOI
TL;DR: An overview of the web architecture, its core REST concepts, and the current state of the art in web services is given and a fresh approach to a web application transfer protocol and efficient payload encoding are introduced.
Abstract: The Internet of Things is the next big possibility and challenge for the Internet. Extending the web architecture to this new domain of constrained wireless networks and devices will be key to achieving the flexibility and scalability needed to make it a success. Web services have proven to be indispensable in creating interoperable communications between machines on today's Internet, but at the same time the overhead and complexity of web service technology such as SOAP, XML, and HTTP are too high for use in the constrained environments often found in machine-to-machine applications (e.g., energy monitoring, building automation, and asset management). This article first gives an overview of the web architecture, its core REST concepts, and the current state of the art in web services. Two key activities required in order to achieve efficient embedded web services are introduced: a fresh approach to a web application transfer protocol and efficient payload encoding. The article analyzes the most promising payload encoding techniques and introduces the new IETF Constrained RESTful Environments (CoRE) standardization activity.
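The CoRE activity mentioned here eventually standardized CoAP (RFC 7252, which postdates this article), and it makes the overhead argument concrete: a confirmable GET for a one-segment path fits in nine bytes, versus hundreds of bytes for an equivalent HTTP/1.1 request. A sketch of the encoding per RFC 7252, valid for single path segments of at most 12 bytes:

```python
import struct

# Encode a confirmable CoAP GET for coap://host/<path> per RFC 7252.
# Valid only for a single Uri-Path segment of <= 12 bytes; longer values
# need the extended option-length encoding.

def coap_get(path: str, message_id: int) -> bytes:
    ver, typ, tkl = 1, 0, 0                    # version 1, confirmable, no token
    code = 0x01                                # 0.01 = GET
    header = struct.pack("!BBH", (ver << 6) | (typ << 4) | tkl, code, message_id)
    option = bytes([(11 << 4) | len(path)]) + path.encode()  # Uri-Path = option 11
    return header + option

msg = coap_get("temp", 0x1234)
print(len(msg), msg.hex())  # 9 40011234b474656d70
```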

Proceedings ArticleDOI
05 Jul 2010
TL;DR: This paper proposes a novel technique to mine Web Service Description Language (WSDL) documents and cluster them into functionally similar Web service groups, as a predecessor step to retrieving the relevant Web services for a user request by search engines.
Abstract: The increasing use of the Web for everyday tasks is making Web services an essential part of the Internet customer's daily life. Users query the Internet for a required Web service and get back a set of Web services that may or may not satisfy their request. To get the most relevant Web services that fulfill the user's request, the user has to construct the request using the keywords that best describe the user's objective and match correctly with the Web service name or location. Clustering Web services based on functional similarities would greatly boost the ability of Web service search engines to retrieve the most relevant Web services. This paper proposes a novel technique to mine Web Service Description Language (WSDL) documents and cluster them into functionally similar Web service groups. The application of our approach to real Web service description files has shown good performance for clustering Web services based on functional similarity, as a predecessor step to retrieving the relevant Web services for a user request by search engines.
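In its simplest form the pipeline reduces to vectorizing terms mined from each WSDL document and clustering the vectors. A minimal sketch with TF-IDF and k-means; the term strings stand in for features actually extracted from WSDL (operation names, messages, documentation), and the paper's own feature set and clustering method are richer:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Cluster services by functional similarity: vectorize terms mined from
# each WSDL document, then group the vectors. The strings below are
# invented stand-ins for terms extracted from real WSDL files.

wsdl_terms = [
    "get weather forecast temperature city",
    "get city temperature humidity forecast",
    "book flight ticket airline reservation",
    "reserve airline seat flight booking",
]

vectors = TfidfVectorizer().fit_transform(wsdl_terms)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # expected: weather services in one cluster, flight in the other
```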

Proceedings ArticleDOI
17 Jul 2010
TL;DR: A formal model of web security based on an abstraction of the web platform is proposed and this model is used to analyze the security of several sample web mechanisms and applications and identifies three distinct threat models.
Abstract: We propose a formal model of web security based on an abstraction of the web platform and use this model to analyze the security of several sample web mechanisms and applications. We identify three distinct threat models that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network and/or leverage sites designed to display user-supplied content. We propose two broadly applicable security goals and study five security mechanisms. In our case studies, which include HTML5 forms, Referer validation, and a single sign-on solution, we use a SAT-based model-checking tool to find two previously known vulnerabilities and three new vulnerabilities. Our case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead.

Journal ArticleDOI
TL;DR: The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences, but their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult.
Abstract: The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable 'Web 2.0'-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community.

Proceedings ArticleDOI
06 Dec 2010
TL;DR: The efficacy of Cujo is demonstrated, where it detects 94% of the drive-by downloads with few false alarms and a median run-time of 500 ms per web page---a quality that has not been attained in previous work on detection of drive- by-download attacks.
Abstract: The JavaScript language is a core component of active and dynamic web content in the Internet today. Besides its great success in enhancing web applications, however, JavaScript provides the basis for so-called drive-by downloads---attacks exploiting vulnerabilities in web browsers and their extensions for unnoticeably downloading malicious software. Due to the diversity and frequent use of obfuscation in these attacks, static code analysis is largely ineffective in practice. While dynamic analysis and honeypots provide means to identify drive-by-download attacks, current approaches induce a significant overhead which renders immediate prevention of attacks intractable. In this paper, we present Cujo, a system for automatic detection and prevention of drive-by-download attacks. Embedded in a web proxy, Cujo transparently inspects web pages and blocks delivery of malicious JavaScript code. Static and dynamic code features are extracted on-the-fly and analysed for malicious patterns using efficient techniques of machine learning. We demonstrate the efficacy of Cujo in different experiments, where it detects 94% of the drive-by downloads with few false alarms and a median run-time of 500 ms per web page---a quality that, to the best of our knowledge, has not been attained in previous work on detection of drive-by-download attacks.
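Cujo's learning step maps each lexed JavaScript analysis report to token q-grams and trains a linear SVM over them. A heavily simplified sketch; the token streams are invented miniatures of the static/dynamic reports the system actually produces:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Simplified Cujo-style classification: represent lexed JavaScript reports
# as binary token 3-grams (analogous to the paper's q-grams) and train a
# linear SVM to separate benign from malicious patterns.

reports = [
    "VAR = STR ; DOC . WRITE ( VAR )",
    "VAR = UNESCAPE ( STR ) ; EVAL ( VAR )",    # obfuscated eval chain
    "FOR ( VAR ) DOC . GETELEM ( STR )",
    "VAR = STR + STR ; EVAL ( VAR + VAR )",
]
labels = [0, 1, 0, 1]  # 1 = malicious

vec = CountVectorizer(ngram_range=(3, 3), token_pattern=r"\S+", binary=True)
clf = LinearSVC().fit(vec.fit_transform(reports), labels)

test = ["VAR = UNESCAPE ( STR ) ; EVAL ( VAR + VAR )"]
print(clf.predict(vec.transform(test)))  # expected [1]
```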

Journal ArticleDOI
TL;DR: The overall architecture and the features of the SC Collaborator system, a service-oriented, web-based system that facilitates the flexible coordination of construction supply chains by leveraging web services, web portal, and open source technologies, are described.

Patent
08 Dec 2010
TL;DR: In this paper, the authors propose a framework that allows a number of software application agents to be stacked on top of an instant messenger application, each of the agents establishes a connection with a third-party Web service on the Internet or a local application in the user's computer.
Abstract: The invention provides a framework that allows a number of software application agents to be stacked on top of an instant messenger application. Each of the software application agents establishes a connection with a third-party Web service on the Internet or a local application in the user's computer. The user can share one or more third-party services or applications with other user(s) in an instant messaging session through the application agents.