
Showing papers on "Web service published in 2013"


Journal ArticleDOI
TL;DR: Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces, which allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows.
Abstract: Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
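To illustrate how such a REST interface can be scripted against, here is a minimal Python sketch that retrieves a database entry through the dbfetch service mentioned above; the endpoint path and parameter names are assumptions based on typical dbfetch usage, not details quoted from the article.

```python
# Minimal sketch: fetch a UniProtKB entry in FASTA format via EBI dbfetch.
# The endpoint path and the parameter names (db, id, format, style) are
# assumed here, not taken from the article.
import requests

DBFETCH_URL = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"  # assumed endpoint

def fetch_entry(db: str, entry_id: str, fmt: str = "fasta") -> str:
    """Retrieve a single database entry as plain text."""
    params = {"db": db, "id": entry_id, "format": fmt, "style": "raw"}
    resp = requests.get(DBFETCH_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(fetch_entry("uniprotkb", "P12345"))
```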

1,562 citations


Journal ArticleDOI
TL;DR: The PSIPRED Protein Analysis Workbench unites all of the previously available analysis methods into a single web-based framework and provides a greatly streamlined user interface with a number of new features to allow users to better explore their results.
Abstract: Here, we present the new UCL Bioinformatics Group’s PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.
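As a rough sketch of what programmatic access via XML-RPC can look like from a client, the snippet below submits a sequence to a prediction service; the server URL and method name are hypothetical placeholders rather than the actual PSIPRED API.

```python
# Illustrative only: submitting a sequence to an XML-RPC prediction service.
# The server URL and the method name are hypothetical placeholders, not the
# real PSIPRED endpoints; see http://bioinf.cs.ucl.ac.uk/ for the actual API.
import xmlrpc.client

SERVER_URL = "http://bioinf.example.org/xmlrpc"  # hypothetical endpoint

def submit_sequence(sequence: str, email: str) -> str:
    """Submit a protein sequence for prediction and return a job identifier."""
    proxy = xmlrpc.client.ServerProxy(SERVER_URL)
    return proxy.submit_job("psipred", sequence, email)  # hypothetical method

job_id = submit_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "user@example.org")
print("submitted job:", job_id)
```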

1,287 citations


Journal ArticleDOI
TL;DR: An update to the Taverna tool suite is provided, highlighting new features and developments in the workbench and the Taverna Server.
Abstract: The Taverna workflow tool suite (http://www.taverna.org.uk) is designed to combine distributed Web Services and/or local tools into complex analysis pipelines. These pipelines can be executed on local desktop machines or through larger infrastructure (such as supercomputers, Grids or cloud environments), using the Taverna Server. In bioinformatics, Taverna workflows are typically used in the areas of high-throughput omics analyses (for example, proteomics or transcriptomics), or for evidence gathering methods involving text mining or data mining. Through Taverna, scientists have access to several thousand different tools and resources that are freely available from a large range of life science institutions. Once constructed, the workflows are reusable, executable bioinformatics protocols that can be shared, reused and repurposed. A repository of public workflows is available at http://www.myexperiment.org. This article provides an update to the Taverna tool suite, highlighting new features and developments in the workbench and the Taverna Server.

724 citations


Book
01 Jan 2013
TL;DR: Offering the first theoretical and historical account of software for media authoring and its effects on the practice and the very concept of 'media,' Lev Manovich develops his own theory for this rapidly growing, always-changing field.
Abstract: Software has replaced a diverse array of physical, mechanical, and electronic technologies used before the 21st century to create, store, distribute and interact with cultural artifacts. It has become our interface to the world, to others, to our memory and our imagination - a universal language through which the world speaks, and a universal engine on which the world runs. What electricity and the combustion engine were to the early 20th century, software is to the early 21st century. Offering the first theoretical and historical account of software for media authoring and its effects on the practice and the very concept of 'media,' the author of The Language of New Media (2001) develops his own theory for this rapidly growing, always-changing field. What were the thinking and motivations of the people who in the 1960s and 1970s created the concepts and practical techniques that underlie contemporary media software such as Photoshop, Illustrator, Maya, Final Cut and After Effects? How do their interfaces and tools shape the visual aesthetics of contemporary media and design? What happens to the idea of a 'medium' after previously media-specific tools have been simulated and extended in software? Is it still meaningful to talk about different mediums at all? Lev Manovich answers these questions and supports his theoretical arguments with detailed analysis of key media applications such as Photoshop and After Effects, popular web services such as Google Earth, and projects in motion graphics, interactive environments, graphic design and architecture. Software Takes Command is a must for all practicing designers and media artists and scholars concerned with contemporary media.

507 citations


Patent
29 Apr 2013
TL;DR: In this paper, the authors provided mechanisms and methods for publicly providing web content of a tenant using a multi-tenant on-demand database service, which can allow the web content to be published by the tenant using the multi-tenant on-demand database service for use by non-tenants of that service.
Abstract: In accordance with embodiments, there are provided mechanisms and methods for publicly providing web content of a tenant using a multi-tenant on-demand database service. These mechanisms and methods for publicly providing web content of a tenant using a multi-tenant on-demand database service can allow the web content to be published by a tenant using the multi-tenant on-demand database service for use by non-tenants of the multi-tenant on-demand database service.

496 citations


Journal ArticleDOI
TL;DR: This paper proposes a collaborative quality-of-service (QoS) prediction approach for web services by taking advantage of the past web service usage experiences of service users, and achieves higher prediction accuracy than other approaches.
Abstract: With the increasing presence and adoption of web services on the World Wide Web, the demand for efficient web service quality evaluation approaches is becoming unprecedentedly strong. To avoid the expensive and time-consuming web service invocations, this paper proposes a collaborative quality-of-service (QoS) prediction approach for web services by taking advantage of the past web service usage experiences of service users. We first apply the concept of user-collaboration for the web service QoS information sharing. Then, based on the collected QoS data, a neighborhood-integrated approach is designed for personalized web service QoS value prediction. To validate our approach, large-scale real-world experiments are conducted, which include 1,974,675 web service invocations from 339 service users on 5,825 real-world web services. The comprehensive experimental studies show that our proposed approach achieves higher prediction accuracy than other approaches. The public release of our web service QoS data set provides valuable real-world data for future research.
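The neighborhood idea behind this kind of collaborative QoS prediction can be sketched as follows. This is a generic user-based formulation with Pearson similarity and mean-centered aggregation, not the authors' exact algorithm, and the QoS matrix is made up for illustration.

```python
# Generic user-based neighborhood sketch for QoS prediction (not the paper's
# exact formulation). QoS values are e.g. response times; a missing entry
# means the user never invoked that service.
import math

qos = {
    "u1": {"s1": 0.35, "s2": 1.20, "s3": 0.80},
    "u2": {"s1": 0.40, "s2": 1.10},
    "u3": {"s1": 2.10, "s3": 2.60},
}

def mean(u):
    vals = list(qos[u].values())
    return sum(vals) / len(vals)

def pearson(u, v):
    """Similarity of two users over the services they both invoked."""
    common = set(qos[u]) & set(qos[v])
    if len(common) < 2:
        return 0.0
    mu, mv = mean(u), mean(v)
    num = sum((qos[u][s] - mu) * (qos[v][s] - mv) for s in common)
    den = math.sqrt(sum((qos[u][s] - mu) ** 2 for s in common)) * \
          math.sqrt(sum((qos[v][s] - mv) ** 2 for s in common))
    return num / den if den else 0.0

def predict(u, s, k=2):
    """Predict QoS of service s for user u from similar users who invoked s."""
    neighbours = sorted(
        ((pearson(u, v), v) for v in qos if v != u and s in qos[v]),
        reverse=True)[:k]
    num = sum(w * (qos[v][s] - mean(v)) for w, v in neighbours if w > 0)
    den = sum(w for w, v in neighbours if w > 0)
    return mean(u) + (num / den if den else 0.0)

print(round(predict("u2", "s3"), 3))  # u2 never invoked s3
```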

408 citations


Journal ArticleDOI
TL;DR: The Wikipedia Miner toolkit is introduced, an open-source software system that allows researchers and developers to integrate Wikipedia's rich semantics into their own applications, and creates databases that contain summarized versions of Wikipedia's content and structure.

382 citations


Journal ArticleDOI
TL;DR: The fundamental research challenges in this field including communication reliability and timeliness, QoS support, data management services, and autonomic behaviors are introduced and the main solutions proposed in the literature for each are discussed.

317 citations


Proceedings ArticleDOI
22 Jun 2013
TL;DR: LinkBench provides a realistic and challenging test for persistent storage of social and web service data, filling a gap in the available tools for researchers, developers and administrators.
Abstract: Database benchmarks are an important tool for database researchers and practitioners that ease the process of making informed comparisons between different database hardware, software and configurations. Large scale web services such as social networks are a major and growing database application area, but currently there are few benchmarks that accurately model web service workloads. In this paper we present a new synthetic benchmark called LinkBench. LinkBench is based on traces from production databases that store "social graph" data at Facebook, a major social network. We characterize the data and query workload in many dimensions, and use the insights gained to construct a realistic synthetic benchmark. LinkBench provides a realistic and challenging test for persistent storage of social and web service data, filling a gap in the available tools for researchers, developers and administrators.

309 citations


Journal ArticleDOI
TL;DR: This article describes updates to BioGPS made after its initial release in 2008, and summarizes recent additions of features and data, as well as the robust user activity that underlies this community intelligence application.
Abstract: Fast-evolving technologies have enabled researchers to easily generate data at genome scale, and using these technologies to compare biological states typically results in a list of candidate genes. Researchers are then faced with the daunting task of prioritizing these candidate genes for follow-up studies. There are hundreds, possibly even thousands, of web-based gene annotation resources available, but it quickly becomes impractical to manually access and review all of these sites for each gene in a candidate gene list. BioGPS (http://biogps.org) was created as a centralized gene portal for aggregating distributed gene annotation resources, emphasizing community extensibility and user customizability. BioGPS serves as a convenient tool for users to access known gene-centric resources, as well as a mechanism to discover new resources that were previously unknown to the user. This article describes updates to BioGPS made after its initial release in 2008. We summarize recent additions of features and data, as well as the robust user activity that underlies this community intelligence application. Finally, we describe MyGene.info (http://mygene.info) and related web services that provide programmatic access to BioGPS.
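A minimal example of the programmatic access mentioned above might look like the following; the MyGene.info v3 query endpoint and its parameter names are assumptions based on common usage rather than details taken from the article.

```python
# Minimal sketch: query MyGene.info for a gene symbol and print the hits.
# The v3 endpoint and the q/species parameters are assumed, not quoted
# from the article.
import requests

resp = requests.get("https://mygene.info/v3/query",
                    params={"q": "symbol:CDK2", "species": "human"},
                    timeout=30)
resp.raise_for_status()
for hit in resp.json().get("hits", []):
    print(hit.get("_id"), hit.get("symbol"), hit.get("name"))
```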

308 citations


Journal ArticleDOI
TL;DR: This work presents three Web services related to mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
Abstract: Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to develop chemical Web services, through which servers could expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use for Web applications. The ChemCalc project http://www.chemcalc.org demonstrates this approach: we present three Web services related to mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
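The style of JSON-returning service described above can be consumed with a few lines of client code, as in the sketch below; the endpoint path and the response field name are hypothetical, so consult http://www.chemcalc.org for the actual API.

```python
# Illustration of consuming a JSON-returning chemistry web service.
# The endpoint path, the mf parameter and the response field name are
# hypothetical placeholders, not the documented ChemCalc API.
import requests

def monoisotopic_mass(formula: str) -> float:
    resp = requests.get("https://www.chemcalc.org/chemcalc/mf",  # hypothetical path
                        params={"mf": formula}, timeout=30)
    resp.raise_for_status()
    data = resp.json()               # the service returns a JSON object
    return data["monoisotopicMass"]  # hypothetical field name

print(monoisotopic_mass("C8H10N4O2"))  # caffeine
```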

Journal ArticleDOI
TL;DR: This paper proposes a QoS ranking prediction framework for cloud services by taking advantage of the past service usage experiences of other consumers; experimental results show that the proposed approaches outperform competing approaches.
Abstract: Cloud computing is becoming popular. Building high-quality cloud applications is a critical research problem. QoS rankings provide valuable information for making optimal cloud service selection from a set of functionally equivalent service candidates. To obtain QoS values, real-world invocations on the service candidates are usually required. To avoid the time-consuming and expensive real-world service invocations, this paper proposes a QoS ranking prediction framework for cloud services by taking advantage of the past service usage experiences of other consumers. Our proposed framework requires no additional invocations of cloud services when making QoS ranking prediction. Two personalized QoS ranking prediction approaches are proposed to predict the QoS rankings directly. Comprehensive experiments are conducted employing real-world QoS data, including 300 distributed users and 500 real-world web services all over the world. The experimental results show that our approaches outperform other competing approaches.
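One way to rank candidates directly from other consumers' observations, rather than predicting absolute QoS values first, is to aggregate pairwise preferences. The sketch below shows that general idea with made-up data; it is not the authors' ranking algorithm.

```python
# Generic sketch of ranking-oriented aggregation (not the paper's algorithm):
# each neighbour contributes pairwise preferences between the services it has
# observed; candidates are then ordered by their net preference score.
from itertools import combinations

# Observed response times per user (lower is better); made-up data.
observed = {
    "u1": {"s1": 0.3, "s2": 1.1, "s3": 0.7},
    "u2": {"s1": 0.4, "s3": 0.9, "s4": 2.0},
}

def rank_for(candidates, neighbours=observed):
    score = {s: 0 for s in candidates}
    for obs in neighbours.values():
        for a, b in combinations(candidates, 2):
            if a in obs and b in obs:
                better, worse = (a, b) if obs[a] < obs[b] else (b, a)
                score[better] += 1
                score[worse] -= 1
    return sorted(candidates, key=lambda s: score[s], reverse=True)

print(rank_for(["s1", "s2", "s3", "s4"]))
```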

Proceedings ArticleDOI
25 Mar 2013
TL;DR: This paper applies existing methods for web optimization in a novel manner, such that these methods can be combined with unique knowledge that is only available at the edge (Fog) nodes to improve a user's web page rendering performance.
Abstract: In this paper, we consider web optimization within the Fog Computing context. We apply existing methods for web optimization in a novel manner, such that these methods can be combined with unique knowledge that is only available at the edge (Fog) nodes. More dynamic adaptation to the user's conditions (e.g., network status and the device's computing load) can also be accomplished with network edge specific knowledge. As a result, a user's web page rendering performance is improved beyond that achieved by simply applying those methods at the web server or CDNs.

Proceedings Article
14 Aug 2013
TL;DR: This work investigates the market for fraudulent Twitter accounts to monitor prices, availability, and fraud perpetrated by 27 merchants over the course of a 10-month period, and develops a classifier to retroactively detect several million fraudulent accounts sold via this marketplace.
Abstract: As web services such as Twitter, Facebook, Google, and Yahoo now dominate the daily activities of Internet users, cyber criminals have adapted their monetization strategies to engage users within these walled gardens. To facilitate access to these sites, an underground market has emerged where fraudulent accounts - automatically generated credentials used to perpetrate scams, phishing, and malware - are sold in bulk by the thousands. In order to understand this shadowy economy, we investigate the market for fraudulent Twitter accounts to monitor prices, availability, and fraud perpetrated by 27 merchants over the course of a 10-month period. We use our insights to develop a classifier to retroactively detect several million fraudulent accounts sold via this marketplace, 95% of which we disable with Twitter's help. During active months, the 27 merchants we monitor appeared responsible for registering 10-20% of all accounts later flagged for spam by Twitter, generating $127-459K for their efforts.

Proceedings ArticleDOI
02 Dec 2013
TL;DR: This paper presents an approach to the development of Smart Home applications by integrating Internet of Things (IoT) with Web services and Cloud computing, and implements three use cases to demonstrate the approach's feasibility and efficiency.
Abstract: Smart Home minimizes user's intervention in monitoring home settings and controlling home appliances. This paper presents an approach to the development of Smart Home applications by integrating Internet of Things (IoT) with Web services and Cloud computing. The approach focuses on: (1) embedding intelligence into sensors and actuators using Arduino platform, (2) networking smart things using Zigbee technology, (3) facilitating interactions with smart things using Cloud services, (4) improving data exchange efficiency using JSON data format. Moreover, we implement three use cases to demonstrate the approach's feasibility and efficiency, i.e., measuring home conditions, monitoring home appliances, and controlling home access.
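As an illustration of the JSON-based data exchange mentioned in point (4), a sensor node or gateway could publish readings to a cloud endpoint roughly as follows; the field names and the URL are placeholders, not the paper's implementation.

```python
# Sketch of a smart-home sensor reading published as JSON to a cloud service.
# Field names and the endpoint URL are illustrative placeholders only.
import time
import requests

reading = {
    "device_id": "livingroom-01",   # hypothetical identifier
    "timestamp": int(time.time()),
    "temperature_c": 22.5,
    "humidity_pct": 41,
    "light_on": False,
}

resp = requests.post("https://smarthome-cloud.example.org/api/readings",  # placeholder
                     json=reading, timeout=10)
print(resp.status_code)
```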

Journal ArticleDOI
TL;DR: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats, which supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics.
Abstract: Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required. Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations. Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl.
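Since the ontology is distributed in OWL, its concepts can be inspected with standard RDF tooling. The sketch below, which assumes the rdflib package and network access to the EDAM.owl release linked above, simply prints a handful of concept labels.

```python
# Sketch: load the EDAM OWL release linked above and print a few labels.
# Assumes the rdflib package is installed and edamontology.org is reachable.
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("http://edamontology.org/EDAM.owl", format="xml")

for i, (concept, label) in enumerate(g.subject_objects(RDFS.label)):
    print(concept, "->", label)
    if i >= 9:      # show only the first ten labels
        break
```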

Journal ArticleDOI
TL;DR: The BASECOL2012 database as mentioned in this paper is a repository of collisional data and a web service within the Virtual Atomic and Molecular Data Centre (VAMDC, http://www.vamdc.eu).
Abstract: The BASECOL2012 database is a repository of collisional data and a web service within the Virtual Atomic and Molecular Data Centre (VAMDC, http://www.vamdc.eu). It contains rate coefficients for the collisional excitation of rotational, ro-vibrational, vibrational, fine, and hyperfine levels of molecules by atoms, molecules, and electrons, as well as fine-structure excitation of some atoms that are relevant to interstellar and circumstellar astrophysical applications. Submissions of new published collisional rate coefficient sets are welcome, and they will be critically evaluated before inclusion in the database. In addition, BASECOL2012 provides spectroscopic data queried dynamically from various spectroscopic databases using the VAMDC technology. These spectroscopic data are conveniently matched to the in-house collisional excitation rate coefficients using the SPECTCOL software package (http://vamdc.eu/software), and the combined sets of data can be downloaded from the BASECOL2012 website. As a partner of the VAMDC, BASECOL2012 is accessible from the general VAMDC portal (http://portal.vamdc.eu) and from user tools such as SPECTCOL.

Proceedings ArticleDOI
23 Oct 2013
TL;DR: In this paper, the authors use the EDNS-client-subnet DNS extension to measure which clients a service maps to which of its serving sites and devise a novel technique that uses this mapping to geolocate servers by combining noisy information about client locations with speed-of-light constraints.
Abstract: Modern content-distribution networks both provide bulk content and act as "serving infrastructure" for web services in order to reduce user-perceived latency. Serving infrastructures such as Google's are now critical to the online economy, making it imperative to understand their size, geographic distribution, and growth strategies. To this end, we develop techniques that enumerate IP addresses of servers in these infrastructures, find their geographic location, and identify the association between clients and clusters of servers. While general techniques for server enumeration and geolocation can exhibit large error, our techniques exploit the design and mechanisms of serving infrastructure to improve accuracy. We use the EDNS-client-subnet DNS extension to measure which clients a service maps to which of its serving sites. We devise a novel technique that uses this mapping to geolocate servers by combining noisy information about client locations with speed-of-light constraints. We demonstrate that this technique substantially improves geolocation accuracy relative to existing approaches. We also cluster server IP addresses into physical sites by measuring RTTs and adapting the cluster thresholds dynamically. Google's serving infrastructure has grown dramatically over the ten months of our study, and we use our methods to chart its growth and understand its content serving strategy. We find that the number of Google serving sites has increased more than sevenfold, and most of the growth has occurred by placing servers in large and small ISPs across the world, not by expanding Google's backbone.
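The EDNS-client-subnet measurement technique can be sketched with the dnspython library: the same query is issued on behalf of different client prefixes, and the returned server addresses reveal the client-to-site mapping. The resolver address and prefixes below are illustrative, and this is only the general idea, not the authors' measurement pipeline.

```python
# Sketch of issuing DNS queries with the EDNS-client-subnet (ECS) option so
# that answers reflect a chosen client prefix. Resolver and prefix values are
# illustrative; this is not the authors' measurement code.
import dns.edns
import dns.message
import dns.query

def resolve_as_client(name: str, client_prefix: str, prefix_len: int = 24):
    """Return A records for `name` as seen by a client in `client_prefix`."""
    ecs = dns.edns.ECSOption(client_prefix, prefix_len)
    query = dns.message.make_query(name, "A", use_edns=0, options=[ecs])
    response = dns.query.udp(query, "8.8.8.8", timeout=5)
    return [rr.to_text() for rrset in response.answer for rr in rrset]

print(resolve_as_client("www.google.com", "198.51.100.0"))
```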

Journal ArticleDOI
TL;DR: WebProtégé is a lightweight ontology editor and knowledge acquisition tool for the Web that is accessible from any Web browser, has extensive support for collaboration, and has a highly customizable and pluggable user interface that can be adapted to any level of user expertise.
Abstract: In this paper, we present WebProtege, a lightweight ontology editor and knowledge acquisition tool for the Web. With the wide adoption of Web 2.0 platforms and the gradual adoption of ontologies and Semantic Web technologies in the real world, we need ontology-development tools that are better suited for the novel ways of interacting, constructing and consuming knowledge. Users today take Web-based content creation and online collaboration for granted. WebProtege integrates these features as part of the ontology development process itself. We tried to lower the entry barrier to ontology development by providing a tool that is accessible from any Web browser, has extensive support for collaboration, and has a highly customizable and pluggable user interface that can be adapted to any level of user expertise. The declarative user interface enabled us to create custom knowledge-acquisition forms tailored for domain experts. We built WebProtege using the existing Protege infrastructure, which supports collaboration on the back end, and the Google Web Toolkit for the front end. The generic and extensible infrastructure allowed us to easily deploy WebProtege in production settings for several projects. We present the main features of WebProtege and its architecture and describe briefly some of its uses for real-world projects. WebProtege is free and open source. An online demo is available at http://webprotege.stanford.edu.

Journal ArticleDOI
TL;DR: This work proposes a novel collaborative filtering algorithm designed for large-scale web service recommendation that employs the characteristic of QoS and achieves considerable improvement in recommendation accuracy.
Abstract: With the proliferation of web services, an effective QoS-based approach to service recommendation is becoming more and more important. Although service recommendation has been studied in the recent literature, the performance of existing approaches is not satisfactory, since (1) previous approaches fail to consider the QoS variance according to users' locations; and (2) previous recommender systems are all black boxes providing limited information on the performance of the service candidates. In this paper, we propose a novel collaborative filtering algorithm designed for large-scale web service recommendation. Different from previous work, our approach employs the characteristic of QoS and achieves considerable improvement in recommendation accuracy. To help service users better understand the rationale of the recommendation and remove some of the mystery, we use a recommendation visualization technique to show how a recommendation is grouped with other choices. Comprehensive experiments are conducted using more than 1.5 million QoS records of real-world web service invocations. The experimental results show the efficiency and effectiveness of our approach.

Book
07 Nov 2013
TL;DR: Designing the Internet of Things helps software engineers, web designers, product designers, and electronics engineers start designing products using the Internet-of-Things approach and explains how to combine sensors, servos, robotics, Arduino chips, and more with various networks or the Internet to create interactive, cutting-edge devices.
Abstract: Take your idea from concept to production with this unique guide. Whether it's called physical computing, ubiquitous computing, or the Internet of Things, it's a hot topic in technology: how to channel your inner Steve Jobs and successfully combine hardware, embedded software, web services, electronics, and cool design to create cutting-edge devices that are fun, interactive, and practical. If you'd like to create the next must-have product, this unique book is the perfect place to start. Both a creative and practical primer, it explores the platforms you can use to develop hardware or software, discusses design concepts that will make your products eye-catching and appealing, and shows you ways to scale up from a single prototype to mass production. Helps software engineers, web designers, product designers, and electronics engineers start designing products using the Internet-of-Things approach. Explains how to combine sensors, servos, robotics, Arduino chips, and more with various networks or the Internet to create interactive, cutting-edge devices. Provides an overview of the necessary steps to take your idea from concept through production. If you'd like to design for the future, Designing the Internet of Things is a great place to start.

Journal ArticleDOI
TL;DR: SyntTax is a synteny web service designed to take full advantage of the large amount of archaeal and bacterial genomes by linking them through taxonomic relationships, providing intuitive access to all completely sequenced prokaryotes.
Abstract: The study of the conservation of gene order or synteny constitutes a powerful methodology to assess the orthology of genomic regions and to predict functional relationships between genes. The exponential growth of microbial genomic databases is expected to improve synteny predictions significantly. Paradoxically, this genomic data plethora, without information on organisms' relatedness, could impair the performance of synteny analysis programs. In this work, I present SyntTax, a synteny web service designed to take full advantage of the large amount of archaeal and bacterial genomes by linking them through taxonomic relationships. SyntTax incorporates a full hierarchical taxonomic tree allowing intuitive access to all completely sequenced prokaryotes. Single or multiple organisms can be chosen on the basis of their lineage by selecting the corresponding rank nodes in the tree. The synteny methodology is built upon our previously described Absynte algorithm with several additional improvements. SyntTax aims to produce robust syntenies by providing prompt access to the taxonomic relationships connecting all completely sequenced microbial genomes. The reduction in redundancy offered by lineage selection presents the benefit of increasing accuracy while reducing computation time. This web tool was used to resolve successfully several conserved complex gene clusters described in the literature. In addition, particular features of SyntTax permit the confirmation of the involvement of the four components constituting the E. coli YgjD multiprotein complex responsible for tRNA modification. By analyzing the clustering evolution of alternative gene fusions, new proteins potentially interacting with this complex could be proposed. The web service is available at http://archaea.u-psud.fr/SyntTax.

01 Jan 2013
TL;DR: A proposed methodology, "Securing cloud from DDOS attacks using intrusion detection system in virtual machine", detects different kinds of vulnerabilities and mitigates HTTP and XML DDoS attacks against cloud services.
Abstract: Cloud Computing is a newly emerged form of distributed computing. It provides services to consumers on demand through three layers, namely Software as a Service, Platform as a Service and Infrastructure as a Service, with the help of web services in a multi-tenant environment, which places a strong emphasis on API security. These services can easily invite attackers to attack via SaaS, PaaS and IaaS. Since resources are gathered in one place in cloud data centers, DDoS attacks such as HTTP and XML flooding are dangerous in this environment, cause harmful effects, and affect all consumers at the same time. These attacks can be detected and resolved by the proposed methodology, "Securing cloud from DDOS attacks using intrusion detection system in virtual machine". In the proposed system, different kinds of vulnerabilities are detected. A SOAP request carries the communication between the client and the service provider and is sent to the cloud through the Service Oriented Traceback Architecture, which contains a proxy that marks incoming packets with a source message identifier to identify the real client. The SOAP message then travels via an XDetector, which monitors and filters DDoS attacks such as HTTP and XML DDoS attacks. Finally, the filtered, legitimate client message is transferred to the cloud service provider and the corresponding service is delivered to the client in a secure manner.

Proceedings ArticleDOI
27 Apr 2013
TL;DR: The principles driving design mining, the implementation of the Webzeitgeist architecture, and the new class of data-driven design applications it enables are described.
Abstract: Advances in data mining and knowledge discovery have transformed the way Web sites are designed. However, while visual presentation is an intrinsic part of the Web, traditional data mining techniques ignore render-time page structures and their attributes. This paper introduces design mining for the Web: using knowledge discovery techniques to understand design demographics, automate design curation, and support data-driven design tools. This idea is manifest in Webzeitgeist, a platform for large-scale design mining comprising a repository of over 100,000 Web pages and 100 million design elements. This paper describes the principles driving design mining, the implementation of the Webzeitgeist architecture, and the new class of data-driven design applications it enables.

Journal ArticleDOI
01 Feb 2013
TL;DR: This study investigates the role of trust in mobile service adoption and empirically examines the trust transfer mechanism; trust in web services and two relationship-relevant factors, namely functional consistency and perceived entitativity, are proposed as predictors of trust in mobile services.
Abstract: Success in web services does not guarantee success in corresponding mobile services. To understand mobile service adoption behavior in the context of the web-to-mobile service transition, this study, taking mobile eWOM services as an example, investigates the role of trust in mobile service adoption and empirically examines the trust transfer mechanism. Specifically, trust in web services and two relationship-relevant factors, namely functional consistency and perceived entitativity, are proposed as the predictors of trust in mobile services. A field survey of 235 mobile eWOM service users is conducted to test the research model and hypotheses. The key findings include: (1) trust in mobile services positively influences intention to use mobile services; (2) trust in web services, functional consistency and perceived entitativity positively influence trust in mobile services; (3) functional consistency positively influences perceived entitativity. Limitations, theoretical and practical implications are also discussed.

Proceedings ArticleDOI
13 May 2013
TL;DR: This work proposes a novel cloud resource auto-scaling scheme at the virtual machine (VM) level for web application providers that achieves resource auto-scaling with an optimal cost-latency trade-off, as well as low SLA violations.
Abstract: In the on-demand cloud environment, web application providers have the potential to scale virtual resources up or down to achieve cost-effective outcomes. True elasticity and cost-effectiveness in the pay-per-use cloud business model, however, have not yet been achieved. To address this challenge, we propose a novel cloud resource auto-scaling scheme at the virtual machine (VM) level for web application providers. The scheme automatically predicts the number of web requests and discovers an optimal cloud resource demand with a cost-latency trade-off. Based on this demand, the scheme makes a resource scaling decision that is up or down or NOP (no operation) in each time-unit re-allocation. We have implemented the scheme on the Amazon cloud platform and evaluated it using three real-world web log datasets. Our experiment results demonstrate that the proposed scheme achieves resource auto-scaling with an optimal cost-latency trade-off, as well as low SLA violations.
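The decision loop of such a scheme can be illustrated with a toy sketch: forecast the next interval's request rate, translate it into a VM count, and emit scale-up, scale-down or NOP. The moving-average forecast and the per-VM capacity below are deliberately simplistic assumptions, not the paper's prediction model.

```python
# Toy auto-scaling decision loop: forecast demand, map it to a VM count,
# and decide scale-up / scale-down / NOP. All numbers are assumed.
import math

REQS_PER_VM = 400          # assumed per-VM capacity (requests per second)

def forecast(history, window=3):
    """Moving-average forecast of the next interval's request rate."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def scaling_decision(history, current_vms):
    predicted = forecast(history)
    needed = max(1, math.ceil(predicted / REQS_PER_VM))
    if needed > current_vms:
        return ("scale-up", needed)
    if needed < current_vms:
        return ("scale-down", needed)
    return ("NOP", current_vms)

print(scaling_decision([1200, 1500, 1900, 2400], current_vms=4))
```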

Journal ArticleDOI
TL;DR: This work advances the idea of service-oriented modeling by presenting a design for a modeling service that builds from the Open Geospatial Consortium Web Processing Service (WPS) protocol, and demonstrates how the WPS protocol can be used to create modeling services, and how these modeling services can be brought into workflow environments using generic client-side code.
Abstract: Environmental modeling often requires the use of multiple data sources, models, and analysis routines coupled into a workflow to answer a research question. Coupling these computational resources can be accomplished using various tools, each requiring the developer to follow a specific protocol to ensure that components are linkable. Despite these coupling tools, it is not always straightforward to create a modeling workflow due to platform dependencies, computer architecture requirements, and programming language incompatibilities. A service-oriented approach that enables individual models to operate and interact with others using web services is one method for overcoming these challenges. This work advances the idea of service-oriented modeling by presenting a design for a modeling service that builds from the Open Geospatial Consortium (OGC) Web Processing Service (WPS) protocol. We demonstrate how the WPS protocol can be used to create modeling services, and then demonstrate how these modeling services can be brought into workflow environments using generic client-side code. We implemented this approach within the HydroModeler environment, a model coupling tool built on the Open Modeling Interface standard (version 1.4), and show how a hydrology model can be hosted as a WPS web service and used within a client-side workflow. The primary advantage of this approach is that the server-side software follows an established standard that can be leveraged and reused within multiple workflow environments and decision support systems.
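A client-side call to a model exposed through WPS can be as simple as one HTTP request using the standard key-value-pair encoding, as in the sketch below; the server URL, process identifier and inputs are placeholders, not the HydroModeler service described in the article.

```python
# Sketch of invoking a model published as an OGC WPS process via KVP encoding.
# The base URL, process identifier and inputs are placeholders; only the
# service/version/request/identifier/datainputs keys follow WPS 1.0.0 usage.
import requests

WPS_URL = "http://example.org/wps"            # placeholder server

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "RunoffModel",              # hypothetical process id
    "datainputs": "precipitation=42.0;area=1.5",
}
resp = requests.get(WPS_URL, params=params, timeout=60)
resp.raise_for_status()
print(resp.text[:500])   # WPS responses are XML documents
```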

Journal ArticleDOI
TL;DR: The scope and architecture required to support uncertainty management as developed in UncertWeb, including tools that support elicitation, aggregation/disaggregation, visualisation and uncertainty/sensitivity analysis, are described.
Abstract: Web-based distributed modelling architectures are gaining increasing recognition as potentially useful tools to build holistic environmental models, combining individual components in complex workflows. However, existing web-based modelling frameworks currently offer no support for managing uncertainty. On the other hand, the rich array of modelling frameworks and simulation tools which support uncertainty propagation in complex and chained models typically lack the benefits of web based solutions such as ready publication, discoverability and easy access. In this article we describe the developments within the UncertWeb project which are designed to provide uncertainty support in the context of the proposed 'Model Web'. We give an overview of uncertainty in modelling, review uncertainty management in existing modelling frameworks and consider the semantic and interoperability issues raised by integrated modelling. We describe the scope and architecture required to support uncertainty management as developed in UncertWeb. This includes tools which support elicitation, aggregation/disaggregation, visualisation and uncertainty/sensitivity analysis. We conclude by highlighting areas that require further research and development in UncertWeb, such as model calibration and inference within complex environmental models.

Proceedings ArticleDOI
04 Nov 2013
TL;DR: This paper uses the browsers of a collection of web users to record their interactions with websites, as well as the redirections they go through to reach their final destinations, and analyzes how a large and diverse set of web browsers reach these pages to detect malicious pages.
Abstract: The web is one of the most popular vectors to spread malware. Attackers lure victims to visit compromised web pages or entice them to click on malicious links. These victims are redirected to sites that exploit their browsers or trick them into installing malicious software using social engineering. In this paper, we tackle the problem of detecting malicious web pages from a novel angle. Instead of looking at particular features of a (malicious) web page, we analyze how a large and diverse set of web browsers reach these pages. That is, we use the browsers of a collection of web users to record their interactions with websites, as well as the redirections they go through to reach their final destinations. We then aggregate the different redirection chains that lead to a specific web page and analyze the characteristics of the resulting redirection graph. As we will show, these characteristics can be used to detect malicious pages. We argue that our approach is less prone to evasion than previous systems, allows us to also detect scam pages that rely on social engineering rather than only those that exploit browser vulnerabilities, and can be implemented efficiently. We developed a system, called SpiderWeb, which implements our proposed approach. We show that this system works well in detecting web pages that deliver malware.
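The aggregation step, building a redirection graph from many users' chains and extracting features from it, can be sketched as follows; the chains are made up and the features shown are generic illustrations, not the specific ones used by SpiderWeb.

```python
# Illustrative aggregation of redirection chains into a graph plus a few
# simple graph features; these are not SpiderWeb's actual features.
from urllib.parse import urlparse
import networkx as nx

chains = [   # made-up redirection chains ending at the same landing page
    ["http://a.example/ad", "http://t.example/r?x=1", "http://landing.example/"],
    ["http://b.example/post", "http://t.example/r?x=2", "http://landing.example/"],
]

graph = nx.DiGraph()
for chain in chains:
    for src, dst in zip(chain, chain[1:]):
        graph.add_edge(src, dst)

landing = "http://landing.example/"
features = {
    "nodes": graph.number_of_nodes(),
    "max_chain_length": max(len(c) for c in chains),
    "distinct_hosts": len({urlparse(u).netloc for u in graph.nodes}),
    "landing_in_degree": graph.in_degree(landing),
}
print(features)
```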

Proceedings ArticleDOI
19 Apr 2013
TL;DR: It is argued that certificates - with improvements to the handshake - are a viable method of authentication in many network scenarios and three design ideas to reduce the overheads of the DTLS handshake are proposed.
Abstract: The vision of the Internet of Things considers smart objects in the physical world as first-class citizens of the digital world. Especially IP technology and RESTful web services on smart objects promise simple interactions with Internet services in the Web of Things, e.g., for building automation or in e-health scenarios. Peer authentication and secure data transmission are vital aspects in many of these scenarios to prevent leakage of personal information and harmful actuating tasks. While standard security solutions exist for traditional IP networks, the constraints of smart objects demand for more lightweight security mechanisms. Thus, the use of certificates for peer authentication is predominantly considered impracticable. In this paper, we investigate if this assumption is valid. To this end, we present preliminary overhead estimates for the certificate-based DTLS handshake and argue that certificates - with improvements to the handshake - are a viable method of authentication in many network scenarios. We propose three design ideas to reduce the overheads of the DTLS handshake. These ideas are based on (i) pre-validation, (ii) session resumption, and (iii) handshake delegation. We qualitatively analyze the expected overhead reductions and discuss their applicability.
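The flavor of such overhead estimates can be conveyed with a back-of-the-envelope tally of handshake payload sizes for a full certificate-based handshake versus an abbreviated one using session resumption; every byte count below is an assumed placeholder, not a figure from the paper.

```python
# Back-of-the-envelope DTLS handshake size comparison. All byte counts are
# assumed placeholders, not measurements from the paper.
full_handshake = {
    "ClientHello": 100,
    "ServerHello": 80,
    "Certificate": 500,          # certificate size dominates (assumed)
    "ServerKeyExchange": 150,
    "ClientKeyExchange": 140,
    "Finished (x2)": 80,
}
resumed_handshake = {            # abbreviated handshake via session resumption
    "ClientHello (with session id)": 130,
    "ServerHello": 80,
    "Finished (x2)": 80,
}

for name, msgs in [("full", full_handshake), ("resumed", resumed_handshake)]:
    print(f"{name:8s} handshake ~ {sum(msgs.values())} bytes")
```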