
Showing papers on "Web service published in 2011"


Journal ArticleDOI
TL;DR: This work focuses on minimizing search times and on rapidly displaying tabular results, regardless of the number of matches found, and develops graphical summaries of the search results to provide a quick, intuitive appraisal of them.
Abstract: HMMER is a software suite for protein sequence similarity searches using probabilistic methods. Previously, HMMER has mainly been available only as a computationally intensive UNIX command-line tool, restricting its use. Recent advances in the software, HMMER3, have resulted in a 100-fold speed gain relative to previous versions. It is now feasible to make efficient profile hidden Markov model (profile HMM) searches via the web. A HMMER web server (http://hmmer.janelia.org) has been designed and implemented such that most protein database searches return within a few seconds. Methods are available for searching either a single protein sequence, multiple protein sequence alignment or profile HMM against a target sequence database, and for searching a protein sequence against Pfam. The web server is designed to cater to a range of different user expertise and accepts batch uploading of multiple queries at once. All search methods are also available as RESTful web services, thereby allowing them to be readily integrated as remotely executed tasks in locally scripted workflows. We have focused on minimizing search times and on the rapid display of tabular results, regardless of the number of matches found, and have developed graphical summaries of the search results to provide a quick, intuitive appraisal of them.
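
Since the abstract notes that all search methods are exposed as RESTful web services for use in scripted workflows, a minimal sketch of how such a search might be driven from a script is shown below. The endpoint path, parameter names and response layout are assumptions made for illustration, not the documented contract of the HMMER server.

```python
# Minimal sketch: submitting a phmmer-style search to a HMMER-like REST service.
# The endpoint and field names below are assumptions for illustration only.
import requests

SEARCH_URL = "http://hmmer.janelia.org/search/phmmer"  # assumed endpoint

sequence = ">query\nMKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQV"

response = requests.post(
    SEARCH_URL,
    data={"seqdb": "pdb", "seq": sequence},   # assumed parameter names
    headers={"Accept": "application/json"},   # ask for machine-readable output
    timeout=60,
)
response.raise_for_status()
hits = response.json()
print(hits)  # a tabular summary could be built from this structure
```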

4,159 citations


BookDOI
01 Jan 2011

1,549 citations


Journal ArticleDOI
TL;DR: The current version of iTOL introduces numerous new features and greatly expands the number of supported data set types.
Abstract: Interactive Tree Of Life (http://itol.embl.de) is a web-based tool for the display, manipulation and annotation of phylogenetic trees. It is freely available and open to everyone. In addition to classical tree viewer functions, iTOL offers many novel ways of annotating trees with various additional data. The current version introduces numerous new features and greatly expands the number of supported data set types. Trees can be interactively manipulated and edited. A free personal account system is available, providing management and sharing of trees in user-defined workspaces and projects. Export to various bitmap and vector graphics formats is supported. A batch access interface is available for programmatic access or inclusion of interactive trees into other web services.
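
The batch access interface mentioned above suggests that trees can be uploaded and retrieved programmatically. A hedged sketch of such an upload follows; the CGI path and form-field names are assumptions used only to illustrate the pattern.

```python
# Sketch of a batch upload to an iTOL-style service; URL and field names are assumptions.
import requests

UPLOAD_URL = "http://itol.embl.de/batch_uploader.cgi"  # assumed batch endpoint

with open("my_tree.nwk", "rb") as tree_file:
    reply = requests.post(
        UPLOAD_URL,
        files={"treeFile": tree_file},         # assumed field name for the Newick tree
        data={"treeName": "example_tree"},     # assumed optional metadata
        timeout=60,
    )
reply.raise_for_status()
print(reply.text)  # on success the service would return an identifier for the uploaded tree
```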

1,446 citations


Journal ArticleDOI
TL;DR: SwissDock, a web server dedicated to the docking of small molecules on target proteins, is presented, based on the EADock DSS engine, combined with setup scripts for curating common problems and for preparing both the target protein and the ligand input files.
Abstract: Most life science processes involve, at the atomic scale, recognition between two molecules. The prediction of such interactions at the molecular level, by so-called docking software, is a non-trivial task. Docking programs have a wide range of applications ranging from protein engineering to drug design. This article presents SwissDock, a web server dedicated to the docking of small molecules on target proteins. It is based on the EADock DSS engine, combined with setup scripts for curating common problems and for preparing both the target protein and the ligand input files. An efficient Ajax/HTML interface was designed and implemented so that scientists can easily submit dockings and retrieve the predicted complexes. For automated docking tasks, a programmatic SOAP interface has been set up and template programs can be downloaded in Perl, Python and PHP. The web site also provides an access to a database of manually curated complexes, based on the Ligand Protein Database. A wiki and a forum are available to the community to promote interactions between users. The SwissDock web site is available online at http://www.swissdock.ch. We believe it constitutes a step toward generalizing the use of docking tools beyond the traditional molecular modeling community.
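
Because the abstract advertises a programmatic SOAP interface with downloadable Perl, Python and PHP templates, a rough Python sketch of that usage pattern is given below using the zeep SOAP client. The WSDL location and the operation names are hypothetical placeholders, not SwissDock's published interface.

```python
# Hypothetical sketch of driving a SOAP docking service with the zeep library.
# The WSDL URL and operation names are placeholders, not SwissDock's actual API.
from zeep import Client

client = Client("http://www.swissdock.ch/soap?wsdl")    # placeholder WSDL location

with open("target.pdb") as target, open("ligand.mol2") as ligand:
    job_id = client.service.submitDocking(               # placeholder operation name
        protein=target.read(),
        ligand=ligand.read(),
    )

status = client.service.getJobStatus(job_id)             # placeholder operation name
print(job_id, status)
```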

1,305 citations


Proceedings ArticleDOI
07 Sep 2011
TL;DR: DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs, is developed, and results are evaluated in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of the system.
Abstract: Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use.
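
DBpedia Spotlight is described as being deployed as a freely usable web service with configurable confidence and prominence thresholds. The sketch below shows how an annotation request could look; the host, path, parameter names and JSON keys follow the commonly used REST annotation pattern but should be treated as assumptions here.

```python
# Sketch of annotating text with a DBpedia Spotlight-style endpoint.
# Host, path, parameters and response keys are assumptions for illustration.
import requests

ANNOTATE_URL = "http://spotlight.dbpedia.org/rest/annotate"  # assumed endpoint

params = {
    "text": "Berlin is the capital of Germany.",
    "confidence": 0.4,   # disambiguation confidence threshold
    "support": 20,       # minimum prominence (inlink count)
}
reply = requests.get(ANNOTATE_URL, params=params,
                     headers={"Accept": "application/json"}, timeout=30)
reply.raise_for_status()
for resource in reply.json().get("Resources", []):
    print(resource["@surfaceForm"], "->", resource["@URI"])
```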

1,228 citations


Journal ArticleDOI
TL;DR: A web-based interface that enables biologists to browse and search a comprehensive collection of pathways from multiple sources represented in a common language, a download site that provides integrated bulk sets of pathway information in standard or convenient formats and a web service that software developers can use to conveniently query and access all data.
Abstract: Pathway Commons (http://www.pathwaycommons.org) is a collection of publicly available pathway data from multiple organisms. Pathway Commons provides a web-based interface that enables biologists to browse and search a comprehensive collection of pathways from multiple sources represented in a common language, a download site that provides integrated bulk sets of pathway information in standard or convenient formats and a web service that software developers can use to conveniently query and access all data. Database providers can share their pathway data via a common repository. Pathways include biochemical reactions, complex assembly, transport and catalysis events and physical interactions involving proteins, DNA, RNA, small molecules and complexes. Pathway Commons aims to collect and integrate all public pathway data available in standard formats. Pathway Commons currently contains data from nine databases with over 1400 pathways and 687,000 interactions and will be continually expanded and updated.
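
The abstract mentions a web service that developers can use to query and access the pathway data. A hedged example of a keyword search follows; the path and parameter names are illustrative assumptions modelled on a typical REST search endpoint, not the documented Pathway Commons API.

```python
# Sketch of a keyword search against a Pathway Commons-style web service.
# The path and parameters are illustrative assumptions, not a documented contract.
import requests

SEARCH_URL = "http://www.pathwaycommons.org/pc/webservice.do"  # assumed endpoint

params = {
    "cmd": "search",   # assumed command parameter
    "q": "BRCA1",      # free-text query
    "output": "xml",   # assumed output format switch
}
reply = requests.get(SEARCH_URL, params=params, timeout=60)
reply.raise_for_status()
print(reply.text[:500])  # first part of the result document
```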

1,095 citations


Journal ArticleDOI
TL;DR: This paper proposes a collaborative filtering approach for predicting QoS values of Web services and making Web service recommendations by taking advantage of the past usage experiences of service users, and shows that the algorithm achieves better prediction accuracy than other approaches.
Abstract: With the increasing presence and adoption of Web services on the World Wide Web, Quality-of-Service (QoS) is becoming important for describing the nonfunctional characteristics of Web services. In this paper, we present a collaborative filtering approach for predicting QoS values of Web services and making Web service recommendations by taking advantage of the past usage experiences of service users. We first propose a user-collaborative mechanism for collecting past Web service QoS information from different service users. Then, based on the collected QoS data, a collaborative filtering approach is designed to predict Web service QoS values. Finally, a prototype called WSRec is implemented in Java and deployed on the Internet for conducting real-world experiments. To study the QoS value prediction accuracy of our approach, 1.5 million Web service invocation results are collected from 150 service users in 24 countries on 100 real-world Web services in 22 countries. The experimental results show that our algorithm achieves better prediction accuracy than other approaches. Our Web service QoS data set is publicly released for future research.
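
The approach described is neighborhood-based collaborative filtering over a user-service QoS matrix. The sketch below illustrates the general technique (Pearson similarity between users, then a similarity-weighted deviation from the target user's mean); it is a generic illustration of the idea, not the authors' exact WSRec algorithm.

```python
# Generic user-based collaborative filtering for QoS prediction (illustrative only).
import numpy as np

def predict_qos(qos, user, service, k=2):
    """Predict qos[user, service] from the k most similar users; NaN marks missing values."""
    mask = ~np.isnan(qos)
    sims = []
    for other in range(qos.shape[0]):
        if other == user:
            continue
        common = mask[user] & mask[other]
        if common.sum() < 2 or not mask[other, service]:
            continue
        sim = np.corrcoef(qos[user, common], qos[other, common])[0, 1]
        if not np.isnan(sim):
            sims.append((sim, other))
    sims.sort(reverse=True)
    top = sims[:k]
    if not top:
        return np.nanmean(qos[user])  # fall back to the user's own mean
    num = sum(s * (qos[o, service] - np.nanmean(qos[o])) for s, o in top)
    den = sum(abs(s) for s, _ in top)
    return np.nanmean(qos[user]) + num / den

# Toy response-time matrix (seconds); rows are users, columns are services.
qos = np.array([[0.3, 1.2, np.nan],
                [0.4, 1.1, 2.0],
                [2.5, 0.9, 1.8]])
print(round(predict_qos(qos, user=0, service=2), 3))
```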

741 citations


Journal ArticleDOI
TL;DR: The National Center for Biomedical Ontology (NCBO) has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies via the NCBO Web services.
Abstract: The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
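
The NCBO Web services are REST-based, so ontology terms and their metadata can be retrieved over plain HTTP. A hedged sketch follows; the base URL and query parameters are assumptions, and real use would require whatever API key or service version BioPortal expects.

```python
# Sketch of querying an NCBO/BioPortal-style REST service for a term.
# Path and parameter names are assumptions for illustration.
import requests

SEARCH_URL = "http://rest.bioontology.org/bioportal/search/"  # assumed base URL

params = {
    "query": "melanoma",   # term to look up across the ontology library
    "isexactmatch": 1,     # assumed flag restricting to exact matches
    # a real request would likely also need an API key parameter here
}
reply = requests.get(SEARCH_URL, params=params, timeout=30)
reply.raise_for_status()
print(reply.text[:500])
```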

692 citations


Journal ArticleDOI
TL;DR: A two-step approach has been developed: P2N file(s) are first prepared to rigorously define key elements such as atom names, topology and chemical equivalencing needed when building a force field library, and are then used to derive molecular electrostatic potential-based charges embedded in force field libraries.
Abstract: R.E.D. Server is a unique, open web service designed to derive non-polarizable RESP and ESP charges and to build force field libraries for new molecules/molecular fragments. It gives computational biologists the means to rigorously derive molecular electrostatic potential-based charges embedded in force field libraries that are ready to be used in force field development, charge validation and molecular dynamics simulations. R.E.D. Server interfaces quantum mechanics programs, the RESP program and the latest version of the R.E.D. tools. A two-step approach has been developed. The first step consists of preparing P2N file(s) to rigorously define key elements such as atom names, topology and chemical equivalencing needed when building a force field library. Then, the P2N files are used to derive RESP or ESP charges embedded in force field libraries in the Tripos mol2 format. In complex cases, an entire set of force field libraries or a force field topology database is generated. Other features developed in R.E.D. Server include help services, a demonstration, tutorials, frequently asked questions, Jmol-based tools useful to construct PDB input files and parse R.E.D. Server outputs, as well as a graphical queuing system allowing any user to check the status of R.E.D. Server jobs.

636 citations


Journal ArticleDOI
TL;DR: The next generation of the RCSB PDB web site, as described here, provides a rich resource for research and education and enables a range of new possibilities to analyze and understand structure data.
Abstract: The RCSB Protein Data Bank (RCSB PDB) web site (http://www.pdb.org) has been redesigned to increase usability and to cater to a larger and more diverse user base. This article describes key enhancements and new features that fall into the following categories: (i) query and analysis tools for chemical structure searching, query refinement, tabulation and export of query results; (ii) web site customization and new structure alerts; (iii) pair-wise and representative protein structure alignments; (iv) visualization of large assemblies; (v) integration of structural data with the open access literature and binding affinity data; and (vi) web services and web widgets to facilitate integration of PDB data and tools with other resources. These improvements enable a range of new possibilities to analyze and understand structure data. The next generation of the RCSB PDB web site, as described here, provides a rich resource for research and education.
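
Category (vi) above mentions web services for integrating PDB data and tools with other resources. A minimal sketch of fetching an entry over HTTP is shown below; the download path is an assumption used only for illustration of that kind of integration.

```python
# Sketch of retrieving a PDB entry over HTTP; the URL pattern is an assumption.
import requests

pdb_id = "1CRN"
url = f"http://www.pdb.org/pdb/files/{pdb_id}.pdb"  # assumed download path

reply = requests.get(url, timeout=60)
reply.raise_for_status()
with open(f"{pdb_id}.pdb", "w") as handle:
    handle.write(reply.text)
print(f"saved {pdb_id}.pdb ({len(reply.text)} bytes)")
```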

598 citations


Journal ArticleDOI
Abstract: Web apps are cheaper to develop and deploy than native apps, but can they match the native user experience?

Proceedings ArticleDOI
22 May 2011
TL;DR: It is shown that Monarch can provide accurate, real-time protection, but that the underlying characteristics of spam do not generalize across web services, and the distinctions between email and Twitter spam are explored.
Abstract: On the heels of the widespread adoption of web services such as social networks and URL shorteners, scams, phishing, and malware have become regular threats. Despite extensive research, email-based spam filtering techniques generally fall short for protecting other web services. To better address this need, we present Monarch, a real-time system that crawls URLs as they are submitted to web services and determines whether the URLs direct to spam. We evaluate the viability of Monarch and the fundamental challenges that arise due to the diversity of web service spam. We show that Monarch can provide accurate, real-time protection, but that the underlying characteristics of spam do not generalize across web services. In particular, we find that spam targeting email qualitatively differs in significant ways from spam campaigns targeting Twitter. We explore the distinctions between email and Twitter spam, including the abuse of public web hosting and redirector services. Finally, we demonstrate Monarch's scalability, showing our system could protect a service such as Twitter -- which needs to process 15 million URLs/day -- for a bit under $800/day.

Proceedings ArticleDOI
28 Mar 2011
TL;DR: A web service that tracks political memes in Twitter and helps detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections is demonstrated.
Abstract: Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We demonstrate a web service that tracks political memes in Twitter and helps detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We also present some cases of abusive behaviors uncovered by our service. Our web service is based on an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events.

Journal ArticleDOI
01 Mar 2011-Sensors
TL;DR: The recent developments of the new generation of the Sensor Web Enablement specification framework are illustrated and related to other emerging concepts such as the Web of Things, and challenges and resulting future work topics for research on Sensor Web Enablement are pointed out.
Abstract: Many sensor networks have been deployed to monitor Earth’s environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web reflects such a kind of infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement.
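
The Sensor Web Enablement framework standardizes web service interfaces such as the Sensor Observation Service (SOS). A hedged sketch of a key-value-pair GetObservation request is shown below; the service URL and the offering/property identifiers are placeholders, since every SOS instance defines its own.

```python
# Sketch of an OGC SOS GetObservation request using key-value-pair encoding.
# The endpoint and identifiers are placeholders; a real SOS defines its own offerings.
import requests

SOS_URL = "http://example.org/sos"  # placeholder SOS endpoint

params = {
    "service": "SOS",
    "request": "GetObservation",
    "version": "2.0.0",
    "offering": "network_temperature",      # placeholder offering id
    "observedProperty": "air_temperature",  # placeholder property id
    "responseFormat": "http://www.opengis.net/om/2.0",
}
reply = requests.get(SOS_URL, params=params, timeout=60)
reply.raise_for_status()
print(reply.text[:300])  # O&M-encoded observations would be parsed from here
```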

Journal ArticleDOI
01 Jan 2011-Database
TL;DR: This study reviews 28 Web tools that provide literature search services comparable to PubMed, highlights their respective innovations, compares them to the PubMed system and to one another, and discusses directions for future development.
Abstract: The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search

Proceedings ArticleDOI
02 Nov 2011
TL;DR: This paper presents results on app usage at a national level using anonymized network measurements from a tier-1 cellular carrier in the U.S. and identifies traffic from distinct marketplace apps based on HTTP signatures and presents aggregate results on their spatial and temporal prevalence, locality, and correlation.
Abstract: Smartphone users are increasingly shifting to using apps as "gateways" to Internet services rather than traditional web browsers. App marketplaces for iOS, Android, and Windows Phone platforms have made it attractive for developers to deploy apps and easy for users to discover and start using many network-enabled apps quickly. For example, it was recently reported that the iOS AppStore has more than 350K apps and more than 10 billion downloads. Furthermore, the appearance of tablets and mobile devices with other form factors, which also use these marketplaces, has increased the diversity in apps and their user population. Despite the increasing importance of apps as gateways to network services, we have a much sparser understanding of how, where, and when they are used compared to traditional web services, particularly at scale. This paper takes a first step in addressing this knowledge gap by presenting results on app usage at a national level using anonymized network measurements from a tier-1 cellular carrier in the U.S. We identify traffic from distinct marketplace apps based on HTTP signatures and present aggregate results on their spatial and temporal prevalence, locality, and correlation.

Journal ArticleDOI
TL;DR: This work proposes a novel class of algorithms called Join-Idle-Queue (JIQ) for distributed load balancing in large systems, which effectively reduces the system load and produces a 30-fold reduction in queueing overhead compared to Power-of-Two at medium to high load.
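
Only the TL;DR survives for this entry, but the Join-Idle-Queue idea it describes can be sketched: each dispatcher keeps a queue of idle servers, servers register with a random dispatcher when they become idle, and an arriving job goes to a registered idle server when one is available, otherwise to a random server. The toy simulation below is one reading of that description, not the paper's algorithm or evaluation.

```python
# Toy discrete-time sketch of Join-Idle-Queue load balancing (illustrative only).
import random
from collections import deque

NUM_SERVERS, NUM_DISPATCHERS, STEPS, ARRIVALS_PER_STEP = 20, 4, 1000, 15

servers = [0] * NUM_SERVERS         # outstanding jobs per server
registered = [False] * NUM_SERVERS  # is the server currently listed in some idle queue?
idle_queues = [deque() for _ in range(NUM_DISPATCHERS)]

for s in range(NUM_SERVERS):        # all servers start idle and registered
    idle_queues[random.randrange(NUM_DISPATCHERS)].append(s)
    registered[s] = True

for _ in range(STEPS):
    for _ in range(ARRIVALS_PER_STEP):              # dispatch arriving jobs
        d = random.randrange(NUM_DISPATCHERS)
        if idle_queues[d]:
            target = idle_queues[d].popleft()       # send to a known-idle server
            registered[target] = False
        else:
            target = random.randrange(NUM_SERVERS)  # fall back to a random server
        servers[target] += 1
    for s in range(NUM_SERVERS):                    # each busy server completes one job per step
        if servers[s]:
            servers[s] -= 1
            if servers[s] == 0 and not registered[s]:
                idle_queues[random.randrange(NUM_DISPATCHERS)].append(s)
                registered[s] = True

print("mean outstanding jobs per server:", sum(servers) / NUM_SERVERS)
```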

Journal ArticleDOI
TL;DR: The vision and architecture of a Semantic Web of Things is described: a service infrastructure that makes the deployment and use of semantic applications involving Internet-connected sensors almost as easy as building, searching, and reading a web page today.
Abstract: The developed world is awash with sensors. However, they are typically locked into unimodal closed systems. To unleash their full potential, access to sensors should be opened such that their data and services can be integrated with data and services available in other information systems, facilitating novel applications and services that are based on the state of the real world. We describe our vision and architecture of a Semantic Web of Things: a service infrastructure that makes the deployment and use of semantic applications involving Internet-connected sensors almost as easy as building, searching, and reading a web page today.

Journal ArticleDOI
TL;DR: A web tool 'IMPaLA' is presented for the joint pathway analysis of transcriptomics or proteomics and metabolomics data and performs over-representation or enrichment analysis with user-specified lists of metabolites and genes using over 3000 pre-annotated pathways from 11 databases.
Abstract: Summary: Pathway-level analysis is a powerful approach enabling interpretation of post-genomic data at a higher level than that of individual biomolecules. Yet, it is currently hard to integrate more than one type of omics data in such an approach. Here, we present a web tool ‘IMPaLA’ for the joint pathway analysis of transcriptomics or proteomics and metabolomics data. It performs over-representation or enrichment analysis with user-specified lists of metabolites and genes using over 3000 pre-annotated pathways from 11 databases. As a result, pathways can be identified that may be dysregulated on the transcriptional level, the metabolic level or both. Evidence of pathway dysregulation is combined, allowing for the identification of additional pathways with changed activity that would not be highlighted when analysis is applied to any of the functional levels alone. The tool has been implemented both as an interactive website and as a web service to allow a programming interface. Availability: The web interface of IMPaLA is available at http://impala.molgen.mpg.de. A web services programming interface is provided at http://impala.molgen.mpg.de/wsdoc. Contact: kamburov@molgen.mpg.de; r.cavill@imperial.ac.uk; h.keun@imperial.ac.uk. Supplementary Information: Supplementary data are available at Bioinformatics online.
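
Over-representation analysis of the kind IMPaLA performs on user-supplied gene and metabolite lists is typically a hypergeometric test per pathway. The sketch below shows that generic calculation with SciPy; it is not IMPaLA's code, and the joint gene/metabolite evidence combination the tool performs is not reproduced here.

```python
# Generic over-representation test for one pathway (hypergeometric), illustrative only.
from scipy.stats import hypergeom

background_size = 20000  # genes measured / annotated in the background
pathway_size = 150       # genes annotated to the pathway
list_size = 300          # genes in the user-supplied list
overlap = 12             # list genes that fall in the pathway

# P(X >= overlap) when drawing list_size genes without replacement
p_value = hypergeom.sf(overlap - 1, background_size, pathway_size, list_size)
print(f"p = {p_value:.3g}")
```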

Proceedings ArticleDOI
23 May 2011
TL;DR: It is found that the performance of about half of the cloud services investigated exhibits yearly and daily patterns, but also that most services have periods of especially stable performance, which gives evidence that performance variability can be an important factor in cloud provider selection.
Abstract: Cloud computing is an emerging infrastructure paradigm that promises to eliminate the need for companies to maintain expensive computing hardware. Through the use of virtualization and resource time-sharing, clouds serve a large user base with diverse needs using a single set of physical resources. Thus, clouds have the potential to provide their owners the benefits of an economy of scale and, at the same time, become an alternative for both industry and the scientific community to self-owned clusters, grids, and parallel production environments. For this potential to become reality, the first generation of commercial clouds needs to be proven dependable. In this work we analyze the dependability of cloud services. Towards this end, we analyze long-term performance traces from Amazon Web Services and Google App Engine, currently two of the largest commercial clouds in production. We find that the performance of about half of the cloud services we investigate exhibits yearly and daily patterns, but also that most services have periods of especially stable performance. Last, through trace-based simulation we assess the impact of the variability observed for the studied cloud services on three large-scale applications: job execution in scientific computing, virtual goods trading in social networks, and state management in social gaming. We show that the impact of performance variability depends on the application, and give evidence that performance variability can be an important factor in cloud provider selection.

Patent
Mark Lucovsky, Derek Collison, Vadim Spivak, Gerald C. Chen, Ramnivas Laddad
21 Apr 2011
TL;DR: In this article, the authors propose a cloud computing environment that provides the ability to deploy a web application that has been developed using one of a plurality of application frameworks and is configured to execute within one of the plurality of runtime environments.
Abstract: A cloud computing environment provides the ability to deploy a web application that has been developed using one of a plurality of application frameworks and is configured to execute within one of a plurality of runtime environments. The cloud computing environment receives the web application in a package compatible with the runtime environment (e.g., a WAR file to be launched in an application server) and dynamically binds available services by appropriately inserting service provisioning data (e.g., service network address, login credentials, etc.) into the package. The cloud computing environment then packages an instance of the runtime environment, a start script and the package into a web application deployment package, which is then transmitted to an application container (e.g., a container virtual machine). The application container unpacks the web application deployment package, installs the runtime environment, loads the web application package into the runtime environment and runs the start script, thereby deploying the web application in the application container.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: This paper presents a big data placement structure called RCFile (Record Columnar File) and its implementation in the Hadoop system and shows the effectiveness of RCFile in satisfying the four requirements.
Abstract: MapReduce-based data warehouse systems are playing important roles in supporting big data analytics to quickly understand the dynamics of user behavior trends and their needs in typical Web service providers and social network sites (e.g., Facebook). In such a system, the data placement structure is a critical factor that can affect the warehouse performance in a fundamental way. Based on our observations and analysis of Facebook production systems, we have characterized four requirements for the data placement structure: (1) fast data loading, (2) fast query processing, (3) highly efficient storage space utilization, and (4) strong adaptivity to highly dynamic workload patterns. We have examined three commonly accepted data placement structures in conventional databases, namely row-stores, column-stores, and hybrid-stores, in the context of large data analysis using MapReduce. We show that they are not very suitable for big data processing in distributed systems. In this paper, we present a big data placement structure called RCFile (Record Columnar File) and its implementation in the Hadoop system. With intensive experiments, we show the effectiveness of RCFile in satisfying the four requirements. RCFile has been chosen as the default option in the Facebook data warehouse system. It has also been adopted by Hive and Pig, the two most widely used data analysis systems developed at Facebook and Yahoo!, respectively.
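
RCFile's core idea, as described, is to partition a table horizontally into row groups and store each row group column-wise so that queries read only the columns they need. The tiny sketch below illustrates that layout in memory; it is a conceptual illustration, not the Hadoop RCFile on-disk format.

```python
# Conceptual sketch of a record-columnar layout: horizontal row groups,
# each stored column-wise. Not the actual RCFile on-disk format.
ROWS_PER_GROUP = 2

table = [  # (user_id, page, bytes)
    (1, "/home", 512),
    (2, "/news", 2048),
    (3, "/home", 128),
    (4, "/shop", 4096),
]

# Split into row groups, then transpose each group into columns.
row_groups = [table[i:i + ROWS_PER_GROUP] for i in range(0, len(table), ROWS_PER_GROUP)]
columnar_groups = [list(zip(*group)) for group in row_groups]

# A query touching only the "bytes" column reads one column per group,
# skipping the other columns entirely.
total_bytes = sum(sum(group[2]) for group in columnar_groups)
print(total_bytes)  # 6784 for this toy table
```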

Patent
28 Feb 2011
TL;DR: In this paper, an encoded acoustic signal is employed for authenticating a user to a web site hosted by a web server, where the smart phone securely communicates with an authentication server which informs the web server whether the user has been authenticated or not.
Abstract: Techniques for simplifying an authentication process from the viewpoint of a user while providing improved security to the many users currently employing no or weak security techniques. In logging into a web site hosted by a web server, a session begins by a user connecting and logging in with a device, such as a personal computer. Rather than a user name and password approach which is presently typical, the personal computer communicates with another user device, such as a smart phone. In one approach, an encoded acoustic signal is employed for this communication. The smart phone securely communicates with an authentication server which informs the web server whether the user has been authenticated or not.

Patent
13 Jul 2011
TL;DR: In this article, a method, system and computer program product for providing translated web content is disclosed. The method includes receiving a request from a user on a web site, the web site having a first web content in a first language, wherein the request calls for a second web content in a second language; the method further includes dividing the web content into a plurality of translatable components and generating a unique identifier for each translatable component.
Abstract: A method, system and computer program product for providing translated web content is disclosed. The method includes receiving a request from a user on a web site, the web site having a first web content in a first language, wherein the request calls for a second web content in a second language. The method further includes dividing the first web content into a plurality of translatable components and generating a unique identifier for each translatable component. The method further includes identifying a plurality of translated components of the second web content using the unique identifier of each of the plurality of translatable components of the first web content and putting the plurality of translated components of the second web content to preserve a format that corresponds to the first web content. The method further includes providing the second web content in response to the request that was received.
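
The claimed method divides page content into translatable components and keys each one by a unique identifier so that previously translated components can be reused and the page format preserved. A hedged sketch of that idea follows; the hashing scheme and cache layout are illustrative choices, not the patent's specification.

```python
# Illustrative sketch: key each translatable text segment by a content hash
# and reuse cached translations; not the patent's actual scheme.
import hashlib

translation_cache = {}  # identifier -> translated text (the "second language")

def component_id(text: str) -> str:
    """Unique identifier for a translatable component (content hash)."""
    return hashlib.sha1(text.strip().encode("utf-8")).hexdigest()

def translate_page(components, translate_fn):
    translated = []
    for text in components:
        key = component_id(text)
        if key not in translation_cache:   # only untranslated components hit the translator
            translation_cache[key] = translate_fn(text)
        translated.append(translation_cache[key])
    return translated                      # same order as the source, preserving layout

page = ["Welcome to our store", "Contact us", "Welcome to our store"]
print(translate_page(page, lambda t: f"[fr] {t}"))
```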

Journal ArticleDOI
TL;DR: The architecture and some key enabling technologies of the Web of Things (WoT) are elaborated, and many systematic comparisons are made to provide insight into the evolution and future of WoT.
Abstract: In the vision of the Internet of Things (IoT), an increasing number of embedded devices of all sorts (e.g., sensors, mobile phones, cameras, smart meters, smart cars, traffic lights, smart home appliances, etc.) are now capable of communicating and sharing data over the Internet. Although the concept of using embedded systems to control devices, tools and appliances has been proposed for decades now, with every new generation the ever-increasing capabilities of computation and communication pose new opportunities, but also new challenges. As IoT becomes an active research area, different methods from various points of view have been explored to promote the development and popularity of IoT. One trend is viewing IoT as the Web of Things (WoT), where open Web standards are supported for information sharing and device interoperation. By integrating smart things into the existing Web, conventional web services are enriched with physical world services. This WoT vision enables a new way of narrowing the barrier between the virtual and physical worlds. In this paper, we elaborate on the architecture and some key enabling technologies of WoT. Some pioneering open platforms and prototypes are also illustrated. The most recent research results are carefully summarized. Furthermore, many systematic comparisons are made to provide insight into the evolution and future of WoT. Finally, we point out some open challenging issues that must be faced and tackled by the research community.

Proceedings ArticleDOI
26 Oct 2011
TL;DR: This investigation begins by decomposing the storyline of a day in the life of Robert, the authors' unlucky character in the not-so-far future, into simple processes and their interactions, and then devises the main communication requirements for those processes and for their integration in the Internet as web services.
Abstract: This paper proposes the Internet of Things communication framework as the main enabler for distributed worldwide health care applications. Starting from the recent availability of wireless medical sensor prototypes and the growing diffusion of electronic health care record databases, we analyze the requirements of a unified communication framework. Our investigation begins by decomposing the storyline of a day in the life of Robert, our unlucky character in the not-so-far future, into simple processes and their interactions. Subsequently, we devise the main communication requirements for those processes and for their integration in the Internet as web services. Finally, we present the Internet of Things protocol stack and the advantages it brings to health care scenarios in terms of the identified requirements.

Journal ArticleDOI
TL;DR: A Web services framework for APBS and PDB2PQR is developed that enables the use of these software packages by users who do not have local access to the necessary amount of computational capabilities and increases the availability of electrostatics calculations on portable computing platforms.
Abstract: APBS and PDB2PQR are widely utilized free software packages for biomolecular electrostatics calculations. Using the Opal toolkit, we have developed a Web services framework for these software packages that enables the use of APBS and PDB2PQR by users who do not have local access to the necessary amount of computational capabilities. This not only increases accessibility of the software to a wider range of scientists, educators, and students but also increases the availability of electrostatics calculations on portable computing platforms. Users can access this new functionality in two ways. First, an Opal-enabled version of APBS is provided in current distributions, available freely on the web. Second, we have extended the PDB2PQR web server to provide an interface for the setup, execution, and visualization of electrostatic potentials as calculated by APBS. This web interface also uses the Opal framework which ensures the scalability needed to support the large APBS user community. Both of these resources are available from the APBS/PDB2PQR website: http://www.poissonboltzmann.org/.
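
Because the services are exposed through the Opal toolkit's SOAP interface, electrostatics jobs can be submitted from scripts rather than through the web pages. The sketch below uses the zeep SOAP client to show the general pattern; the WSDL location and the operation/argument names are assumptions for illustration, not the published APBS/PDB2PQR service contract.

```python
# Hedged sketch of launching a remote job through an Opal-style SOAP service.
# The WSDL URL and operation/argument names are assumptions for illustration.
from zeep import Client

WSDL = "http://example.org/opal2/services/ApbsOpalService?wsdl"  # placeholder WSDL

client = Client(WSDL)
response = client.service.launchJob(   # assumed operation name on the remote service
    argList="apbs input.in",           # assumed command-line argument string
)
print(response)                        # an Opal-style service would return a job id/status
```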

Journal ArticleDOI
TL;DR: This paper recreates some of the current HTTP- and XML-based attacks that attackers may initiate, and introduces a back-propagation neural network, called Cloud Protector, which was trained to detect and filter such attack traffic.
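
The TL;DR describes training a back-propagation neural network to flag HTTP- and XML-based attack traffic. The snippet below is a generic illustration of that kind of classifier using scikit-learn's MLPClassifier on made-up feature vectors; it is not the Cloud Protector system, its features, or its data.

```python
# Generic back-propagation classifier over toy traffic features (illustrative only).
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Made-up features: [requests/sec, mean payload KB, XML element depth]
X = [
    [5,   2,  4],   # benign
    [8,   3,  5],   # benign
    [300, 1,  4],   # HTTP flood-like
    [6,  90, 60],   # oversized / deeply nested XML-like
]
y = [0, 0, 1, 1]    # 0 = benign, 1 = attack

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
# Predictions for an attack-like and a benign-like sample
print(model.predict([[250, 1, 4], [5, 2, 3]]))
```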

Proceedings ArticleDOI
29 Nov 2011
TL;DR: This paper proposes a Web service QoS prediction framework, called WSPred, that provides time-aware personalized QoS value prediction for different service users and requires no additional invocation of Web services.
Abstract: The exponential growth of Web services makes building high-quality service-oriented applications an urgent and crucial research problem. User-side QoS evaluations of Web services are critical for selecting the optimal Web service from a set of functionally equivalent service candidates. Since the QoS performance of Web services is highly related to service status and network environments, which vary over time, service invocations are required at different time instances over a long interval to make accurate Web service QoS evaluations. However, invoking a huge number of Web services from the user side for quality evaluation purposes is time-consuming, resource-consuming, and sometimes even impractical (e.g., when service invocations are charged by service providers). To address this critical challenge, this paper proposes a Web service QoS prediction framework, called WSPred, to provide time-aware personalized QoS value prediction for different service users. WSPred requires no additional invocation of Web services. Based on the past Web service usage experience of different service users, WSPred builds feature models and employs these models to make personalized QoS predictions for different users. The extensive experimental results show the effectiveness and efficiency of WSPred. Moreover, we publicly release our real-world time-aware Web service QoS dataset for future research, which makes our experiments verifiable and reproducible.
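
WSPred is described as building feature models from past usage to predict time-specific QoS values. One common way to realize such a model is a (user, service, time) latent-factor decomposition trained by stochastic gradient descent; the toy sketch below illustrates that general technique and is not the authors' WSPred model.

```python
# Toy (user, service, time) latent-factor model trained by SGD (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
num_users, num_services, num_slices, dim = 5, 6, 4, 3

# Observed QoS tuples: (user, service, time_slice, value)
observations = [(0, 1, 0, 0.8), (0, 2, 1, 1.4), (1, 1, 0, 0.9),
                (2, 3, 2, 2.1), (3, 4, 3, 0.5), (4, 5, 1, 1.0)]

U = rng.normal(0, 0.3, (num_users, dim))
S = rng.normal(0, 0.3, (num_services, dim))
T = rng.normal(0, 0.3, (num_slices, dim))

lr, reg = 0.05, 0.01
for _ in range(2000):
    for u, s, t, value in observations:
        pred = np.sum(U[u] * S[s] * T[t])   # CP-style three-way interaction
        err = value - pred
        U[u] += lr * (err * S[s] * T[t] - reg * U[u])
        S[s] += lr * (err * U[u] * T[t] - reg * S[s])
        T[t] += lr * (err * U[u] * S[s] - reg * T[t])

# Reconstructed QoS estimate for (user 0, service 1, time slice 0)
print(round(float(np.sum(U[0] * S[1] * T[0])), 3))
```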

Journal ArticleDOI
TL;DR: To process large collections of peak sequences obtained from ChIP-seq or related technologies, RSAT provides a new program (peak-motifs) that combines several efficient motif discovery algorithms to predict transcription factor binding motifs, match them against motif databases and predict their binding sites.
Abstract: RSAT (Regulatory Sequence Analysis Tools) comprises a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. Thirteen new programs have been added to the 30 described in the 2008 NAR Web Software Issue, including an automated sequence retrieval from EnsEMBL (retrieve-ensembl-seq), two novel motif discovery algorithms (oligo-diff and info-gibbs), a 100-times faster version of matrix-scan enabling the scanning of genome-scale sequence sets, and a series of facilities for random model generation and statistical evaluation (random-genome-fragments, random-motifs, random-sites, implant-sites, sequence-probability, permute-matrix). Our most recent work also focused on motif comparison (compare-matrices) and evaluation of motif quality (matrix-quality) by combining theoretical and empirical measures to assess the predictive capability of position-specific scoring matrices. To process large collections of peak sequences obtained from ChIP-seq or related technologies, RSAT provides a new program (peak-motifs) that combines several efficient motif discovery algorithms to predict transcription factor binding motifs, match them against motif databases and predict their binding sites. Availability (web site, stand-alone programs and SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services): http://rsat.ulb.ac.be/rsat/.