
Showing papers on "Web modeling published in 2015"


Journal ArticleDOI
TL;DR: This article establishes a consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks in terms of concepts, models, languages, productivity support techniques, and tools and reviews the state of the art in service composition from an unprecedented, holistic perspective.
Abstract: Web services are a consolidated reality of the modern Web with tremendous, increasing impact on everyday computing tasks. They turned the Web into the largest, most accepted, and most vivid distributed computing platform ever. Yet, the use and integration of Web services into composite services or applications, which is a highly sensitive and conceptually non-trivial task, has still not reached its full potential. A consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks in terms of concepts, models, languages, productivity support techniques, and tools is required. This framework is necessary to enable effective exploration, understanding, assessment, comparison, and selection of service composition models, languages, techniques, platforms, and tools. This article establishes such a framework and reviews the state of the art in service composition from an unprecedented, holistic perspective.

277 citations


Journal ArticleDOI
TL;DR: The processing of the simple datasets used in the pilot proved to be relatively straightforward using a combination of R, RPy2, PyWPS and PostgreSQL, but the use of NoSQL databases and more versatile frameworks such as OGC standard based implementations may provide a wider and more flexible set of features that particularly facilitate working with larger volumes and more heterogeneous data sources.
Abstract: Recent evolutions in computing science and web technology provide the environmental community with continuously expanding resources for data collection and analysis that pose unprecedented challenges to the design of analysis methods, workflows, and interaction with data sets. In the light of the recent UK Research Council funded Environmental Virtual Observatory pilot project, this paper gives an overview of currently available implementations related to web-based technologies for processing large and heterogeneous datasets and discusses their relevance within the context of environmental data processing, simulation and prediction. We found that the processing of the simple datasets used in the pilot proved to be relatively straightforward using a combination of R, RPy2, PyWPS and PostgreSQL. However, the use of NoSQL databases and more versatile frameworks such as OGC standard based implementations may provide a wider and more flexible set of features that particularly facilitate working with larger volumes and more heterogeneous data sources.
Highlights:
- We review web service related technologies to manage, transfer and process Big Data.
- We examine international standards and related implementations.
- Many existing algorithms can be easily exposed as services and cloud-enabled.
- The adoption of standards facilitates the implementation of workflows.
- The use of web technologies to tackle environmental issues is acknowledged worldwide.
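
A minimal sketch (not the pilot's actual code) of the kind of glue this stack implies: rpy2 pushes a series into R for summary statistics, and psycopg2 persists the result to PostgreSQL. The connection string and table name are hypothetical.

```python
# Sketch only: R-from-Python via rpy2, results stored in PostgreSQL.
import rpy2.robjects as robjects
import psycopg2

def summarise_series(values):
    """Compute mean and standard deviation of a series using R."""
    robjects.globalenv["x"] = robjects.FloatVector(values)
    mean, sd = robjects.r("c(mean(x), sd(x))")
    return float(mean), float(sd)

mean, sd = summarise_series([3.1, 2.7, 4.0, 3.6])

conn = psycopg2.connect("dbname=evo user=evo")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # series_summary is a hypothetical results table.
    cur.execute("INSERT INTO series_summary (mean, sd) VALUES (%s, %s)",
                (mean, sd))
```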

203 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel approach that unifies collaborative filtering and content-based recommendation of web services using a probabilistic generative model, which outperforms the state-of-the-art methods on recommendation performance.
Abstract: The last decade has witnessed a tremendous growth of web services as a major technology for sharing data, computing resources, and programs on the web. With increasing adoption and presence of web services, designing novel approaches for efficient and effective web service recommendation has become of paramount importance. Most existing web service discovery and recommendation approaches focus on either perishing UDDI registries, or keyword-dominant web service search engines, which possess many limitations such as poor recommendation performance and heavy dependence on correct and complex queries from users. It would be desirable for a system to recommend web services that align with users’ interests without requiring the users to explicitly specify queries. Recent research efforts on web service recommendation center on two prominent approaches: collaborative filtering and content-based recommendation. Unfortunately, both approaches have some drawbacks, which restrict their applicability in web service recommendation. In this paper, we propose a novel approach that unifies collaborative filtering and content-based recommendations. In particular, our approach considers simultaneously both rating data (e.g., QoS) and semantic content data (e.g., functionalities) of web services using a probabilistic generative model. In our model, unobservable user preferences are represented by introducing a set of latent variables, which can be statistically estimated. To verify the proposed approach, we conduct experiments using 3,693 real-world web services. The experimental results show that our approach outperforms the state-of-the-art methods on recommendation performance.
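
As a rough illustration of the unification idea, the sketch below blends a rating-based prediction (truncated SVD over a toy QoS matrix) with content similarity (TF-IDF over service descriptions). This is a deliberately simplified hybrid, not the paper's probabilistic generative model or its latent-variable estimation; all data is toy data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy QoS-style rating matrix: rows = users, cols = services (0 = missing).
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [0.0, 2.0, 5.0]])
descriptions = ["weather forecast api",
                "weather and climate data service",
                "payment processing gateway"]

# Latent factors from the observed ratings (rank-2 truncated SVD).
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]

# Content similarity between services from their descriptions.
tfidf = TfidfVectorizer().fit_transform(descriptions)
content_sim = cosine_similarity(tfidf)

def hybrid_score(user, service, alpha=0.7):
    """Blend the rating-based prediction with content similarity to the
    services the user has already rated."""
    rated = np.nonzero(R[user])[0]
    content = content_sim[service, rated].mean() if rated.size else 0.0
    return alpha * R_hat[user, service] + (1 - alpha) * content

print(hybrid_score(0, 2))
```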

171 citations


Proceedings ArticleDOI
30 Nov 2015
TL;DR: STAC (Statistical Tests for Algorithms Comparison), a new platform for statistical analysis to verify the results obtained from computational intelligence algorithms, is presented.
Abstract: One of the most suited techniques for comparing results obtained from computational intelligence algorithms is statistical hypothesis testing. This method can be used to test whether the difference between the algorithm with the best results and the other algorithms is actually significant. In this paper, we present STAC (Statistical Tests for Algorithms Comparison), a new platform for statistical analysis to verify the results obtained from computational intelligence algorithms. STAC consists of three different layers for performing statistical tests: a Python library, a set of web services and a web client. We show several use cases, in which both non-expert and expert users interact with the web client and use the web services in different programming languages.
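
The statistics underneath such a platform can be illustrated with scipy directly (this does not reproduce STAC's own API): a Friedman test checks whether the algorithms differ across datasets, followed by pairwise post-hoc comparisons. The score arrays are hypothetical per-dataset results.

```python
from scipy import stats

# Hypothetical accuracy of three algorithms on eight benchmark datasets.
algo_a = [0.81, 0.76, 0.90, 0.65, 0.77, 0.84, 0.70, 0.88]
algo_b = [0.78, 0.74, 0.85, 0.66, 0.75, 0.80, 0.69, 0.86]
algo_c = [0.70, 0.69, 0.79, 0.60, 0.68, 0.75, 0.61, 0.80]

# Friedman test: do the algorithms differ significantly across datasets?
stat, p = stats.friedmanchisquare(algo_a, algo_b, algo_c)
print(f"Friedman chi-square={stat:.3f}, p={p:.4f}")

# If significant, follow up with pairwise comparisons (here: Wilcoxon).
if p < 0.05:
    for name, other in [("a vs b", algo_b), ("a vs c", algo_c)]:
        w, pw = stats.wilcoxon(algo_a, other)
        print(f"{name}: p={pw:.4f}")
```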

151 citations


Journal ArticleDOI
01 Jun 2015
TL;DR: This paper discusses the challenging problem of having active malicious Web services in the composite and community-based architectures, and can be used by future researchers as a roadmap to explore new trust and reputation models for Web services that take into account the shortcomings of the existing models.
Abstract: Web service selection constitutes nowadays a major challenge that is still attracting the research community to work on and investigate. The problem arises since decision makers (1) cannot blindly trust the service or its provider, and (2) ignore the environment within which the service is operating. The fact that no security mechanism is applicable in such a completely open environment, where identities can be easily generated and discarded, makes social approaches such as trust and reputation models appealing to apply in the world of Web services. This survey classifies and compares the main findings that contributed to solving problems related to trust and reputation in the context of Web services. First, a high-level classification scheme partitions Web services into three main architectures: single, composite, and communities. Thereafter, a low-level classification within each architecture categorizes the trust and reputation models according to the technique used to build the trust value. Based on this classification, a profound analysis describing the advantages and shortcomings of each class of models is presented, leading to the discovery of topics that need further study and investigation. In particular, we discuss the challenging problem of having active malicious Web services in the composite and community-based architectures. Thus, the paper can be used by future researchers as a roadmap to explore new trust and reputation models for Web services that take into account the shortcomings of the existing models.
Highlights:
- Defining Web services' architectures and their points of convergence and difference.
- Providing a sub-classification within each architecture based on trust computation.
- Proposing a taxonomy of criteria for each architecture.
- Comparing the class models and approaches in each architecture.
- Discussing limitations and future directions for each architecture.

122 citations


Book
Peter Ertl
01 Jun 2015
TL;DR: This review focuses on a special type of molecule editors, namely those that are used for molecule structure input on the web, and a typical example - the popular JME Molecule Editor - will be described in more detail.
Abstract: A molecule editor, that is, a program for the input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special class of molecule editors, namely those used for molecule structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus a web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example - the popular JME Molecule Editor - will be described in more detail. Modern Ajax server-side molecule editors are also presented. Finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.

91 citations


Journal ArticleDOI
TL;DR: The practicalities and effectiveness of web mining as a research method for innovation studies are examined, using web mining to explore the R&D activities of 296 UK-based green goods small and mid-size enterprises.
Abstract: As enterprises expand and post increasing information about their business activities on their websites, website data promises to be a valuable source for investigating innovation. This article examines the practicalities and effectiveness of web mining as a research method for innovation studies. We use web mining to explore the R&D activities of 296 UK-based green goods small and mid-size enterprises. We find that website data offers additional insights when compared with other traditional unobtrusive research methods, such as patent and publication analysis. We examine the strengths and limitations of enterprise innovation web mining in terms of a wide range of data quality dimensions, including accuracy, completeness, currency, quantity, flexibility and accessibility. We observe that far more companies in our sample report undertaking R&D activities on their web sites than would be suggested by looking only at conventional data sources. While traditional methods offer information about the early phases of R&D and invention through publications and patents, web mining offers insights that are more downstream in the innovation process. Handling website data is not as easy as alternative data sources, and care needs to be taken in executing search strategies. Website information is also self-reported and companies may vary in their motivations for posting (or not posting) information about their activities on websites. Nonetheless, we find that web mining is a significant and useful complement to current methods, as well as offering novel insights not easily obtained from other unobtrusive sources.
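
The core mining step can be sketched as follows, assuming a placeholder URL and an illustrative keyword list; the study's actual crawler and search strategies are considerably more careful.

```python
# Minimal sketch: fetch a company page and flag R&D-related terms.
import re
import requests

RD_TERMS = re.compile(r"\b(R&D|research and development|patent|prototype)\b",
                      re.IGNORECASE)

def rd_mentions(url):
    """Return the number of R&D-related term matches on a web page."""
    html = requests.get(url, timeout=10).text
    text = re.sub(r"<[^>]+>", " ", html)   # crude tag stripping
    return len(RD_TERMS.findall(text))

print(rd_mentions("https://example.com"))  # placeholder URL
```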

88 citations


Journal ArticleDOI
TL;DR: An improved time-aware collaborative filtering approach for high-quality web service recommendation that integrates time information into both similarity measurement and QoS prediction, and a hybrid personalized random walk algorithm is designed to infer indirect user similarities and service similarities.
Abstract: With the incessant growth of web services on the Internet, how to design effective web service recommendation technologies based on Quality of Service (QoS) is becoming more and more important. Web service recommendation can relieve users from tough work on service selection and improve the efficiency of developing service-oriented applications. Neighborhood-based collaborative filtering has been widely used for web service recommendation, in which similarity measurement and QoS prediction are two key issues. However, traditional similarity models and QoS prediction methods rarely consider the influence of time information, which is an important factor affecting the QoS performance of web services. Furthermore, it is difficult for the existing similarity models to capture the actual relationships between users or services due to data sparsity. The two shortcomings seriously devalue the performance of neighborhood-based collaborative filtering. In this paper, the authors propose an improved time-aware collaborative filtering approach for high-quality web service recommendation. Our approach integrates time information into both similarity measurement and QoS prediction. Additionally, in order to alleviate the data sparsity problem, a hybrid personalized random walk algorithm is designed to infer indirect user similarities and service similarities. Finally, a series of experiments are provided to validate the effectiveness of our approach.
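
A minimal sketch of the time-decay idea, assuming illustrative QoS values and observation timestamps; the paper's full similarity model and its hybrid personalized random walk are more involved.

```python
import numpy as np

def decayed_similarity(qos_u, qos_v, t_u, t_v, now, lam=0.1):
    """Pearson-style similarity over co-invoked services, where older
    observations contribute less via exp(-lam * age)."""
    w = np.exp(-lam * (now - t_u)) * np.exp(-lam * (now - t_v))
    du, dv = qos_u - qos_u.mean(), qos_v - qos_v.mean()
    num = np.sum(w * du * dv)
    den = np.sqrt(np.sum(w * du**2)) * np.sqrt(np.sum(w * dv**2))
    return num / den if den else 0.0

# Response times two users observed on the same five services, with
# hypothetical timestamps of each observation.
qos_u = np.array([0.20, 0.35, 0.50, 0.30, 0.60])
qos_v = np.array([0.25, 0.30, 0.55, 0.28, 0.65])
t_u = np.array([1.0, 2.0, 8.0, 9.0, 10.0])
t_v = np.array([2.0, 3.0, 7.0, 9.0, 10.0])
print(decayed_similarity(qos_u, qos_v, t_u, t_v, now=10.0))
```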

87 citations


Journal ArticleDOI
TL;DR: A framework for estimating, planning and managing Web projects is presented, combining some existing Agile techniques with Web Engineering principles into a unified framework which uses business value to guide the delivery of features.
Abstract: This paper tries to answer the question "Is it possible to define an Agile approach to estimate, plan and manage Web projects guided by business value?" by identifying Agile practices and adapting them so that they can be integrated into a coherent framework. The paper includes the results obtained from a real experience of applying the proposed framework and closes with the relevant conclusions.
Highlights:
- We propose a framework for Agile Web projects.
- Our framework focuses on estimation, planning, and management activities.
- We recommend an approach guided by business value.
- Our approach includes continuous improvement along the project.
- We present our first empirical experience of this framework.
- We draw a set of initial conclusions in order to extend the model.
Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be related to several business strategies. The broad expansion of the Internet and the global and interconnected economy mean that Web development projects are often characterized by expressions like delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects.
Objective: This paper presents a proposal for estimating, planning and managing Web projects by combining some existing Agile techniques with Web Engineering principles, presenting them as a unified framework which uses business value to guide the delivery of features.
Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain relevant conclusions.
Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project.
Conclusion: It is concluded that the framework can be useful in order to better manage Web-based projects, through a continuous value-based estimation and management process.

79 citations


Journal ArticleDOI
TL;DR: In an effort to understand the web of FOSS features and capabilities, many of the state-of-the-art FOSS software projects are reviewed in the context of those used to develop water resources web apps published in the peer-reviewed literature in the last decade.
Abstract: Water resources web applications or "web apps" are growing in popularity as a means to overcome many of the challenges associated with hydrologic simulations in decision-making. Water resources web apps fall outside of the capabilities of standard web development software because of their spatial data components. These spatial data needs can be addressed using a combination of existing free and open source software (FOSS) for geographic information systems (FOSS4G) and FOSS for web development. However, the abundance of FOSS projects that are available can be overwhelming to new developers. In an effort to understand the web of FOSS features and capabilities, we reviewed many of the state-of-the-art FOSS software projects in the context of those that have been used to develop water resources web apps published in the peer-reviewed literature in the last decade (2004-2014).
Highlights:
- Free and open source software can be used to address the needs of water resources web applications.
- The large number of open source projects can be overwhelming to novice developers.
- We present a review of the free and open source GIS and web development software.
- Software reviewed includes those projects used to develop 45 water resources and earth science web applications.
- The review highlights 11 FOSS4G software projects and 9 FOSS projects for web development.

79 citations


Journal ArticleDOI
TL;DR: This paper presents an evolutionary migration process for web application clusters distributed over multiple locations, and presents a multi-criteria-based selection algorithm based on Analytic Hierarchy Process (AHP).
Abstract: With the increase in cloud service providers, and the increasing number of compute services offered, a migration of information systems to the cloud demands selecting the best mix of compute services and virtual machine (VM) images from an abundance of possibilities. Therefore, a migration process for web applications has to automate evaluation and, in doing so, ensure that Quality of Service (QoS) requirements are met, while satisfying conflicting selection criteria like throughput and cost. When selecting compute services for multiple connected software components, web application engineers must consider heterogeneous sets of criteria and complex dependencies across multiple layers, which is impossible to resolve manually. The previously proposed CloudGenius framework has proven its capability to support migrations of single-component web applications. In this paper, we expand on the additional complexity of facilitating migration support for multi-component web applications. In particular, we present an evolutionary migration process for web application clusters distributed over multiple locations, and clearly identify the most important criteria relevant to the selection problem. Moreover, we present a multi-criteria-based selection algorithm based on the Analytic Hierarchy Process (AHP). Because the solution space grows exponentially, we developed a Genetic Algorithm (GA)-based approach to cope with computational complexities in a growing cloud market. Furthermore, a use case example proves CloudGenius's applicability. To conduct experiments, we implemented CumulusGenius, a prototype of the selection algorithm and the GA deployable on Hadoop clusters. Experiments with CumulusGenius give insights into time complexities and the quality of the GA.
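
The AHP step of such a selection algorithm can be sketched as follows: criteria weights are derived from a pairwise comparison matrix via its principal eigenvector. The matrix values below are made up for illustration.

```python
import numpy as np

# A[i, j] = how much more important criterion i is than criterion j
# (hypothetical judgments for throughput vs cost vs latency).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP: the principal eigenvector of A, normalized, gives the weights.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = eigvecs[:, principal].real
weights /= weights.sum()
print(dict(zip(["throughput", "cost", "latency"], weights.round(3))))
```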

Proceedings ArticleDOI
14 Jan 2015
TL;DR: Ur/Web is presented, a domain-specific, statically typed functional programming language with a much simpler model for programming modern Web applications, where programmers can reason about distributed, multithreaded applications via a mix of transactions and cooperative preemption.
Abstract: The World Wide Web has evolved gradually from a document delivery platform to an architecture for distributed programming. This largely unplanned evolution is apparent in the set of interconnected languages and protocols that any Web application must manage. This paper presents Ur/Web, a domain-specific, statically typed functional programming language with a much simpler model for programming modern Web applications. Ur/Web's model is unified, where programs in a single programming language are compiled to other "Web standards" languages as needed; supports novel kinds of encapsulation of Web-specific state; and exposes simple concurrency, where programmers can reason about distributed, multithreaded applications via a mix of transactions and cooperative preemption. We give a tutorial introduction to the main features of Ur/Web and discuss the language implementation and the production Web applications that use it.

Proceedings ArticleDOI
13 Apr 2015
TL;DR: This paper formalizes the requirements for effective presentation of results for Web Table search as the diversified table selection problem and the structured table summarization problem, and shows that both problems are computationally intractable and present heuristic algorithms to solve them.
Abstract: The amount of information available on the Web has been growing dramatically, raising the importance of techniques for searching the Web. Recently, Web Tables emerged as a model which enables users to search for information in a structured way. However, effective presentation of results for Web Table search requires (1) selecting a ranking of tables that acknowledges the diversity within the search result; and (2) summarizing the information content of the selected tables concisely but meaningfully. In this paper, we formalize these requirements as the diversified table selection problem and the structured table summarization problem. We show that both problems are computationally intractable and, thus, present heuristic algorithms to solve them. For these algorithms, we prove salient performance guarantees, such as near-optimality, stability, and fairness. Our experiments with real-world collections of thousands of Web Tables highlight the scalability of our techniques. We achieve improvements up to 50% in diversity and 10% in relevance over baselines for Web Table selection, and reduce the information loss induced by table summarization by up to 50%. In a user study, we observed that our techniques are preferred over alternative solutions.
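
Since the paper's exact heuristics are not reproduced here, the sketch below shows a generic greedy diversification heuristic in the same spirit: each pick trades off relevance against similarity to tables already selected. Scores are toy values.

```python
def greedy_diverse_select(relevance, similarity, k, lam=0.5):
    """Pick k items, trading off relevance against redundancy with
    already-selected items (maximal-marginal-relevance style)."""
    selected = []
    candidates = set(range(len(relevance)))
    while candidates and len(selected) < k:
        def mmr(i):
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected

# Tables 0 and 1 are near-duplicates; diversification skips one of them.
relevance = [0.9, 0.85, 0.4, 0.7]
similarity = [[1.0, 0.95, 0.1, 0.2],
              [0.95, 1.0, 0.15, 0.25],
              [0.1, 0.15, 1.0, 0.3],
              [0.2, 0.25, 0.3, 1.0]]
print(greedy_diverse_select(relevance, similarity, k=2))  # [0, 3]
```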

Proceedings ArticleDOI
17 Aug 2015
TL;DR: Encore is presented, a system that harnesses cross-origin requests to measure Web filtering from a diverse set of vantage points without requiring users to install custom software, enabling longitudinal measurements from many vantage points.
Abstract: Despite the pervasiveness of Internet censorship, we have scant data on its extent, mechanisms, and evolution. Measuring censorship is challenging: it requires continual measurement of reachability to many target sites from diverse vantage points. Amassing suitable vantage points for longitudinal measurement is difficult; existing systems have achieved only small, short-lived deployments. We observe, however, that most Internet users access content via Web browsers, and the very nature of Web site design allows browsers to make requests to domains with different origins than the main Web page. We present Encore, a system that harnesses cross-origin requests to measure Web filtering from a diverse set of vantage points without requiring users to install custom software, enabling longitudinal measurements from many vantage points. We explain how Encore induces Web clients to perform cross-origin requests that measure Web filtering, design a distributed platform for scheduling and collecting these measurements, show the feasibility of a global-scale deployment with a pilot study and an analysis of potentially censored Web content, identify several cases of filtering in six months of measurements, and discuss ethical concerns that would arise with widespread deployment.

Book ChapterDOI
31 Jul 2015
TL;DR: An overview of recommender systems is presented, and the use of Linked Open Data to build a new generation of semantics-aware recommendation engines is sketched, together with how datasets are connected with each other to form the so-called LOD cloud.
Abstract: The World Wide Web is moving from a Web of hyper-linked documents to a Web of linked data. Thanks to the Semantic Web technological stack and to the more recent Linked Open Data (LOD) initiative, a vast amount of RDF data have been published in freely accessible datasets connected with each other to form the so called LOD cloud. As of today, we have tons of RDF data available in the Web of Data, but only a few applications really exploit their potential power. The availability of such data is for sure an opportunity to feed personalized information access tools such as recommender systems. We present an overview on recommender systems and we sketch how to use Linked Open Data to build a new generation of semantics-aware recommendation engines.
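
A minimal example of tapping the LOD cloud for recommendation input, using the real public DBpedia endpoint via SPARQLWrapper; the "movies sharing a director" logic is deliberately naive and only illustrative of how RDF data can feed a recommender.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
# Fetch candidate items related to a seed item through a shared property.
sparql.setQuery("""
    SELECT DISTINCT ?other WHERE {
        dbr:Pulp_Fiction dbo:director ?d .
        ?other dbo:director ?d .
        FILTER (?other != dbr:Pulp_Fiction)
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["other"]["value"])
```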

Proceedings ArticleDOI
13 Apr 2015
TL;DR: A new type of locator is proposed, named multi-locator, which selects the best locator among a candidate set of locators produced by different algorithms, based on a voting procedure that assigns different voting weights to different locator generation algorithms.
Abstract: The main reason for the fragility of web test cases is the inability of web element locators to work correctly when the web page DOM evolves. Web element locators are used in web test cases to identify all the GUI objects to operate upon and, eventually, to retrieve web page content that is compared against some oracle in order to decide whether the test case has passed or not. Hence, web element locators play an extremely important role in web testing, and when a web element locator gets broken developers have to spend substantial time and effort to repair it. While algorithms exist to produce robust web element locators to be used in web test scripts, no algorithm is perfect and different algorithms are exposed to different fragilities when the software evolves. Based on this observation, we propose a new type of locator, named multi-locator, which selects the best locator among a candidate set of locators produced by different algorithms. The selection is based on a voting procedure that assigns different voting weights to different locator generation algorithms. Experimental results obtained on six web applications, for which a subsequent release was available, show that the multi-locator is more robust than the single locators (about -30% of broken locators w.r.t. the most robust kind of single locator) and that the execution overhead required by the multiple queries done with different locators is negligible (2-3% at most).
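
The voting idea can be sketched on top of Selenium as follows; the candidate locators and their weights are illustrative, and the paper's actual weight assignment is more principled.

```python
from collections import defaultdict
from selenium.webdriver.common.by import By

def multi_locate(driver, candidates):
    """candidates: list of (by, value, weight) triples, each produced by
    a different locator-generation algorithm. The element accumulating
    the highest total weight wins the vote."""
    votes = defaultdict(float)
    for by, value, weight in candidates:
        for element in driver.find_elements(by, value):
            votes[element] += weight
    return max(votes, key=votes.get) if votes else None

# Hypothetical usage with an already-created WebDriver instance:
# element = multi_locate(driver, [
#     (By.ID, "submit-btn", 0.5),
#     (By.XPATH, "//form//button[text()='Submit']", 0.3),
#     (By.CSS_SELECTOR, "form button.primary", 0.2),
# ])
```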

Journal ArticleDOI
TL;DR: The study concludes that Node.js offers client-server development integration, aiding code reusability in web applications, and is the perfect tool for developing fast, scalable network applications.
Abstract: We examine the implications of end-to-end web application development in the social web era. The paper describes a distributed architecture suitable for modern web application development, as well as the interactivity components associated with it. Furthermore, we conducted a series of stress tests on popular server-side technologies. The PHP/Apache stack was found inefficient in addressing the increasing demand in network traffic. Nginx was found more than 2.5 times faster in input/output (I/O) operations than Apache, whereas Node.js outperformed both. Node.js, although excellent in I/O operations and resource utilization, was found lacking in serving static files using its built-in HTTP server, while Nginx performed great at this task. So, in order to address efficiency, an Nginx server could be placed in front and proxy static file requests, allowing the Node.js processes to handle only dynamic content. Such a configuration can offer a better infrastructure in terms of efficiency and scalability, replacing the aged PHP/Apache stack. Furthermore, we have found that building cross-platform applications based on web technologies is both feasible and highly productive, especially when addressing stationary and mobile devices, as well as the fragmentation among them. Our study concludes that Node.js offers client-server development integration, aiding code reusability in web applications, and is the perfect tool for developing fast, scalable network applications.
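
The recommended front-proxy setup can be sketched as an nginx configuration (paths and port are illustrative, not taken from the study): nginx answers static requests itself and forwards everything else to the Node.js process.

```nginx
server {
    listen 80;

    # Static assets handled directly by nginx.
    location /static/ {
        root /var/www/app;
    }

    # Everything else goes to the Node.js process (port is illustrative).
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
    }
}
```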

Journal ArticleDOI
TL;DR: The process successfully achieved the practical goal of identifying a candidate set of web mapping technologies for teaching web mapping, and revealed broader insights into web map design and education generally, as well as ways to cope with evolving web mapping technologies.
Abstract: The current pace of technological innovation in web mapping offers new opportunities and creates new challenges for web cartographers. The continual development of new technological solutions produces a fundamental tension: the more flexible and expansive web mapping options become, the more difficult it is to maintain fluency in the teaching and application of these technologies. We addressed this tension by completing a three-stage, empirical process for understanding how best to learn and implement contemporary web mapping technologies. To narrow our investigation, we focused upon education at the university level, rather than a professional production environment, and upon open-source client-side web mapping technologies, rather than complementary server-side or cloud-based technologies. The process comprised three studies: (1) a competitive analysis study of contemporary web mapping technologies, (2) a needs-assessment survey of web map designers/developers regarding past experiences with these technologies, and (3) a diary study charting the implementation of a subset of potentially viable technologies, as identified through the first two studies. The process successfully achieved the practical goal of identifying a candidate set of web mapping technologies for teaching web mapping, and also revealed broader insights into web map design and education generally as well as ways to cope with evolving web mapping technologies.

01 Jan 2015
TL;DR: In this article, the authors summarize ongoing work promoting the concept of an avatar as a new virtual abstraction to extend physical objects on the Web by leveraging Web-based languages and protocols.
Abstract: The Web of Things (WoT) extends the Internet of Things by considering that each physical object can be accessed and controlled using Web-based languages and protocols. In this paper, we summarize ongoing work promoting the concept of the avatar as a new virtual abstraction to extend physical objects on the Web. An avatar is an extensible and distributed runtime environment endowed with an autonomous behaviour. Avatars rely on Web languages, protocols and reasoning about semantic annotations to dynamically drive connected objects, exploit their capabilities and expose their functionalities as Web services. Avatars are also able to collaborate in order to achieve complex tasks.

Journal ArticleDOI
TL;DR: In this paper, the authors summarize ongoing work promoting the concept of an avatar as a new virtual abstraction to extend physical objects on the Web, where an avatar is an extensible and distributed runtime environment endowed with an autonomous behavior.
Abstract: The Web of Things extends the Internet of Things by leveraging Web-based languages and protocols to access and control each physical object. In this article, the authors summarize ongoing work promoting the concept of an avatar as a new virtual abstraction to extend physical objects on the Web. An avatar is an extensible and distributed runtime environment endowed with an autonomous behavior. Avatars rely on Web languages, protocols, and reasoning about semantic annotations to dynamically drive connected objects, exploit their capabilities, and expose user-understandable functionalities as Web services. Avatars are also able to collaborate to achieve complex tasks.

Journal ArticleDOI
TL;DR: This work discusses the evolution of search engine ranking factors in a Web 2.0 and Web 3.0 context and develops a mechanism that delivers quality SEO based on LDA and state-of-the-art Search Engine (SE) metrics.

Journal ArticleDOI
TL;DR: A systematic data-driven approach to assisting situational application development, proposing a technique to extract useful information from multiple sources to abstract service capabilities with a set of tags, which supports intuitive expression of users' desired composition goals by simple queries.
Abstract: The convergence of Services Computing and Web 2.0 opens a large space of opportunities for composing “situational” web applications from web-delivered services. However, the large number of services and the complexity of composition constraints make manual composition difficult for application developers, who might be non-professional programmers or even end-users. This paper presents a systematic data-driven approach to assisting situational application development. We first propose a technique to extract useful information from multiple sources to abstract service capabilities with a set of tags. This supports intuitive expression of users’ desired composition goals by simple queries, without having to know the underlying technical details. A planning technique then explores composition solutions which can achieve the desired goals, possibly surfacing new and interesting composition opportunities. A browser-based tool facilitates visual and iterative refinement of composition solutions, to finally come up with satisfying outputs. A series of experiments demonstrate the efficiency and effectiveness of our approach.
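A toy rendition of planning over tag-abstracted services, assuming made-up services and tags: forward chaining applies any service whose input tags are satisfied until the goal tags become available. The paper's planner is more sophisticated (e.g., it also surfaces alternative composition opportunities).

```python
# Each service is abstracted as (input tags, output tags).
services = {
    "geocode": ({"address"}, {"coordinates"}),
    "weather": ({"coordinates"}, {"forecast"}),
    "map":     ({"coordinates"}, {"map_image"}),
}

def plan(available, goal):
    """Naive forward chaining from the available tags to the goal tags."""
    chain = []
    available = set(available)
    while not goal <= available:
        applicable = [
            (name, outputs)
            for name, (inputs, outputs) in services.items()
            if inputs <= available and not outputs <= available
        ]
        if not applicable:
            return None  # goal unreachable
        name, outputs = applicable[0]  # naive choice, no search/pruning
        chain.append(name)
        available |= outputs
    return chain

print(plan({"address"}, {"forecast"}))  # ['geocode', 'weather']
```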

Journal ArticleDOI
TL;DR: In this article, the authors performed semi-structured interviews with six web API developers and investigated how major web API providers organize their API evolution, and how this affects source code changes of their clients.

Proceedings ArticleDOI
29 Jun 2015
TL;DR: Everest is presented, a Web-based platform for researchers supporting publication, execution and composition of applications running across distributed computing resources, and follows the Platform as a Service (PaaS) cloud delivery model by providing all its functionality via remote Web and programming interfaces.
Abstract: Researchers increasingly rely on using web-based systems for accessing and running scientific applications across distributed computing resources. However existing systems lack a number of important features, such as publication and sharing of scientific applications as online services, decoupling of applications from computing resources and providing remote programmatic access. This paper presents Everest, a web-based platform for researchers supporting publication, execution and composition of applications running across distributed computing resources. Everest addresses the described challenges by relying on modern web technologies and cloud computing models. It follows the Platform as a Service (PaaS) cloud delivery model by providing all its functionality via remote web and programming interfaces. Any application added to Everest is automatically published both as a user-facing web form and a web service. Another distinct feature of Everest is the ability to attach external computing resources by any user and flexibly use these resources for running applications. The paper provides an overview of the platform's architecture and its main components, describes recent developments, presents results of experimental evaluation of the platform and discusses remaining challenges.
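
Programmatic access in the style described might look like the following sketch; the endpoint, token, and payload shapes are invented for illustration and do not reproduce Everest's actual API.

```python
# Hypothetical sketch: submit a job to a published application over a
# REST interface and poll for the result.
import time
import requests

BASE = "https://everest.example.org/api"       # placeholder URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder token

job = requests.post(f"{BASE}/apps/my-solver/jobs",
                    json={"inputs": {"n": 42}}, headers=HEADERS).json()

while True:
    state = requests.get(f"{BASE}/jobs/{job['id']}", headers=HEADERS).json()
    if state["status"] in ("done", "failed"):
        break
    time.sleep(5)
print(state)
```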

Proceedings ArticleDOI
28 Sep 2015
TL;DR: The performance of web services for Enterprise Application Integration (EAI) based on SOAP and REST is compared, with throughput and response time considered as the evaluation metrics.
Abstract: Web services are a common means to exchange data and information over the network. Web services make themselves available over the Internet, where they are technology- and platform-independent. Once a web service is built, it is accessed via a uniform resource locator (URL) and its functionality can be utilized in the application domain. Web services are self-contained, modular, distributed and dynamic in nature. These web services are described and then published in a service registry, e.g., UDDI, and are then invoked over the Internet. Web services are the basic building blocks of Service-Oriented Architecture (SOA). They can be developed based on two interaction styles: Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). It is important to select the appropriate interaction style, i.e., either SOAP or REST, when building web services. Choosing the service interaction style is an important architectural decision for designers and developers, as it influences the underlying requirements for implementing web service solutions. In this study, the performance of web services for Enterprise Application Integration (EAI) based on SOAP and REST is compared. Since web services operate over a network, throughput and response time are considered as the evaluation metrics.
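
The kind of measurement such a comparison relies on can be sketched as follows, assuming a placeholder endpoint; the SOAP side would be timed the same way (e.g., through a SOAP client such as zeep).

```python
# Minimal response-time measurement for a REST endpoint.
import time
import statistics
import requests

URL = "https://example.com/api/orders/123"  # placeholder REST endpoint

samples = []
for _ in range(20):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    samples.append(time.perf_counter() - start)

print(f"mean={statistics.mean(samples) * 1000:.1f} ms, "
      f"p95={sorted(samples)[int(0.95 * len(samples))] * 1000:.1f} ms")
```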

Journal ArticleDOI
TL;DR: The 45 most downloaded WA extensions for Mozilla Firefox and Google Chrome are appraised, and a systematic literature review is conducted to identify which quality issues received the most attention in the literature.
Abstract: Today’s web personalization technologies use approaches like user categorization, configuration, and customization but do not fully support individualized requirements. As a significant portion of our social and working interactions are migrating to the web, we can expect an increase in these kinds of minority requirements. Browser-side transcoding holds the promise of facilitating this aim by opening personalization to third parties through web augmentation (WA), realized in terms of extensions and userscripts. WA is to the web what augmented reality is to the physical world: to layer relevant content/layout/navigation over the existing web to improve the user experience. From this perspective, WA is not as powerful as web personalization since its scope is limited to the surface of the web. However, it permits this surface to be tuned by developers other than the sites’ webmasters. This opens up the web to third parties who might come up with imaginative ways of adapting the web surface for their own purposes. Its success is backed up by millions of downloads. This work looks at this phenomenon, delving into the “what,” the “why,” and the “what for” of WA, and surveys the challenges ahead for WA to thrive. To this end, we appraise the most downloaded 45 WA extensions for Mozilla Firefox and Google Chrome as well as conduct a systematic literature review to identify what quality issues received the most attention in the literature. The aim is to raise awareness about WA as a key enabler of the personal web and point out research directions.

Journal ArticleDOI
TL;DR: This article proposes a new mixed integer linear program to represent the QoS-aware web service composition problem with a polynomial number of variables and constraints, and presents experimental results showing that the proposed model is able to solve large instances.
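
As an illustration of the problem class (not the paper's formulation, which is more compact and general), the sketch below encodes a tiny QoS-aware composition as an integer program with PuLP: one candidate per abstract task, minimizing cost under an end-to-end latency cap. All services and numbers are made up.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

# Candidate services per abstract task: (name, cost, latency in ms).
tasks = {
    "auth":    [("s1", 2.0, 10), ("s2", 1.0, 30)],
    "payment": [("s3", 5.0, 20), ("s4", 3.0, 60)],
}

prob = LpProblem("composition", LpMinimize)
x = {(t, s): LpVariable(f"x_{t}_{s}", cat=LpBinary)
     for t, cands in tasks.items() for s, _, _ in cands}

# Objective: total cost of the selected services.
prob += lpSum(x[t, s] * c for t, cands in tasks.items() for s, c, _ in cands)
# Exactly one candidate per task.
for t, cands in tasks.items():
    prob += lpSum(x[t, s] for s, _, _ in cands) == 1
# End-to-end latency cap (sequential composition assumed).
prob += lpSum(x[t, s] * l for t, cands in tasks.items() for s, _, l in cands) <= 60

prob.solve()
print([k for k, var in x.items() if value(var) == 1])  # e.g. s2 + s3
```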

Journal ArticleDOI
TL;DR: A substantial amount of work remains to be done to improve the current state of research in the area of supporting semantic web services, and some approaches available for semantically annotating functional and non-functional aspects of web services are identified.
Abstract:
Context: Semantically annotating web services is gaining more attention as an important aspect to support the automatic matchmaking and composition of web services. Therefore, the support of well-known and agreed ontologies and tools for the semantic annotation of web services is becoming a key concern to help the diffusion of semantic web services.
Objective: The objective of this systematic literature review is to summarize the current state of the art for supporting the semantic annotation of web services by providing answers to a set of research questions.
Method: The review follows a predefined procedure that involves automatically searching well-known digital libraries. As a result, a total of 35 primary studies were identified as relevant. A manual search led to the identification of 9 additional primary studies that were not reported during the automatic search of the digital libraries. Required information was extracted from these 44 studies against the selected research questions and finally reported.
Results: Our systematic literature review identified some approaches available for semantically annotating functional and non-functional aspects of web services. However, many of the approaches are either not validated or the validation done lacks credibility.
Conclusion: We believe that a substantial amount of work remains to be done to improve the current state of research in the area of supporting semantic web services.

Journal ArticleDOI
TL;DR: BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment.
Abstract: Recent years have seen massive, distributed datasets become the norm in neuroimaging research, and the methodologies used to analyze them have, in response, become more collaborative and exploratory. Tools and infrastructure are continuously being developed and deployed to facilitate research in this context: grid computation platforms to process the data, distributed data stores to house and share them, high-speed networks to move them around and collaborative, often web-based, platforms to provide access to and sometimes manage the entire system. BrainBrowser is a lightweight, high-performance JavaScript visualization library built to provide easy-to-use, powerful, on-demand visualization of remote datasets in this new research environment. BrainBrowser leverages modern Web technologies, such as WebGL, HTML5 and Web Workers, to visualize 3D surface and volumetric neuroimaging data in any modern web browser without requiring any browser plugins. It is thus trivial to integrate BrainBrowser into any web-based platform. BrainBrowser is simple enough to produce a basic web-based visualization in a few lines of code, while at the same time being robust enough to create full-featured visualization applications. BrainBrowser can dynamically load the data required for a given visualization, so no network bandwidth needs to be wasted on data that will not be used. BrainBrowser's integration into the standardized web platform also allows users to consider using 3D data visualization in novel ways, such as for data distribution, data sharing and dynamic online publications. BrainBrowser is already being used in two major online platforms, CBRAIN and LORIS, and has been used to make the 1TB MACACC dataset openly accessible.

Proceedings ArticleDOI
27 Jun 2015
TL;DR: A case study of evolving Web APIs to investigate what changes are made between versions and how the changes are documented and communicated to the API users and a list of recommendations for practitioners and researchers based on API change profiles, versioning, documentation and communication approaches.
Abstract: When applications are integrated using web APIs, changes on a web API may break the dependent applications. This problem exists because old versions of the APIs may no longer be supported, a lack of adequate documentation to upgrade to a newer version, and insufficient communication of changes. In this paper we conducted a case study of evolving Web APIs to investigate what changes are made between versions and how the changes are documented and communicated to the API users. The findings are a list of recommendations for practitioners and researchers based on API change profiles, versioning, documentation and communication approaches that are observed in practice. This study will help inform developers of evolving Web APIs to make decision about versioning, documentation and communication methods.