
Showing papers on "Web modeling published in 2018"


Journal ArticleDOI
TL;DR: A taxonomy of auto-scalers according to the identified challenges and key properties is presented and new future directions that can be explored in this area are proposed.
Abstract: Web application providers have been migrating their applications to cloud data centers, attracted by the emerging cloud computing paradigm. One of the appealing features of the cloud is elasticity. It allows cloud users to acquire or release computing resources on demand, which enables web application providers to automatically scale the resources provisioned to their applications without human intervention under a dynamic workload to minimize resource cost while satisfying Quality of Service (QoS) requirements. In this article, we comprehensively analyze the challenges that remain in auto-scaling web applications in clouds and review the developments in this field. We present a taxonomy of auto-scalers according to the identified challenges and key properties. We analyze the surveyed works and map them to the taxonomy to identify the weaknesses in this field. Moreover, based on the analysis, we propose new future directions that can be explored in this area.

172 citations
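The elasticity mechanism the survey above analyzes can be illustrated with a minimal rule-based auto-scaler sketch. The class name, thresholds, and single-instance scaling step are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of a threshold-based auto-scaler: scale out when average
# utilisation exceeds an upper bound, scale in when it falls below a lower
# bound, and never leave the configured instance range.
from dataclasses import dataclass

@dataclass
class AutoScaler:
    min_instances: int = 1
    max_instances: int = 10
    upper: float = 0.75   # scale out above 75% average utilisation
    lower: float = 0.30   # scale in below 30% average utilisation

    def decide(self, instances: int, avg_utilisation: float) -> int:
        """Return the new instance count for the observed utilisation."""
        if avg_utilisation > self.upper and instances < self.max_instances:
            return instances + 1
        if avg_utilisation < self.lower and instances > self.min_instances:
            return instances - 1
        return instances

scaler = AutoScaler()
print(scaler.decide(instances=4, avg_utilisation=0.9))  # scale out -> 5
print(scaler.decide(instances=4, avg_utilisation=0.1))  # scale in  -> 3
print(scaler.decide(instances=4, avg_utilisation=0.5))  # hold      -> 4
```

Real auto-scalers surveyed in the paper differ mainly in how they predict load and size the scaling step; the rule structure above is the simplest reactive baseline.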


Journal ArticleDOI
TL;DR: This paper introduces QoS evaluation models, proposes a mathematical model of QoS applied to the Web service composition optimization problem, and presents a knowledge-based differential evolution algorithm for solving it.
Abstract: The Web service composition problem has been a hot topic recently. With the development of cloud computing technology, a single Web service can no longer meet users’ requirements; service composition offers a proper way to solve this problem. A knowledge-based differential evolution algorithm (KDE) for Web service composition is proposed in this paper. Firstly, we introduce QoS evaluation models and propose a mathematical model of QoS applied to the Web service composition optimization problem. Secondly, we present a knowledge-based differential evolution algorithm used to solve the Web service composition optimization problem. The algorithm improves convergence speed by importing structural knowledge. Finally, simulation experiments and an evaluation methodology are given, and the results show that KDE outperforms the original DE and PSO on the Web service composition problem.

22 citations
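The knowledge-based algorithm itself is not reproduced in the abstract, but the underlying DE/rand/1/bin loop for QoS-driven service selection can be sketched roughly as follows. The QoS table, control parameters, and fitness function are invented for illustration only:

```python
# Sketch of differential evolution (DE/rand/1/bin) selecting one concrete
# service per abstract task to maximise a toy additive QoS score.
import random

random.seed(0)
# qos[i][j] = score of candidate service j for abstract task i (illustrative)
qos = [[0.2, 0.9, 0.5], [0.7, 0.1, 0.8], [0.4, 0.6, 0.3]]

def fitness(vec):
    # Map each continuous gene in [0, 1) to a candidate index, sum the scores.
    return sum(row[int(g * len(row))] for row, g in zip(qos, vec))

def differential_evolution(pop_size=20, gens=50, F=0.5, CR=0.9):
    dim = len(qos)
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i, x in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [min(max(a[d] + F * (b[d] - c[d]), 0.0), 0.999)
                     if random.random() < CR else x[d] for d in range(dim)]
            if fitness(trial) > fitness(x):  # greedy selection
                pop[i] = trial
    return max(pop, key=fitness)

best = differential_evolution()
print(round(fitness(best), 2))  # optimum of this toy table is 0.9 + 0.8 + 0.6 = 2.3
```

The paper's KDE additionally injects structural knowledge to accelerate convergence; this sketch shows only the plain DE baseline it is compared against.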


Journal ArticleDOI
TL;DR: The feasibility of ontology-driven automation of web service development that is to be a core element in the deployment of heterogeneous district-wide energy management software is assessed.

22 citations


Journal ArticleDOI
TL;DR: This paper proposes an approach for defining and evolving Web Augmentation requirements using rich visual prototypes and textual descriptions that can be automatically mapped onto running software artifacts, in place of free-form textual requirements that are usually hard for scripters to interpret.
Abstract: Web Applications are accessed by millions of users with different needs, goals, concerns, and preferences. Several well-known Web Applications provide personalized features, e.g., they recommend specific content to users by considering individual characteristics or requirements. However, since most Web Applications cannot consider all users’ requirements, many developers have started to create their own mechanisms for adapting existing applications. One of the most popular techniques for adapting third-party applications is Web Augmentation, which is based on altering an application’s original user interface, generally by using scripts running at the client side (e.g., the browser). In the context of Web Augmentation, two user roles have emerged: scripters, who are users able to create a new augmentation artifact, and end users without programming skills, who just consume the artifacts that may totally or partially satisfy their needs. Scripters and end users generally do not know each other and rarely have contact beyond using the same script repositories. When end users cannot get their needs covered by existing artifacts, they request new ones by specifying their requirements (called Web Augmentation requirements) using textual descriptions, which are usually hard for scripters to interpret. Web Augmentation requirements are a very particular kind of Web requirement for which a partial solution already exists, implemented by the Web site owner, but users still need to change or augment that implementation for very specific purposes they want available on the site. In this paper, we propose an approach for defining and evolving Web Augmentation requirements using rich visual prototypes and textual descriptions that can be automatically mapped onto running software artifacts. We present a tool implemented to support this approach, and we show an evaluation of both the approach and the tool.

21 citations


Journal ArticleDOI
TL;DR: This paper presents a framework for adaptive streaming of interactive Web 3D scenes to web clients using the MPEG-DASH standard, and offers an analysis of how the standard’s Media Presentation Description schema can be used to describe adaptive Web 3D scenes for streaming.
Abstract: Modern Web 3D technologies allow us to display complex interactive 3D content, including models, textures, sounds and animations, using any HTML-enabled web browser. Thus, due to the device-independent nature of HTML5, the same content might have to be displayed on a wide range of different devices and environments. This means that the display of Web 3D content is faced with the same Quality of Experience (QoE) issues as other multimedia types, concerning bandwidth, computational capabilities of the end device, and content quality. In this paper, we present a framework for adaptive streaming of interactive Web 3D scenes to web clients using the MPEG-DASH standard. We offer an analysis of how the standard’s Media Presentation Description schema can be used to describe adaptive Web 3D scenes for streaming, and explore the types of metrics that can be used to maximize the user’s QoE. Then, we present a prototype client we have developed, and demonstrate how the 3D streaming process can take place over such a client. Finally, we discuss how the client framework can be used to design adaptive streaming policies that correspond to real-world scenarios.

18 citations
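A DASH-style client such as the one described typically picks a representation advertised in the Media Presentation Description based on measured throughput. A hedged sketch of that selection step, with hypothetical 3D representations and a conventional safety margin (neither taken from the paper):

```python
# Illustrative rate-based adaptation: pick the highest-bandwidth
# representation that fits within the measured throughput.
representations = [  # (id, required bandwidth in kbit/s) - hypothetical
    ("low-poly", 500),
    ("mid-poly", 2000),
    ("high-poly", 8000),
]

def select_representation(throughput_kbps, reps=representations, safety=0.8):
    """Return the best representation whose bandwidth fits a safety margin."""
    usable = throughput_kbps * safety
    best = reps[0][0]  # always fall back to the lowest quality
    for rep_id, bw in reps:
        if bw <= usable:
            best = rep_id
    return best

print(select_representation(12000))  # -> high-poly
print(select_representation(3000))   # -> mid-poly
print(select_representation(400))    # -> low-poly
```

For 3D scenes, the paper notes that QoE metrics beyond bandwidth (device capability, content quality) can feed the same decision; they would simply become extra terms in the selection rule.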


Journal ArticleDOI
TL;DR: A quality model that captures the particularities of social Web applications when used on mobile devices is introduced and a relevance of performance variables at different levels of granularity in a mobile quality requirements tree is uncovered.
Abstract: When used in a mobile ecosystem, social Web applications are commonly criticized due to their poor quality. We believe this is accounted for by the inadequacy of current approaches for their evaluation as well as the lack of suitable quality models. With an objective to address the aforementioned issues, this paper introduces a quality model that captures the particularities of social Web applications when used on mobile devices. Drawing on the comprehensive literature review, a finite set of performance variables (items, attributes, and categories) that contribute to the mobile quality of social Web applications was identified and subsequently employed for the purpose of designing a conceptual model in the form of a mobile quality requirements tree. An empirical study was then carried out to assess the reliability and validity of the conceptual model and pertaining measuring instrument. During the study, participants accomplished predefined scenarios of interaction with a representative sample of social Web applications for collaborative writing and evaluated their mobile quality by completing the post-use questionnaire. An analysis of data collected from end users uncovered a relevance of performance variables at different levels of granularity in a mobile quality requirements tree as well as pros and cons of evaluated collaborative editors.

15 citations


Journal ArticleDOI
TL;DR: A knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies is presented.
Abstract: Web content nowadays can also be accessed through new generation of Internet connected TVs. However, these products failed to change users’ behavior when consuming online content. Users still prefer personal computers to access Web content. Certainly, most of the online content is still designed to be accessed by personal computers or mobile devices. In order to overcome the usability problem of Web content consumption on TVs, this paper presents a knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies. As a use case, Wikipedia articles are automatically converted into videos. The effectiveness of the proposed system is validated empirically via opinion surveys. 50% of survey users indicated that they found generated videos enjoyable, and 42% of them indicated that they would like to use our system to consume Web content on their TVs.

9 citations


Journal ArticleDOI
TL;DR: The main functionalities of the current ArchiveWeb system for searching, constructing, exploring, and discussing web archive collections are described and the feedback received from archiving organizations and libraries are summarized.
Abstract: Curated web archive collections contain focused digital content which is collected by archiving organizations, groups, and individuals to provide a representative sample covering specific topics and events to preserve them for future exploration and analysis. In this paper, we discuss how to best support collaborative construction and exploration of these collections through the ArchiveWeb system. ArchiveWeb has been developed using an iterative evaluation-driven design-based research approach, with considerable user feedback at all stages. The first part of this paper describes the important insights we gained from our initial requirements engineering phase during the first year of the project and the main functionalities of the current ArchiveWeb system for searching, constructing, exploring, and discussing web archive collections. The second part summarizes the feedback we received on this version from archiving organizations and libraries, as well as our corresponding plans for improving and extending the system for the next release.

8 citations


Journal ArticleDOI
TL;DR: A thorough survey of the most relevant published proposals on web services privacy during transactions is presented, identifying 20 works that address privacy-related problems in web services consumption.
Abstract: The web service computing paradigm has introduced great benefits to the growth of e-markets, both under the customer-to-business and the business-to-business models. The value capabilities allowed by the conception of web services, such as interoperability, efficiency, just-in-time integration, etc., have made them the most common way of doing business online. With the maturation of the web services’ underlying functional properties and facilitating standards, and with the proliferation of the amounts of data they use and generate, researchers and practitioners have been dedicating considerable efforts to the related emerging privacy concerns. The literature contains a number of research works on these privacy concerns, each addressing them from a different focal point. We have explored the available literature on web services privacy during transactions to present, in this paper, a thorough survey of the most relevant published proposals. We identified 20 works that address privacy-related problems in web services consumption. We categorize them based on the approach they take, and we compare them based on a proposed evaluation framework derived from the adopted techniques and addressed requirements.

7 citations


Book ChapterDOI
20 May 2018
TL;DR: A self-adaptive approach to context-aware web service composition named SADICO (Self-ADaptIve web service COmposition) is proposed, based on the MAPE model to ensure self-adaptation.
Abstract: Web service compositions are rapidly gaining acceptance as a fundamental technology in the web field. They are becoming the cutting edge of communication between different applications all over the web. With the need for ubiquitous computing and the pervasive use of mobile devices, context-aware web service composition has become a hot topic. It aims at adapting the web service composition behavior according to the user’s context, such as his specific working environment, language, type of Internet connection, devices, and preferences. Many solutions have been proposed in this area. Nevertheless, the adaptation was carried out only at run-time and only partially covered the user’s general context. In this paper, we propose a self-adaptive approach to context-aware web service composition named SADICO (Self-ADaptIve web service COmposition). Our approach considers a generic context and is based on the MAPE model (Monitoring, Analysis, Planning, and Execution) to ensure self-adaptation.

6 citations


Journal ArticleDOI
TL;DR: The co-clustering results of the experimental dataset reveal a number of interesting and interpretable connectivity structural patterns among web objects, which are useful for more comprehensive understanding of web page architecture and provide valuable data for e-commerce, social networking, search engine, etc.
Abstract: Web objects are the entities retrieved from websites by users to compose the web pages. Therefore, exploring the relationships among web objects has theoretical and practical significance for many important applications, such as content recommendation, web page classification, and network security. In this paper, we propose a graph model named Bipartite Request Dependency Graph (BRDG) to investigate the relationships among web objects. To build the BRDG from massive network traffic data, we design and implement a parallel algorithm by leveraging the MapReduce programming model. Based on the study of a number of BRDGs derived from real wireless network traffic datasets, we find that the BRDG is large, sparse and complex, implying that it is very hard to derive the structural characteristics of the BRDG. Towards this end, we propose a co-clustering algorithm to decompose and extract coherent co-clusters from the BRDG. The co-clustering results of the experimental dataset reveal a number of interesting and interpretable connectivity structural patterns among web objects, which are useful for more comprehensive understanding of web page architecture and provide valuable data for e-commerce, social networking, search engine, etc.
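The MapReduce construction of the BRDG can be sketched in miniature: the map phase emits (referring page, requested object) pairs from traffic records, and the reduce phase merges them into per-page adjacency sets. The log records and field layout below are illustrative assumptions, not the paper's dataset:

```python
# Map/reduce-style sketch of building a bipartite request dependency graph
# from (referring page, requested object) log records.
from collections import defaultdict

logs = [  # hypothetical traffic records
    ("news.example/index.html", "cdn.example/logo.png"),
    ("news.example/index.html", "ads.example/banner.js"),
    ("shop.example/cart.html", "cdn.example/logo.png"),
]

def map_phase(record):
    page, obj = record
    yield (page, obj)  # key: referring page, value: embedded object

def reduce_phase(pairs):
    graph = defaultdict(set)
    for page, obj in pairs:
        graph[page].add(obj)  # one partition per page in a real MapReduce job
    return graph

pairs = [kv for record in logs for kv in map_phase(record)]
brdg = reduce_phase(sorted(pairs))
print(sorted(brdg["news.example/index.html"]))
# -> ['ads.example/banner.js', 'cdn.example/logo.png']
```

Even in this tiny example, both pages depend on a shared CDN object; at scale, surfacing such shared-dependency structure is what the paper's co-clustering step does.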

Proceedings Article
24 May 2018
TL;DR: A simple but flexible server-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web services.
Abstract: Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web services. The resulting framework provides a novel foundation for developing maintainable and secure web applications.

Book ChapterDOI
01 Jan 2018
TL;DR: Web 2.0 technologies have various benefits by enhancing the opportunities for business collaboration and by sharing knowledge through online communities of practice toward gaining improved organizational performance.
Abstract: This chapter describes the overview of Web 2.0 technologies; Web 2.0 applications in learning and education; Web 2.0 applications in academic libraries; Web 2.0 applications in Knowledge Management (KM); the perspectives of Health Information Technology (health IT); the multifaceted applications of health IT; IT and Technology Acceptance Model (TAM); and the significance of health IT in the health care industry. Web 2.0 is the platform of the network which spans all connected services so that users can utilize them more efficiently. Web 2.0 technologies have various benefits by enhancing the opportunities for business collaboration and by sharing knowledge through online communities of practice toward gaining improved organizational performance. Health IT includes utilizing technology to electronically store, protect, retrieve, and transfer the information in modern health care. Health IT has great potential to improve the quality, safety, and efficiency of health care services in the health care industry.

DOI
27 Feb 2018
TL;DR: It is shown that non-expert users can create, and for the most part enjoy creating, the mappings required for IILR; the different behaviors observed among participants are described and related to the survey data from users.
Abstract: Information integration with local radiance (IILR) is a system designed for use in web development frameworks that allows for the creation of polymorphic widgets based on small schema fragments, with mappings to local schemas that allow non-expert users to instantiate these widgets in their sites. Here, we present the results of a user study using IILR. We show that non-expert users can create, and for the most part enjoy creating, the mappings required for our system. We describe the different behaviors observed among our participants and relate these behaviors to the survey data from our users.

Book ChapterDOI
01 Jan 2018
TL;DR: An Agile and Collaborative Model-Driven Development framework for web applications (AC-MDD Framework) is presented, aiming to increase productivity by generating source code from models and also reducing the waste of resources on the modeling and documenting stages of a web application.
Abstract: Given the need to investigate and present new solutions that combine agile modeling practices, MDD, and collaborative development for clients and developers to successfully create web applications, this paper’s goal is to present an Agile and Collaborative Model-Driven Development framework for web applications (AC-MDD Framework). The framework aims to increase productivity by generating source code from models and by reducing the waste of resources in the modeling and documenting stages of a web application. To fulfill this goal, we use new visual constructs from a new UML profile called Agile Modelling Language for Web Applications (WebAgileML) and the Web-ACMDD Method to operate the AC-MDD Framework. The methodology of this paper was successfully applied to an academic project, demonstrating the feasibility of the proposed framework, method, and profile.

28 Jul 2018
TL;DR: In this article, the authors present a web service architecture for providing e-learning and dedicated information system facilities, based on a single-sign on authentication framework and database synchronization facilities.
Abstract: The paper presents the design and implementation of a web service architecture for providing e-learning and dedicated information system facilities. The web portal we describe is a solution for information system integration, based on a single-sign-on authentication framework and database synchronization facilities. The solution is based on MS technology and provides integrated e-learning and dedicated information system facilities. User category permissions for accessing the portal services – both for the genuine portal functionalities and for the dedicated information system ones – are modelled based on specific groups that are retained in a global database. We consider that the solution we propose has a good degree of generality and may be applied in various organizational settings.

11 Aug 2018
TL;DR: In this paper, a meta-model based approach is proposed to support end-user development of web applications to support their business processes, where end-users can actively participate in web application development using tools to populate and instantiate the meta model.
Abstract: End-user development is proposed as a solution to the issues business organisations face when developing web applications to support their business processes. We propose a meta-model based development approach to support end-user development: end users can actively participate in web application development using tools to populate and instantiate the meta-model. The meta-model has three abstraction levels: Shell, Application, and Function. At the Shell Level, we model aspects common to all business web applications, such as navigation and access control. At the Application Level, we model aspects common to specific web applications, such as workflows. At the Function Level, we model requirements specific to the identified use cases. In this paper we discuss how we have solved the issues in application development for business end users, such as the need for a central repository of data, a common login, an optimised user model, application portability, and the balance between “Do it Yourself” (DIY) and professional developers in a hierarchical meta-model approach. These solutions are being incorporated into the Component Based E-Application Development and Deployment Shell (CBEADS) version 4, which supports the meta-model implementation. We believe these solutions will help end users efficiently and effectively develop web applications using a meta-model based development approach.

Book ChapterDOI
01 Jan 2018
TL;DR: The results demonstrate the enormous dependence on search engine coverage for quantifying the web size of companies, the strong correlation between the various mention metrics, the considerable inconsistencies in the different external sources of usage metrics, and how certain formal aspects can affect the perception of content quality and how they are optimized in search results.
Abstract: The main objectives of this chapter are the extraction and analysis of a wide range of web metrics (size, mention, usage, and formal aspects) relating to the web spaces of a sample of 184 international biotechnology companies, using a set of horizontal web sources. The central theme of the chapter is not the analysis of the biotechnology sector in itself but rather the study of the properties of the web metrics obtained from various statistical analyses. The results demonstrate the enormous dependence on search engine coverage for quantifying the web size of companies, the strong correlation between the various mention metrics, the considerable inconsistencies in the different external sources of usage metrics, and how certain formal aspects (such as page speed or website usability) can affect the perception of content quality and how they are optimized in search results. All the metrics that we obtained were very unstable and too dependent on the data source and on the precision of the available search commands. Statistically, both the nonlinear nature of the data and the existence of outliers must also be borne in mind in order to properly interpret the results.

Journal ArticleDOI
TL;DR: The results of an evaluation of 2,000 web services indicate that resource dependency processing can be up to a factor of two faster compared to a traditional processing approach while an average model fit of 97 percent allows an accurate prediction.
Abstract: The upsurge of mobile devices paired with highly interactive social web applications generates enormous amounts of requests web services have to deal with. Consequently in our previous work, a novel request flow scheme with scalable components was proposed for storing interdependent, permanently updated resources in a database. The major challenge is to process dependencies in an optimal fashion while maintaining dependency constraints. In this work, three research objectives are evaluated by examining resource dependencies and their key graph measurements. An all-sources longest-path algorithm is presented for efficient processing and dependencies are analysed to find correlations between performance and graph measures. Two algorithms basing their parameters on six real-world web service structures, e.g., Facebook Graph API are developed to generate dependency graphs and a model is developed to estimate performance based on resource parameters. An evaluation of four graph series discusses performance effects of different graph structures. The results of an evaluation of 2,000 web services with over 850 thousand resources and 6 million requests indicate that resource dependency processing can be up to a factor of two faster compared to a traditional processing approach while an average model fit of 97 percent allows an accurate prediction.
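The all-sources longest-path computation over a resource dependency DAG, as referenced above, can be sketched with a Kahn-style topological pass; the graph below is illustrative, not one of the paper's web service structures:

```python
# Longest path (in edges) from any source to each node of a dependency DAG,
# computed in a single topological pass; this depth bounds how many
# sequential processing rounds a resource must wait for.
from collections import deque

# successors: resource -> resources that depend on it (illustrative)
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def longest_paths(succ):
    indeg = {n: 0 for n in succ}
    for n in succ:
        for m in succ[n]:
            indeg[m] += 1
    dist = {n: 0 for n in succ}
    queue = deque(n for n in succ if indeg[n] == 0)  # start at the sources
    while queue:
        n = queue.popleft()
        for m in succ[n]:
            dist[m] = max(dist[m], dist[n] + 1)  # relax along each edge once
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return dist

print(longest_paths(succ))  # -> {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```

Each edge is relaxed exactly once, so the pass is linear in nodes plus edges, which is what makes this kind of dependency processing scale to graphs with millions of requests.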

Journal ArticleDOI
TL;DR: A Web service selection method is proposed to select atomic services and sets of correlated services to meet users’ functional and QoS (quality of service) requirements.
Abstract: Realizing Web service organization and management quickly and accurately, and building an effective service selection mechanism that chooses correlated services to meet users’ functional and non-functional requests, and thus their individual and dynamically changing requirements, is a key problem in Service-Oriented Software Engineering (SOSE). In our method, Web service and ontology information are stored in a relational database (RDB), which realizes Web service aggregation and selection in terms of service interface (Input and Output) and execution capability (Precondition and Effect). Firstly, a Web service clustering method based on the self-join operation in the RDB is proposed to cluster services efficiently. Then an abstract service extraction method is used to obtain abstract services, and a Web service aggregation approach based on the join operation organizes the clustered services. Finally, a Web service selection method is proposed to select atomic services and sets of correlated services to meet users’ functional and QoS (quality of service) requirements. In addition, a case study and experiments are used to explain and verify the effectiveness of the proposed methods.
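The self-join-based clustering step described above can be sketched as SQL over an in-memory SQLite database; the schema, rows, and the idea of clustering purely on matching input/output signatures are illustrative simplifications, not the paper's exact method:

```python
# Sketch of interface-based service clustering via an RDB self-join:
# services sharing the same (input, output) signature fall into one cluster.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE service (name TEXT, input TEXT, output TEXT)")
con.executemany("INSERT INTO service VALUES (?, ?, ?)", [
    ("WeatherA", "city", "forecast"),
    ("WeatherB", "city", "forecast"),
    ("GeoCode", "address", "city"),
])
# Self-join on the signature columns; name < name avoids duplicate pairs.
rows = con.execute("""
    SELECT s1.name, s2.name FROM service s1
    JOIN service s2 ON s1.input = s2.input AND s1.output = s2.output
    WHERE s1.name < s2.name
""").fetchall()
print(rows)  # -> [('WeatherA', 'WeatherB')]
```

Keeping the clustering inside the database, as the paper advocates, lets the join planner do the pairwise signature matching instead of application code.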

Journal ArticleDOI
TL;DR: This is the first attempt to analyze medical-related nomenclature using R for web site classification; a pair of Java tools called “website” and “webscrap” were also developed to automatically fetch pages and analyze the downloaded content.
Abstract: With the phenomenal growth of the World Wide Web and the huge number of web sites with so many web pages, finding relevant information becomes quite difficult for web users. Web site promoters also need to ensure that their web site is at the top of the search results for better market share. Finding the right combination of keywords and placing them in the appropriate places may help bring a web site to the top of the search results when a search engine crawls web sites and creates indexes. Over the years, web site optimization research has reported encouraging results. Despite these promising results, data mining techniques for web site classification have hardly been applied. An appropriate implementation of web page classification may help in designing web pages accordingly. In this work, data mining using R’s web scraping technique has been used to extract web pages, and a pair of Java tools called “website” and “webscrap” were developed to automatically fetch pages and further analyze the downloaded pages. This is the first attempt to analyze medical-related nomenclature using R for web site classification.
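The scrape-then-analyze step described above can be approximated with Python's standard library in place of R; the HTML snippet, tag-stripping regex, and keyword are illustrative assumptions, not the paper's tools or data:

```python
# Toy keyword-frequency analysis of a downloaded page: strip tags,
# tokenise, and count term occurrences (the signal keyword-placement
# studies look at).
import re
from collections import Counter

html = """<html><head><title>Cardiology Clinic</title></head>
<body><h1>Cardiology</h1><p>Our cardiology department treats heart disease.</p>
</body></html>"""

def term_frequencies(page: str) -> Counter:
    text = re.sub(r"<[^>]+>", " ", page)          # drop markup
    words = re.findall(r"[a-z]+", text.lower())   # tokenise to lowercase words
    return Counter(words)

freq = term_frequencies(html)
print(freq["cardiology"])  # -> 3
```

A production classifier would use a real HTML parser rather than a regex, but the frequency table is the same feature the classification step would consume.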

10 Aug 2018
TL;DR: A conceptual framework for the evaluation of mechanisms for security engineering for web applications is developed, which is not limited to the web, and also has a focus on security.
Abstract: This paper reports on the progress of the author’s PhD in the area of security engineering for web applications. Initially, the work was located at the beginning of the Software Development Life Cycle (SDLC), with a focus on design. However, designing a perfectly secure application is worth nothing if it is not possible for security engineers to choose appropriate methods, notations, and tools (so-called mechanisms) to work with in each phase of the SDLC. Therefore, we additionally develop a conceptual framework for the evaluation of these mechanisms, which is not limited to the web and also has a focus on security. At the moment, almost two-thirds of the work for the PhD is done, which means that most underlying ideas are written down, but further case studies and evaluations will follow.

16 Aug 2018
TL;DR: This paper presents a verification toolkit whose design and implementation exploit the Web service architectural paradigm, describes its architectural design, and discusses in detail the current implementation efforts.
Abstract: Web services allow the components of applications to be highly decentralized and dynamically reconfigurable. Moreover, Web services can interoperate easily inside a heterogeneous network environment. The vast majority of currently available verification environments have been built by sticking to traditional architectural styles. Hence, they are centralized, and none of them deal with interoperability and dynamic reconfigurability. In this paper we present a verification toolkit whose design and implementation exploit the Web service architectural paradigm. We describe the architectural design and discuss in detail the current implementation efforts.

17 Aug 2018
TL;DR: This work proposes a model called MaaS (Multimedia as a Service), through which multimedia content providers expose their content; content access is done through a concept hierarchy.
Abstract: Multimedia content is derived from various autonomous, distributed, and heterogeneous content sources. To address the problems posed by content source heterogeneity, a service-oriented architecture is proposed to ensure dynamic integration of multimedia content. In this work we propose a model that we call MaaS (Multimedia as a Service), through which multimedia content providers expose their content. Once a MaaS is discovered, it is classified into categories of concepts based on a domain ontology, and content access is done through a concept hierarchy. The sport domain is used to validate the proposed model.