
Showing papers on "Web modeling" published in 2013


Journal ArticleDOI
TL;DR: Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces, which allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows.
Abstract: Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
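
As a concrete illustration of the kind of programmatic access described above (our sketch, not one of the EMBL-EBI sample clients), the following Python snippet retrieves a UniProtKB entry in FASTA format via the dbfetch REST interface; the accession P12345 and the parameter values are illustrative, so consult the EMBL-EBI documentation for current details.

```python
# Minimal sketch: fetch a database entry from the EMBL-EBI dbfetch
# REST interface. Parameter values are illustrative assumptions.
import urllib.parse
import urllib.request

BASE = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"

def dbfetch(db: str, entry_id: str, fmt: str = "fasta") -> str:
    """Retrieve one entry as plain text."""
    query = urllib.parse.urlencode(
        {"db": db, "id": entry_id, "format": fmt, "style": "raw"}
    )
    with urllib.request.urlopen(f"{BASE}?{query}") as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    print(dbfetch("uniprotkb", "P12345"))  # arbitrary example accession
```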

1,562 citations


Journal ArticleDOI
TL;DR: This paper proposes a collaborative quality-of-service (QoS) prediction approach for web services by taking advantage of the past web service usage experiences of service users, and achieves higher prediction accuracy than other approaches.
Abstract: With the increasing presence and adoption of web services on the World Wide Web, the demand for efficient web service quality evaluation approaches is becoming unprecedentedly strong. To avoid expensive and time-consuming web service invocations, this paper proposes a collaborative quality-of-service (QoS) prediction approach for web services by taking advantage of the past web service usage experiences of service users. We first apply the concept of user-collaboration for web service QoS information sharing. Then, based on the collected QoS data, a neighborhood-integrated approach is designed for personalized web service QoS value prediction. To validate our approach, large-scale real-world experiments are conducted, which include 1,974,675 web service invocations from 339 service users on 5,825 real-world web services. The comprehensive experimental studies show that our proposed approach achieves higher prediction accuracy than other approaches. The public release of our web service QoS data set provides valuable real-world data for future research.
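
To make the neighborhood idea concrete, here is a minimal user-based collaborative filtering sketch in Python; it is our illustration of the general technique, not the paper's exact neighborhood-integrated model. A missing QoS value is predicted from the mean-centered deviations of the most Pearson-similar users.

```python
# Sketch of neighborhood-based QoS prediction (illustrative only).
# Rows = users, columns = services; np.nan marks unobserved invocations.
import numpy as np

def predict(qos: np.ndarray, user: int, service: int, k: int = 2) -> float:
    observed = ~np.isnan(qos)
    sims = []
    for other in range(qos.shape[0]):
        if other == user or np.isnan(qos[other, service]):
            continue
        common = observed[user] & observed[other]
        if common.sum() < 2:
            continue
        # Pearson correlation over co-invoked services.
        sim = np.corrcoef(qos[user, common], qos[other, common])[0, 1]
        if not np.isnan(sim):
            sims.append((sim, other))
    top = sorted(sims, reverse=True)[:k]
    if not top:
        return float(np.nanmean(qos[:, service]))
    num = sum(s * (qos[o, service] - np.nanmean(qos[o])) for s, o in top)
    den = sum(abs(s) for s, _ in top)
    return float(np.nanmean(qos[user]) + num / den)

rt = np.array([[0.3, 1.2, np.nan],   # response times per user/service
               [0.4, 1.1, 0.9],
               [2.0, 3.5, 2.8]])
print(round(predict(rt, user=0, service=2), 3))
```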

408 citations


Journal ArticleDOI
TL;DR: This work presents three Web services related to mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
Abstract: Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to develop chemical Web services, through which servers could expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use for Web applications. The ChemCalc project http://www.chemcalc.org demonstrates this approach: we present three Web services related to mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
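
For flavor, a call to such a JSON service might look like the sketch below; the `/chemcalc/mf` endpoint and its `mf` parameter are assumptions based on the paper's description of chemcalc.org, so verify them against the live documentation.

```python
# Illustrative call to a ChemCalc-style JSON web service. The endpoint
# path and parameter names are assumptions, not verified API details.
import json
import urllib.parse
import urllib.request

def molecular_info(formula: str) -> dict:
    query = urllib.parse.urlencode({"mf": formula})
    url = f"https://www.chemcalc.org/chemcalc/mf?{query}"
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(molecular_info("C2H6O"))  # ethanol, an arbitrary example
```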

301 citations


Proceedings ArticleDOI
25 Mar 2013
TL;DR: This paper applies existing methods for web optimization in a novel manner, such that these methods can be combined with unique knowledge that is only available at the edge (Fog) nodes to improve a user's web page rendering performance.
Abstract: In this paper, we consider web optimization within the Fog Computing context. We apply existing methods for web optimization in a novel manner, such that these methods can be combined with unique knowledge that is only available at the edge (Fog) nodes. More dynamic adaptation to the user's conditions (e.g. network status and the device's computing load) can also be accomplished with network-edge-specific knowledge. As a result, a user's web page rendering performance is improved beyond that achieved by simply applying those methods at the web server or CDNs.

274 citations


Journal ArticleDOI
TL;DR: WebProtégé is a lightweight ontology editor and knowledge acquisition tool for the Web that is accessible from any Web browser, has extensive support for collaboration, and a highly customizable and pluggable user interface that can be adapted to any level of user expertise.
Abstract: In this paper, we present WebProtege, a lightweight ontology editor and knowledge acquisition tool for the Web. With the wide adoption of Web 2.0 platforms and the gradual adoption of ontologies and Semantic Web technologies in the real world, we need ontology-development tools that are better suited for the novel ways of interacting, constructing and consuming knowledge. Users today take Web-based content creation and online collaboration for granted. WebProtege integrates these features as part of the ontology development process itself. We tried to lower the entry barrier to ontology development by providing a tool that is accessible from any Web browser, has extensive support for collaboration, and offers a highly customizable and pluggable user interface that can be adapted to any level of user expertise. The declarative user interface enabled us to create custom knowledge-acquisition forms tailored for domain experts. We built WebProtege using the existing Protege infrastructure, which supports collaboration on the back end, and the Google Web Toolkit for the front end. The generic and extensible infrastructure allowed us to easily deploy WebProtege in production settings for several projects. We present the main features of WebProtege and its architecture, and briefly describe some of its uses for real-world projects. WebProtege is free and open source. An online demo is available at http://webprotege.stanford.edu.

190 citations


Journal ArticleDOI
TL;DR: The Group on Earth Observation Model Web initiative utilizes a Model as a Service approach to increase model access and sharing, and a flexible architecture, capable of integrating different existing distributed computing infrastructures, is required to address the performance requirements.
Abstract: The Group on Earth Observation (GEO) Model Web initiative utilizes a Model as a Service approach to increase model access and sharing. It relies on gradual, organic growth leading towards dynamic webs of interacting models, analogous to the World Wide Web. The long-term vision is for a consultative infrastructure that can help address "what if" and other questions that decision makers and other users have. Four basic principles underlie the Model Web: open access, minimal barriers to entry, service-driven, and scalability; any implementation approach meeting these principles will be a step towards the long-term vision. Implementing a Model Web encounters a number of technical challenges, including information modelling, minimizing interoperability agreements, performance, and long-term access, each of which has its own implications. For example, a clear information model is essential for accommodating the different resources published in the Model Web (model engines, model services, etc.), and a flexible architecture, capable of integrating different existing distributed computing infrastructures, is required to address the performance requirements. Architectural solutions, in keeping with the Model Web principles, exist for each of these technical challenges. There are also a variety of other key challenges, including difficulties in making models interoperable; calibration and validation; and social, cultural, and institutional constraints. Although the long-term vision of a consultative infrastructure is clearly an ambitious goal, even small steps towards that vision provide immediate benefits. A variety of activities are now in progress that are beginning to take those steps.

145 citations


Proceedings ArticleDOI
27 Apr 2013
TL;DR: The principles driving design mining, the implementation of the Webzeitgeist architecture, and the new class of data-driven design applications it enables are described.
Abstract: Advances in data mining and knowledge discovery have transformed the way Web sites are designed. However, while visual presentation is an intrinsic part of the Web, traditional data mining techniques ignore render-time page structures and their attributes. This paper introduces design mining for the Web: using knowledge discovery techniques to understand design demographics, automate design curation, and support data-driven design tools. This idea is manifest in Webzeitgeist, a platform for large-scale design mining comprising a repository of over 100,000 Web pages and 100 million design elements. This paper describes the principles driving design mining, the implementation of the Webzeitgeist architecture, and the new class of data-driven design applications it enables.

141 citations


01 Jan 2013
TL;DR: This work introduces AToMPM, an open-source framework for designing domain-specific modeling environments, performing model transformations, and manipulating and managing models, which is independent of any operating system, platform, or device it may execute on.
Abstract: We introduce AToMPM, an open-source framework for designing domain-specific modeling environments, performing model transformations, manipulating and managing models. It runs completely over the web, making it independent of any operating system, platform, or device it may execute on. AToMPM offers an online collaborative experience for modeling. Its unique architecture makes the framework flexible and completely customizable, given that AToMPM is modeled by itself, and external applications can be easily integrated. Demo: https://www.youtube.com/watch?v=iBbdpmpwn6M

130 citations


Book ChapterDOI
30 Jul 2013
TL;DR: This article presents an overview of the Linked Data lifecycle and discusses individual approaches as well as the state of the art with regard to extraction, authoring, linking, enrichment as well as quality of Linked Data.
Abstract: With Linked Data, a very pragmatic approach towards achieving the vision of the Semantic Web has gained traction in recent years. The term Linked Data refers to a set of best practices for publishing and interlinking structured data on the Web. While many standards, methods and technologies developed by the Semantic Web community are applicable for Linked Data, there are also a number of specific characteristics of Linked Data which have to be considered. In this article we introduce the main concepts of Linked Data. We present an overview of the Linked Data lifecycle and discuss individual approaches as well as the state of the art with regard to extraction, authoring, linking, enrichment, and quality of Linked Data. We conclude the chapter with a discussion of issues, limitations and further research and development challenges of Linked Data. This article is an updated version of a similar lecture given at the Reasoning Web Summer School 2011.
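
As a tiny illustration of the publishing-and-interlinking practice the chapter surveys (our sketch, not the authors'), the snippet below uses the rdflib library to describe a resource under an HTTP URI and link it into another dataset; all URIs are hypothetical.

```python
# Minimal Linked Data sketch with rdflib: describe a resource under an
# HTTP URI and interlink it with an external dataset via owl:sameAs.
# Example URIs are hypothetical. Requires: pip install rdflib
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, OWL, RDF

EX = Namespace("http://example.org/people/")

g = Graph()
alice = EX["alice"]
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
# Outgoing RDF link to another dataset (the fourth Linked Data rule).
g.add((alice, OWL.sameAs, URIRef("http://dbpedia.org/resource/Alice")))

print(g.serialize(format="turtle"))
```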

126 citations


Proceedings Article
01 Jan 2013
TL;DR: Hydra, a small vocabulary to describe Web APIs that aims to simplify the development of truly RESTful services by leveraging the power of Linked Data, is developed.
Abstract: Coping with the ever-increasing amount of data becomes increasingly challenging. To alleviate the information overload put on people, systems are progressively being connected directly to each other. They exchange, analyze, and manipulate humongous amounts of data without any human interaction. Most current solutions, however, do not exploit the whole potential of the architecture of the World Wide Web and completely ignore the possibilities offered by Semantic Web technologies. Based on the experiences gained by implementing and analyzing various RESTful APIs, and drawing from the longer history of Semantic Web research, we developed Hydra, a small vocabulary to describe Web APIs. It aims to simplify the development of truly RESTful services by leveraging the power of Linked Data. By breaking the descriptions down into small independent fragments, a new breed of interoperable Web APIs using decentralized, reusable, and composable contracts can be realized.
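
To give a feel for such descriptions, here is a hedged sketch of a Hydra-style JSON-LD fragment built as a Python dictionary; the property names follow the Hydra core vocabulary as we understand it, and the API URLs are hypothetical.

```python
# Sketch of a Hydra-style JSON-LD API description (hypothetical API;
# term names follow the published Hydra core vocabulary).
import json

api_doc = {
    "@context": "http://www.w3.org/ns/hydra/context.jsonld",
    "@id": "http://api.example.com/doc/",
    "@type": "ApiDocumentation",
    "supportedClass": [{
        "@id": "http://api.example.com/vocab#Comment",
        "@type": "Class",
        "supportedOperation": [{
            "@type": "Operation",
            "method": "POST",
            "description": "Creates a new comment.",
        }],
    }],
}

print(json.dumps(api_doc, indent=2))
```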

120 citations


Journal ArticleDOI
TL;DR: This article provides a comprehensive and comparative overview of approaches to modeling argumentation for the Social Semantic Web from theoretical foundational models to Social Web tools for argumentation, following the path to a global World Wide Argument Web.
Abstract: Argumentation represents the study of views and opinions that humans express with the goal of reaching a conclusion through logical reasoning. Since the 1950's, several models have been proposed to capture the essence of informal argumentation in different settings. With the emergence of the Web, and then the Semantic Web, this modeling shifted towards ontologies, while from the development perspective, we witnessed an important increase in Web 2.0 human-centered collaborative deliberation tools. Through a review of more than 150 scholarly papers, this article provides a comprehensive and comparative overview of approaches to modeling argumentation for the Social Semantic Web. We start from theoretical foundational models and investigate how they have influenced Social Web tools. We also look into Semantic Web argumentation models. Finally we end with Social Web tools for argumentation, including online applications combining Web 2.0 and Semantic Web technologies, following the path to a global World Wide Argument Web.

Proceedings ArticleDOI
18 May 2013
TL;DR: X-PERT is a new automated, precise, and comprehensive approach for XBI detection that combines several new and existing differencing techniques and is based on the findings from an extensive study of XBIs in real-world web applications.
Abstract: Due to the increasing popularity of web applications, and the number of browsers and platforms on which such applications can be executed, cross-browser incompatibilities (XBIs) are becoming a serious concern for organizations that develop web-based software. Most of the techniques for XBI detection developed to date are either manual, and thus costly and error-prone, or partial and imprecise, and thus prone to generating both false positives and false negatives. To address these limitations of existing techniques, we developed X-PERT, a new automated, precise, and comprehensive approach for XBI detection. X-PERT combines several new and existing differencing techniques and is based on our findings from an extensive study of XBIs in real-world web applications. The key strength of our approach is that it handles each aspect of a web application using the differencing technique that is best suited to accurately detect XBIs related to that aspect. Our empirical evaluation shows that X-PERT is effective in detecting real-world XBIs, improves on the state of the art, and can provide useful support to developers for the diagnosis and (eventually) elimination of XBIs.
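
As a toy illustration of one differencing technique in this space (not X-PERT's actual implementation), the sketch below flags elements whose rendered geometry diverges across two browsers beyond a pixel tolerance; the captured layouts are hypothetical inputs, e.g. harvested with a browser automation tool.

```python
# Toy cross-browser layout diff, an illustration rather than X-PERT's
# algorithm: report elements whose rendered geometry differs.
# Each entry: XPath -> (x, y, width, height) captured in one browser.
chrome = {"/html/body/div[1]": (10, 10, 300, 80),
          "/html/body/div[2]": (10, 100, 300, 40)}
firefox = {"/html/body/div[1]": (10, 10, 300, 80),
           "/html/body/div[2]": (10, 140, 300, 40)}

def layout_xbis(a: dict, b: dict, tol: int = 5) -> list:
    """Return XPaths whose geometry differs by more than tol pixels."""
    return sorted(
        xpath for xpath in a.keys() & b.keys()
        if any(abs(p - q) > tol for p, q in zip(a[xpath], b[xpath]))
    )

print(layout_xbis(chrome, firefox))  # ['/html/body/div[2]']
```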

Journal ArticleDOI
TL;DR: A new similarity measure for web service similarity computation is presented and a novel collaborative filtering approach is proposed, called normal recovery collaborative filtering, for personalized web service recommendation that achieves better accuracy than other competing approaches.
Abstract: With the increasing amount of web services on the Internet, personalized web service selection and recommendation are becoming more and more important. In this paper, we present a new similarity measure for web service similarity computation and propose a novel collaborative filtering approach, called normal recovery collaborative filtering, for personalized web service recommendation. To evaluate the web service recommendation performance of our approach, we conduct large-scale real-world experiments, involving 5,825 real-world web services in 73 countries and 339 service users in 30 countries. To the best of our knowledge, our experiment is the largest scale experiment in the field of service computing, improving over the previous record by a factor of 100. The experimental results show that our approach achieves better accuracy than other competing approaches.
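
One plausible reading of the range-normalization idea, sketched below as an illustration rather than the paper's exact formula: each user's QoS vector is min-max scaled so that users who observe systematically different value ranges become comparable before similarity is computed.

```python
# Sketch: min-max normalize ("recover") each user's QoS vector before
# computing similarity. An illustrative reading, not the paper's formula.
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    lo, hi = v.min(), v.max()
    return np.zeros_like(v) if hi == lo else (v - lo) / (hi - lo)

def similarity(u: np.ndarray, v: np.ndarray) -> float:
    nu, nv = normalize(u), normalize(v)
    denom = np.linalg.norm(nu) * np.linalg.norm(nv)
    return float(nu @ nv / denom) if denom else 0.0

u = np.array([0.2, 0.8, 0.5])   # fast network: low response times
v = np.array([2.0, 8.0, 5.0])   # slow network: same relative pattern
print(round(similarity(u, v), 3))  # 1.0 once ranges are recovered
```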

Journal ArticleDOI
TL;DR: BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb) and wrapping additional Web Services based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the usage of object-oriented programming.
Abstract: Motivation: Web interfaces provide access to numerous biological databases. Many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. Results: BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the usage of object-oriented programming. Availability and implementation: BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
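
By way of example, typical BioServices usage looks like the sketch below; method names and query syntax have shifted across releases, so treat the exact signatures as approximate.

```python
# Approximate BioServices usage (signatures vary across releases).
# Requires: pip install bioservices
from bioservices import KEGG, UniProt

kegg = KEGG()
entry = kegg.get("hsa:7535")  # raw KEGG entry for the human ZAP70 gene
print(entry[:200])

uniprot = UniProt()
hits = uniprot.search("zap70 AND organism_id:9606",
                      frmt="tsv", columns="accession,length")
print(hits)
```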

Journal ArticleDOI
TL;DR: The applicability of web crawlers in the field of web search and a review of web crawlers applied to different problem domains in web search are discussed.
Abstract: Information Retrieval deals with searching and retrieving information within documents, and it also searches online databases and the internet. A web crawler is defined as a program or software which traverses the Web and downloads web documents in a methodical, automated manner. Based on the type of knowledge, web crawlers are usually divided into three types of crawling techniques: General Purpose Crawling, Focused Crawling and Distributed Crawling. In this paper, the applicability of web crawlers in the field of web search and a review of web crawlers applied to different problem domains in web search are discussed.
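
For orientation, a minimal general-purpose crawler can be written as the didactic sketch below; a production crawler would additionally need robots.txt handling, politeness delays, and robust HTML parsing.

```python
# Didactic general-purpose crawler: breadth-first traversal that
# downloads pages and follows same-host links up to a fixed budget.
import re
import urllib.parse
import urllib.request
from collections import deque

LINK_RE = re.compile(r'href="(.*?)"', re.IGNORECASE)

def crawl(seed: str, max_pages: int = 10) -> list:
    host = urllib.parse.urlparse(seed).netloc
    seen, frontier, fetched = {seed}, deque([seed]), []
    while frontier and len(fetched) < max_pages:
        url = frontier.popleft()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages
        fetched.append(url)
        for href in LINK_RE.findall(html):
            nxt = urllib.parse.urljoin(url, href)
            if urllib.parse.urlparse(nxt).netloc == host and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return fetched

print(crawl("https://example.org/"))
```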

Proceedings ArticleDOI
28 Jun 2013
TL;DR: This paper proposes a novel approach that dynamically recommends Web services that fit users' interests that combines collaborative filtering and content-based recommendation using a three-way aspect model.
Abstract: With increasing adoption and presence of Web services, designing novel approaches for efficient Web services recommendation has become steadily more important. Existing Web services discovery and recommendation approaches focus on either perishing UDDI registries, or keyword-dominant Web service search engines, which possess many limitations such as insufficient recommendation performance and heavy dependence on user input (e.g., preparing complicated queries). In this paper, we propose a novel approach that dynamically recommends Web services that fit users' interests. Our approach is a hybrid one in the sense that it combines collaborative filtering and content-based recommendation. In particular, our approach considers simultaneously both rating data and content data of Web services using a three-way aspect model. Unobservable user preferences are represented by introducing a set of latent variables, which are statistically estimated. To verify the proposed approach, we conduct experiments using 3,693 real-world Web services. The experimental results show that our approach outperforms the two conventional methods on recommendation performance.

Journal ArticleDOI
TL;DR: The results of this systematic mapping study can help researchers to obtain an overview of existing web application testing approaches and identify areas in the field that require more attention from the research community.
Abstract: Context The Web has had a significant impact on all aspects of our society. As our society relies more and more on the Web, the dependability of web applications has become increasingly important. To make these applications more dependable, for the past decade researchers have proposed various techniques for testing web-based software applications. Our literature search for related studies retrieved 147 papers in the area of web application testing, which appeared between 2000 and 2011. Objective As this research area matures and the number of related papers increases, it is important to systematically identify, analyze, and classify the publications and provide an overview of the trends in this specialized field. Method We review and structure the body of knowledge related to web application testing through a systematic mapping (SM) study. As part of this study, we pose two sets of research questions, define selection and exclusion criteria, and systematically develop and refine a classification schema. In addition, we conduct a bibliometrics analysis of the papers included in our study. Results Our study includes a set of 79 papers (from the 147 retrieved papers) published in the area of web application testing between 2000 and 2011. We present the results of our systematic mapping study. Our mapping data is available through a publicly accessible repository. We derive the observed trends, for instance, in terms of types of papers, sources of information to derive test cases, and types of evaluations used in papers. We also report the demographics and bibliometrics trends in this domain, including top-cited papers, active countries and researchers, and top venues in this research area. Conclusion We discuss the emerging trends in web application testing, and discuss the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers to obtain an overview of existing web application testing approaches and identify areas in the field that require more attention from the research community.

Journal ArticleDOI
TL;DR: Two personalized reliability prediction approaches for Web services are proposed, namely a neighborhood-based approach, which employs past failure data of similar neighbors to predict Web service reliability, and a model-based approach.
Abstract: Service Oriented Architecture (SOA) is a business-centric IT architectural approach for building distributed systems. Reliability of service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet connections. Designing efficient and effective reliability prediction approaches for Web services has become an important research issue. In this article, we propose two personalized reliability prediction approaches for Web services, that is, a neighborhood-based approach and a model-based approach. The neighborhood-based approach employs past failure data of similar neighbors (either service users or Web services) to predict Web service reliability. On the other hand, the model-based approach fits a factor model based on the available Web service failure data and uses this factor model to make further reliability predictions. Extensive experiments are conducted with our real-world Web service datasets, which include about 23 million invocation results on more than 3,000 real-world Web services. The experimental results show that our proposed reliability prediction approaches obtain better reliability prediction accuracy than other competing approaches.

Book
15 Jun 2013
TL;DR: An approach developed within the EU-funded project "OSIRIS" is presented that offers mechanisms to search for sensors, exploit basic semantic relationships, harvest sensor metadata and integrate sensor discovery into already existing catalogues.
Abstract: This paper addresses the discovery of sensors within the OGC Sensor Web Enablement framework. Whereas services like the OGC Web Map Service or Web Coverage Service are already well supported through catalogue services, the field of sensor networks and the corresponding discovery mechanisms is still a challenge. The focus within this article is on the use of existing OGC Sensor Web components for realizing a discovery solution. After discussing the requirements for a Sensor Web discovery mechanism, an approach is presented that was developed within the EU-funded project "OSIRIS". This solution offers mechanisms to search for sensors, exploit basic semantic relationships, harvest sensor metadata and integrate sensor discovery into already existing catalogues.

Journal ArticleDOI
TL;DR: The approach involves three web services that cooperate to achieve production goals using the domain web services and maintains a semantic model of the current state of the system, which is automatically updated based on event notifications sent by the domain services.
Abstract: This paper presents an approach to using semantic web services in managing production processes. In particular, the devices in the production systems considered expose web service interfaces through which they can then be controlled, while semantic web service descriptions formulated in the Web Ontology Language for Services (OWL-S) make it possible to determine the conditions and effects of invoking the web services. The approach involves three web services that cooperate to achieve production goals using the domain web services. In particular, one of the three services maintains a semantic model of the current state of the system, while another uses the model to compose the domain web services so that they jointly achieve the desired goals. The semantic model of the system is automatically updated based on event notifications sent by the domain services.

Journal ArticleDOI
TL;DR: It is shown that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data.
Abstract: Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas of mathematics, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. We argue that such scenarios will benefit from Semantic Web technology. Conversely, mathematics is still underrepresented on the Web of [Linked] Data. There are mathematics-related Linked Data, for example statistical government data or scientific publication databases, but their mathematical semantics has not yet been modeled. We argue that the services for the Web of Data will benefit from a deeper representation of mathematical knowledge. Mathematical knowledge comprises structures given in a logical language (formulae, statements such as axioms, and theories), a mixture of rigorous natural language and symbolic notation in documents, application-specific metadata, and discussions about conceptualizations, formalizations, proofs, and counter-examples. Our review of vocabularies for representing these structures covers ontologies for mathematical problems, proofs, interlinked scientific publications, scientific discourse, as well as mathematical metadata vocabularies and domain knowledge from pure and applied mathematics. Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections.

Journal ArticleDOI
TL;DR: A web page design support database is developed based on a user-centered experimental procedure and a neural network model that can be used to examine how a specific combination of design elements, particularly the ratio of graphics to text, will affect the users' feelings about a web page.
Abstract: This paper addresses new and significant research issues in web page design in relation to the use of graphics. The original findings include that (a) graphics play an important role in enhancing the appearance, and thus users' feelings (aesthetics), about web pages and that (b) the effective use of graphics is crucial in designing web pages. In addition, we have developed a web page design support database based on a user-centered experimental procedure and a neural network model. This design support database can be used to examine how a specific combination of design elements, particularly the ratio of graphics to text, will affect the users' feelings about a web page. As a general rule, a ratio of graphics to text between 3:1 and 1:1 will give users the strongest sense that a page is easy to use and clear to follow. A web page with a ratio of 1:1 will have the most realistic look, while a ratio of over 3:1 will have the fanciest appearance. The result provides useful insights in using graphics on web pages that help web designers best meet users' specific expectations and aesthetic consistency.

Proceedings ArticleDOI
13 May 2013
TL;DR: Data-Fu, a lightweight declarative rule language with state transition systems as its formal grounding, is introduced; it enables the development of data-driven applications that facilitate the RESTful manipulation of read/write Linked Data resources.
Abstract: An increasing amount of applications build their functionality on the utilisation and manipulation of web resources. Consequently, REST gains popularity with a resource-centric interaction architecture that draws its flexibility from links between resources. Linked Data offers a uniform data model for REST with self-descriptive resources that can be leveraged to avoid manual ad-hoc development of web-based applications. For declaratively specifying interactions between web resources, we introduce Data-Fu, a lightweight declarative rule language with state transition systems as its formal grounding. Data-Fu enables the development of data-driven applications that facilitate the RESTful manipulation of read/write Linked Data resources. Furthermore, we describe an interpreter for Data-Fu as a general-purpose engine that performs described interactions with web resources orders of magnitude faster than a comparable Linked Data processor.

Proceedings ArticleDOI
04 Nov 2013
TL;DR: This paper presents a novel approach to securing legacy web applications by automatically and statically rewriting an application so that the code and data are clearly separated in its web pages, which protects the application and its users from a large range of server-side cross-site scripting attacks.
Abstract: Web applications are constantly under attack. They are popular, typically accessible from anywhere on the Internet, and they can be abused as malware delivery systems. Cross-site scripting flaws are one of the most common types of vulnerabilities that are leveraged to compromise a web application and its users. A large set of cross-site scripting vulnerabilities originates from the browser's confusion between data and code. That is, untrusted data input to the web application is sent to the clients' browser, where it is then interpreted as code and executed. While new applications can be designed with code and data separated from the start, legacy web applications do not have that luxury. This paper presents a novel approach to securing legacy web applications by automatically and statically rewriting an application so that the code and data are clearly separated in its web pages. This transformation protects the application and its users from a large range of server-side cross-site scripting attacks. Moreover, the code and data separation can be efficiently enforced at run time via the Content Security Policy enforcement mechanism available in modern browsers. We implemented our approach in a tool, called deDacota, that operates on binary ASP.NET applications. We demonstrate on six real-world applications that our tool is able to automatically separate code and data, while keeping the application's semantics unchanged.
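
The enforcement half of that idea is easy to demonstrate: once inline scripts have been moved into external files, a Content-Security-Policy header tells the browser to refuse inline code. The Flask snippet below is our illustration of the CSP mechanism only, not part of deDacota (which rewrites binary ASP.NET applications).

```python
# Illustration of CSP enforcement (not deDacota itself): with this
# policy the browser loads scripts only from our own origin and
# refuses inline <script> blocks, so injected data cannot run as code.
# Requires: pip install flask
from flask import Flask, Response

app = Flask(__name__)

@app.after_request
def set_csp(resp: Response) -> Response:
    resp.headers["Content-Security-Policy"] = "script-src 'self'"
    return resp

@app.route("/")
def index() -> str:
    # All behavior lives in an external file; the page itself is data-only.
    return '<html><body><script src="/static/app.js"></script></body></html>'

if __name__ == "__main__":
    app.run()
```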

Journal ArticleDOI
TL;DR: This study investigates the development and trend of Semantic Web applications in the built environment and finds progress is being made from often too-common ontological concepts to more innovative concepts such as Linked Data.
Abstract: The built environment sector impacts significantly on communities. At the same time, it is the sector with the highest cost and environmental saving potentials provided effective strategies are implemented. The emerging Semantic Web promises new opportunities for efficient management of information and knowledge about various domains. While other domains, particularly bioinformatics have fully embraced the Semantic Web, knowledge about how the same has been applied to the built environment is sketchy. This study investigates the development and trend of Semantic Web applications in the built environment. Understanding the different applications of the Semantic Web is essential for evaluation, improvement and opening of new research. A review of over 120 refereed articles on built environment Semantic Web applications has been conducted. A classification of the different Semantic Web applications in relation to their year of application is presented to highlight the trend. Two major findings have emerged. Firstly, despite limited research about easy-to-use applications, progress is being made from often too-common ontological concepts to more innovative concepts such as Linked Data. Secondly, a shift from traditional construction applications to Semantic Web sustainable construction applications is gradually emerging. To conclude, research challenges, potential future development and research directions have been discussed.

Proceedings ArticleDOI
02 Dec 2013
TL;DR: A formal Linked Data Visualization Model (LDVM) is devised, which allows data to be dynamically connected with visualizations and enables both users and data analysts to get an overview of, visualize and explore the Data Web and perform detailed analyses on Linked Data.
Abstract: Recently, the amount of semantic data available on the Web has increased dramatically. The potential of this vast amount of data is enormous, but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows data to be dynamically connected with visualizations. We report on our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview of, visualize and explore the Data Web and perform detailed analyses on Linked Data.

01 Jan 2013
TL;DR: This paper provides an overview and comparison of the Web, i.e. Web 1.0 through Web 5.0, described as five generations of the Web; the characteristics of each generation are introduced and compared.
Abstract: This paper provides an overview and comparison of the Web, i.e. Web 1.0, Web 2.0, Web 3.0, Web 4.0 and Web 5.0, described as five generations of the Web. The characteristics of each generation are introduced and compared. There has been no specific research on Web generations since the Web's advent; rather, this is an analytical distinction that outlines the qualities of the Web.

Patent
25 Feb 2013
TL;DR: In this article, the authors present a server-based application configured to produce web pages for a web site in accordance with input received from a user, and an interface to the server based application receiving selections of features which are available to be added to the web site.
Abstract: Embodiments of the present disclosure provide systems and methods for facilitating network communications. Briefly described, one embodiment of the system, among others, includes a server-based application configured to produce web pages for a web site in accordance with input received from a user; and an interface to the server-based application receiving selections of features which are available to be added to the web site in response to user prompts and to set access rights on which features are to be available to different roles of users. Other systems and methods are also provided.

Journal ArticleDOI
TL;DR: A semantic Web service discovery framework for finding semantic Web services by making use of natural language processing techniques, which shows that the three proposed matching algorithms are able to effectively perform matching and approximate matching.
Abstract: This paper proposes a semantic Web service discovery framework for finding semantic Web services by making use of natural language processing techniques. The framework allows searching through a set of semantic Web services in order to find a match with a user query consisting of keywords. By specifying the search goal using keywords, end-users do not need to have knowledge about semantic languages, which makes it easy to express the desired semantic Web services. For matching keywords with semantic Web service descriptions given in WSMO, techniques like part-of-speech tagging, lemmatization, and word sense disambiguation are used. After determining the senses of relevant words gathered from Web service descriptions and the user query, a matching process takes place. The performance evaluation shows that the three proposed matching algorithms are able to effectively perform matching and approximate matching.
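
The pipeline the paper names maps naturally onto NLTK, sketched below; the choice of NLTK and the toy matching score are our assumptions, since the paper does not prescribe a toolkit. Query keywords are tagged, lemmatized, and disambiguated, then compared with terms from a service description by shared WordNet senses.

```python
# Sketch of the keyword pipeline with NLTK (tool choice is ours):
# POS tagging, lemmatization, Lesk word sense disambiguation, then a
# naive sense-overlap match. Resource names vary by NLTK version.
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.wsd import lesk

for pkg in ("punkt", "averaged_perceptron_tagger", "wordnet"):
    nltk.download(pkg, quiet=True)

lemmatizer = WordNetLemmatizer()

def senses(text: str) -> set:
    tagged = nltk.pos_tag(nltk.word_tokenize(text.lower()))
    # Keep content words (nouns/verbs) and lemmatize them.
    lemmas = [lemmatizer.lemmatize(tok) for tok, tag in tagged
              if tag.startswith(("NN", "VB"))]
    # lesk() picks the WordNet sense whose gloss best fits the context.
    return {s for lemma in lemmas if (s := lesk(lemmas, lemma))}

query = "book a cheap flight"
service = "a web service for reserving inexpensive airline flights"
print("shared senses:", len(senses(query) & senses(service)))
```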

Book
Wei Tan, MengChu Zhou
05 Mar 2013
TL;DR: This informative reference features application scenarios that include healthcare and biomedical applications, such as personalized healthcare processing, DNA sequence data processing, and electrocardiogram wave analysis, and presents updated research and development results on the composition technologies of web services.
Abstract: Focuses on how to use web service computing and service-based workflow technologies to develop timely, effective workflows for both business and scientific fields.

Utilizing web computing and Service-Oriented Architecture (SOA), Business and Scientific Workflows: A Web Service-Oriented Approach focuses on how to design, analyze, and deploy web service-based workflows for both business and scientific applications in many areas of healthcare and biomedicine. It also discusses and presents the recent research and development results. This informative reference features application scenarios that include healthcare and biomedical applications, such as personalized healthcare processing, DNA sequence data processing, and electrocardiogram wave analysis, and presents:

- Updated research and development results on the composition technologies of web services for ever-sophisticated service requirements from various users and communities
- Fundamental methods such as Petri nets and social network analysis to advance the theory and applications of workflow design and web service composition
- Practical and real applications of the developed theory and methods for such platforms as personalized healthcare and Biomedical Informatics Grids
- The authors' efforts on advancing service composition methods for both business and scientific software systems, with theoretical and empirical contributions

With workflow-driven service composition and reuse being a hot topic in both academia and industry, this book is ideal for researchers, engineers, scientists, professionals, and students who work on service computing, software engineering, business and scientific workflow management, the internet, and management information systems (MIS).