
Showing papers on "Web standards" published in 2015


Journal ArticleDOI
TL;DR: This article establishes a consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks in terms of concepts, models, languages, productivity support techniques, and tools and reviews the state of the art in service composition from an unprecedented, holistic perspective.
Abstract: Web services are a consolidated reality of the modern Web, with a tremendous and growing impact on everyday computing tasks. They have turned the Web into the largest, most accepted, and most vivid distributed computing platform ever. Yet the use and integration of Web services into composite services or applications, which is a delicate and conceptually non-trivial task, has still not unleashed its full power. A consolidated analysis framework that advances the fundamental understanding of Web service composition building blocks in terms of concepts, models, languages, productivity support techniques, and tools is required. Such a framework is necessary to enable the effective exploration, understanding, assessment, comparison, and selection of service composition models, languages, techniques, platforms, and tools. This article establishes such a framework and reviews the state of the art in service composition from an unprecedented, holistic perspective.

277 citations


Journal ArticleDOI
01 May 2015
TL;DR: This survey introduces the necessary background and fundamentals to understand current efforts in IoT, WoT and SWoT by reviewing key enabling technologies, addresses associated challenges, and highlights potential research to be pursued in the future.
Abstract: Currently, a large number of smart objects and different types of devices are interconnected and communicate via the Internet Protocol, creating a worldwide ubiquitous and pervasive network called the Internet of Things (IoT). With an increase in the deployment of smart objects, IoT is expected to have a significant impact on human life in the near future. A major breakthrough in bridging the gap between the virtual and physical worlds came from the vision of the Web of Things (WoT), which employs open Web standards to achieve information sharing and object interoperability. The Social Web of Things (SWoT) further extends WoT to integrate smart objects with social networks, and is observed not only to bridge the physical and virtual worlds but also to facilitate continued interaction between physical devices and humans. This makes SWoT a most promising approach, and it has now become an active research area. This survey introduces the necessary background and fundamentals to understand current efforts in IoT, WoT and SWoT by reviewing key enabling technologies. These efforts are investigated in detail from several different perspectives, such as architecture design, middleware, platforms, systems implementation, and applications. Moreover, a large number of platforms and applications that have become popular during the past decade are analyzed and evaluated against various alternatives. Finally, we address associated challenges and highlight potential research to be pursued in the future.

243 citations


Journal ArticleDOI
TL;DR: The processing of the simple datasets used in the pilot proved to be relatively straightforward using a combination of R, RPy2, PyWPS and PostgreSQL, but the use of NoSQL databases and more versatile frameworks such as OGC standard based implementations may provide a wider and more flexible set of features that particularly facilitate working with larger volumes and more heterogeneous data sources.
Abstract: Recent evolutions in computing science and web technology provide the environmental community with continuously expanding resources for data collection and analysis that pose unprecedented challenges to the design of analysis methods, workflows, and interaction with data sets. In light of the recent UK Research Council funded Environmental Virtual Observatory pilot project, this paper gives an overview of currently available implementations related to web-based technologies for processing large and heterogeneous datasets and discusses their relevance within the context of environmental data processing, simulation and prediction. We found that the processing of the simple datasets used in the pilot proved to be relatively straightforward using a combination of R, RPy2, PyWPS and PostgreSQL. However, the use of NoSQL databases and more versatile frameworks such as OGC-standard-based implementations may provide a wider and more flexible set of features that particularly facilitate working with larger volumes and more heterogeneous data sources.
Highlights: We review web-service-related technologies to manage, transfer and process Big Data. We examine international standards and related implementations. Many existing algorithms can easily be exposed as services and cloud-enabled. The adoption of standards facilitates the implementation of workflows. The use of web technologies to tackle environmental issues is acknowledged worldwide.

203 citations


Book
11 Jun 2015
TL;DR: In this book, the evolution of web surveys, applications and related practices is discussed, including pre-fielding, mode elaboration, sampling, questionnaire preparation, and nonresponse strategy.
Abstract: Chapter 1: Survey research and web surveys (Definition and typology; Web survey process; Evolution of web surveys, applications and related practices)
Chapter 2: Pre-fielding (Mode elaboration; Sampling; Questionnaire preparation; Technical preparations; Nonresponse strategy; General management)
Chapter 3: Fielding (Recruiting; Measurement; Processing and monitoring)
Chapter 4: Post-fielding (Data preparation; Preliminary results; Data exporting and documentation)
Chapter 5: Selected topics in web survey implementation (Smartphones, tablets and other devices; Online panels; Web survey software)
Chapter 6: Broader context of web surveys (Broader methodological context; Web surveys within the project management framework; The web survey profession; Web survey bibliography)
Chapter 7: Future of web surveys (General technological developments; Web survey software; Methodology; Broader business and societal issues)
Chapter 8: Conclusions

191 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel approach that unifies collaborative filtering and content-based recommendation of web services using a probabilistic generative model, which outperforms the state-of-the-art methods on recommendation performance.
Abstract: The last decade has witnessed a tremendous growth of web services as a major technology for sharing data, computing resources, and programs on the web. With the increasing adoption and presence of web services, designing novel approaches for efficient and effective web service recommendation has become of paramount importance. Most existing web service discovery and recommendation approaches focus on either perishing UDDI registries or keyword-dominant web service search engines, which possess many limitations such as poor recommendation performance and heavy dependence on correct and complex queries from users. It would be desirable for a system to recommend web services that align with users' interests without requiring the users to explicitly specify queries. Recent research efforts on web service recommendation center on two prominent approaches: collaborative filtering and content-based recommendation. Unfortunately, both approaches have some drawbacks, which restrict their applicability in web service recommendation. In this paper, we propose a novel approach that unifies collaborative filtering and content-based recommendation. In particular, our approach considers simultaneously both rating data (e.g., QoS) and semantic content data (e.g., functionalities) of web services using a probabilistic generative model. In our model, unobservable user preferences are represented by introducing a set of latent variables, which can be statistically estimated. To verify the proposed approach, we conduct experiments using 3,693 real-world web services. The experimental results show that our approach outperforms the state-of-the-art methods on recommendation performance.

171 citations
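The paper's actual probabilistic generative model is not reproduced here, but the intuition of unifying rating data (collaborative) with content data can be sketched with a toy hybrid scorer. All data, the 0-5 rating scale, and the blending weight `alpha` are invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical data: QoS-style ratings users gave to services, and
# bag-of-words content vectors for each service description.
ratings = {            # user -> {service: rating on a 0-5 scale}
    "alice": {"s1": 5, "s2": 3},
    "bob":   {"s1": 4, "s3": 5},
}
content = {            # service -> term-frequency vector
    "s1": [1, 0, 2],
    "s2": [1, 1, 0],
    "s3": [0, 1, 2],
}

def hybrid_score(user, service, alpha=0.5):
    """Blend a collaborative score (mean rating by other users,
    normalized to [0, 1]) with a content score (best similarity to
    any service the user already rated)."""
    others = [r[service] for u, r in ratings.items()
              if u != user and service in r]
    collab = (sum(others) / len(others) / 5.0) if others else 0.0
    liked = ratings.get(user, {})
    sims = [cosine(content[service], content[s]) for s in liked]
    cont = max(sims) if sims else 0.0
    return alpha * collab + (1 - alpha) * cont
```

A latent-variable model learns this blending from data instead of fixing `alpha` by hand, which is precisely the gap the paper's approach addresses.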


Book
15 Jun 2015
TL;DR: In this book, the authors present a practical guide to using Python scripts and web APIs to gather and process data from thousands, or even millions, of web pages at once.
Abstract: Learn web scraping and crawling techniques to access unlimited data from any web source in any format. With this practical guide, you'll learn how to use Python scripts and web APIs to gather and process data from thousands, or even millions, of web pages at once. Ideal for programmers, security professionals, and web administrators familiar with Python, this book not only teaches basic web scraping mechanics, but also delves into more advanced topics, such as analyzing raw data or using scrapers for frontend website testing. Code samples are available to help you understand the concepts in practice.

116 citations
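As a flavor of the basic mechanics such a book covers, link extraction can be sketched with Python's standard library alone. The HTML snippet stands in for a fetched page; real crawlers would fetch pages with `urllib.request` and typically use richer parsers such as BeautifulSoup:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags as the page is parsed."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A static snippet keeps the sketch self-contained; in a real crawler
# this string would come from urllib.request.urlopen(url).read().
page = '<html><body><a href="/a">A</a><a href="/b">B</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
```

A crawler repeats this loop: fetch a page, extract its links, queue the unvisited ones, and so on, usually with politeness delays and a robots.txt check.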


Proceedings ArticleDOI
14 Dec 2015
TL;DR: This work synthesizes and highlights the most relevant work on ontology methodologies, engineering, best practices and tools that could be applied to the Internet of Things (IoT).
Abstract: In this paper, we discuss semantic web methodologies, best practices and recommendations beyond the IERC Cluster Semantic Interoperability Best Practices and Recommendations (IERC AC4). The semantic web community has designed best practices and methodologies that are largely unknown to the IoT community. We synthesize and highlight the most relevant work regarding ontology methodologies, engineering, best practices and tools that could be applied to the Internet of Things (IoT). To the best of our knowledge, this is the first work aiming at bridging such methodologies to the IoT community and going beyond the IERC AC4 cluster. This research is being applied to three use cases: (1) the M3 framework, assisting IoT developers in designing interoperable ontology-based IoT applications; (2) the FIESTA-IoT EU project, encouraging semantic interoperability within IoT; and (3) a collaborative publication of legacy ontologies.

99 citations


Journal ArticleDOI
TL;DR: The practicalities and effectiveness of web mining as a research method for innovation studies are examined, using web mining to explore the R&D activities of 296 UK-based green goods small and mid-size enterprises.
Abstract: As enterprises expand and post increasing information about their business activities on their websites, website data promises to be a valuable source for investigating innovation. This article examines the practicalities and effectiveness of web mining as a research method for innovation studies. We use web mining to explore the R&D activities of 296 UK-based green goods small and mid-size enterprises. We find that website data offers additional insights when compared with other traditional unobtrusive research methods, such as patent and publication analysis. We examine the strengths and limitations of enterprise innovation web mining in terms of a wide range of data quality dimensions, including accuracy, completeness, currency, quantity, flexibility and accessibility. We observe that far more companies in our sample report undertaking R&D activities on their websites than would be suggested by looking only at conventional data sources. While traditional methods offer information about the early phases of R&D and invention through publications and patents, web mining offers insights that are more downstream in the innovation process. Handling website data is not as easy as working with alternative data sources, and care needs to be taken in executing search strategies. Website information is also self-reported, and companies may vary in their motivations for posting (or not posting) information about their activities on their websites. Nonetheless, we find that web mining is a significant and useful complement to current methods, as well as offering novel insights not easily obtained from other unobtrusive sources.

88 citations
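The simplest form of the web mining the article relies on is searching fetched site text for R&D-related terms. The keyword list below is invented; the study's actual search strategy is more elaborate, which is exactly the care in "executing search strategies" the authors warn about:

```python
# Hypothetical keyword list for flagging R&D mentions; a real study
# would refine this iteratively and validate against known cases.
RND_KEYWORDS = ["r&d", "research and development", "prototype", "patent"]

def mentions_rnd(page_text):
    """Return True if any R&D-related keyword occurs in the text."""
    text = page_text.lower()
    return any(kw in text for kw in RND_KEYWORDS)

# Stand-in for text scraped from a company website.
site = "Our engineers run an in-house Research and Development lab."
flag = mentions_rnd(site)
```

Because website text is self-reported, such flags are indicators to be validated, not ground truth.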


Journal ArticleDOI
TL;DR: In an effort to understand the web of FOSS features and capabilities, many of the state-of-the-art FOSS software projects are reviewed in the context of those used to develop water resources web apps published in the peer-reviewed literature in the last decade.
Abstract: Water resources web applications or "web apps" are growing in popularity as a means to overcome many of the challenges associated with hydrologic simulations in decision-making. Water resources web apps fall outside of the capabilities of standard web development software because of their spatial data components. These spatial data needs can be addressed using a combination of existing free and open source software (FOSS) for geographic information systems (FOSS4G) and FOSS for web development. However, the abundance of available FOSS projects can be overwhelming to new developers. In an effort to understand the web of FOSS features and capabilities, we reviewed many of the state-of-the-art FOSS projects in the context of those that have been used to develop water resources web apps published in the peer-reviewed literature in the last decade (2004-2014).
Highlights: Free and open source software can be used to address the needs of water resources web applications. The large number of open source projects can be overwhelming to novice developers. We present a review of free and open source GIS and web development software. The software reviewed includes projects used to develop 45 water resources and earth science web applications. The review highlights 11 FOSS4G projects and 9 FOSS projects for web development.

79 citations


Proceedings ArticleDOI
14 Jan 2015
TL;DR: Ur/Web is presented, a domain-specific, statically typed functional programming language with a much simpler model for programming modern Web applications, where programmers can reason about distributed, multithreaded applications via a mix of transactions and cooperative preemption.
Abstract: The World Wide Web has evolved gradually from a document delivery platform to an architecture for distributed programming. This largely unplanned evolution is apparent in the set of interconnected languages and protocols that any Web application must manage. This paper presents Ur/Web, a domain-specific, statically typed functional programming language with a much simpler model for programming modern Web applications. Ur/Web's model is unified, where programs in a single programming language are compiled to other "Web standards" languages as needed; supports novel kinds of encapsulation of Web-specific state; and exposes simple concurrency, where programmers can reason about distributed, multithreaded applications via a mix of transactions and cooperative preemption. We give a tutorial introduction to the main features of Ur/Web and discuss the language implementation and the production Web applications that use it.

74 citations


Journal ArticleDOI
01 Sep 2015
TL;DR: This paper introduces a project to develop a reliable, cost‐effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents.
Abstract: This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.

Proceedings ArticleDOI
17 Aug 2015
TL;DR: Encore is presented, a system that harnesses cross-origin requests to measure Web filtering from a diverse set of vantage points without requiring users to install custom software, enabling longitudinal measurements from many vantage points.
Abstract: Despite the pervasiveness of Internet censorship, we have scant data on its extent, mechanisms, and evolution. Measuring censorship is challenging: it requires continual measurement of reachability to many target sites from diverse vantage points. Amassing suitable vantage points for longitudinal measurement is difficult; existing systems have achieved only small, short-lived deployments. We observe, however, that most Internet users access content via Web browsers, and the very nature of Web site design allows browsers to make requests to domains with different origins than the main Web page. We present Encore, a system that harnesses cross-origin requests to measure Web filtering from a diverse set of vantage points without requiring users to install custom software, enabling longitudinal measurements from many vantage points. We explain how Encore induces Web clients to perform cross-origin requests that measure Web filtering, design a distributed platform for scheduling and collecting these measurements, show the feasibility of a global-scale deployment with a pilot study and an analysis of potentially censored Web content, identify several cases of filtering in six months of measurements, and discuss ethical concerns that would arise with widespread deployment.

Book ChapterDOI
31 Jul 2015
TL;DR: An overview of recommender systems is presented, and the authors sketch how Linked Open Data, connected to form the so-called LOD cloud, can be used to build a new generation of semantics-aware recommendation engines.
Abstract: The World Wide Web is moving from a Web of hyper-linked documents to a Web of linked data. Thanks to the Semantic Web technological stack and to the more recent Linked Open Data (LOD) initiative, a vast amount of RDF data have been published in freely accessible datasets connected with each other to form the so called LOD cloud. As of today, we have tons of RDF data available in the Web of Data, but only a few applications really exploit their potential power. The availability of such data is for sure an opportunity to feed personalized information access tools such as recommender systems. We present an overview on recommender systems and we sketch how to use Linked Open Data to build a new generation of semantics-aware recommendation engines.
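The RDF data model underlying the LOD cloud can be illustrated with a minimal in-memory triple store. The DBpedia-style identifiers are illustrative; real applications would use an RDF library and query the data with SPARQL:

```python
# Facts as (subject, predicate, object) triples, the RDF data model
# that Linked Open Data datasets are published in.
triples = {
    ("dbpedia:Berlin", "rdf:type", "dbpedia-owl:City"),
    ("dbpedia:Berlin", "dbpedia-owl:country", "dbpedia:Germany"),
    ("dbpedia:Germany", "rdf:type", "dbpedia-owl:Country"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    like a single SPARQL basic graph pattern."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# All facts about Berlin -- the kind of side information a
# semantics-aware recommender can exploit about an item.
berlin_facts = match(s="dbpedia:Berlin")
```

A LOD-based recommender computes item-item similarity over exactly such graph neighborhoods instead of over ratings alone.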

Proceedings ArticleDOI
18 May 2015
TL;DR: An analysis of 50 definitions of web accessibility extracted six core concepts that are used in many definitions, which are incorporated into a unified definition of web accessibility: "all people, particularly disabled and older people, can use websites in a range of contexts of use, including mainstream and assistive technologies".
Abstract: To better understand what researchers and practitioners consider to be the key components of the definition of web accessibility, and to propose a unified definition, we conducted an analysis of 50 definitions of web accessibility. The definitions were drawn from a range of books, papers, standards, guidelines and online sources, aimed at both practitioners and researchers, from across the time period of web accessibility work (1996 to 2014), and from authors in 21 different countries. The analysis extracted six core concepts that are used in many definitions, which are incorporated into a unified definition of web accessibility: "all people, particularly disabled and older people, can use websites in a range of contexts of use, including mainstream and assistive technologies; to achieve this, websites need to be designed and developed to support usability across these contexts".

Journal ArticleDOI
TL;DR: The process successfully achieved the practical goal of identifying a candidate set of web mapping technologies for teaching web mapping, and revealed broader insights into web map design and education generally, as well as ways to cope with evolving web mapping technologies.
Abstract: The current pace of technological innovation in web mapping offers new opportunities and creates new challenges for web cartographers. The continual development of new technological solutions produces a fundamental tension: the more flexible and expansive web mapping options become, the more difficult it is to maintain fluency in the teaching and application of these technologies. We addressed this tension by completing a three-stage, empirical process for understanding how best to learn and implement contemporary web mapping technologies. To narrow our investigation, we focused upon education at the university level, rather than a professional production environment, and upon open-source client-side web mapping technologies, rather than complementary server-side or cloud-based technologies. The process comprised three studies: (1) a competitive analysis study of contemporary web mapping technologies, (2) a needs-assessment survey of web map designers/developers regarding past experiences with these technologies, and (3) a diary study charting the implementation of a subset of potentially viable technologies, as identified through the first two studies. The process successfully achieved the practical goal of identifying a candidate set of web mapping technologies for teaching web mapping, and also revealed broader insights into web map design and education generally as well as ways to cope with evolving web mapping technologies.

01 Jan 2015
TL;DR: In this paper, the authors summarize ongoing work promoting the concept of an avatar as a new virtual abstraction to extend physical objects on the Web by leveraging Web-based languages and protocols.
Abstract: The Web of Things (WoT) extends the Internet of Things considering that each physical object can be accessed and controlled using Web-based languages and protocols. In this paper, we summarize ongoing work promoting the concept of avatar as a new virtual abstraction to extend physical objects on the Web. An avatar is an extensible and distributed runtime environment endowed with an autonomous behaviour. Avatars rely on Web languages, protocols and reasoning about semantic annotations to dynamically drive connected objects, exploit their capabilities and expose their functionalities as Web services. Avatars are also able to collaborate together in order to achieve complex tasks.

Journal ArticleDOI
08 Apr 2015-Corpora
TL;DR: This study investigates the distribution of registers on the web through a bottom-up user-based investigation of a large, representative corpus of web documents, based on a much larger corpus than those used in previous research, and obtained through random sampling from across the full range of documents that are publically available on the searchable web.
Abstract: One major challenge for Web-As-Corpus research is that a typical Web search provides little information about the register of the documents that are searched. Previous research has attempted to address this problem (e.g., through the Automatic Genre Identification initiative), but with only limited success. As a result, we currently know surprisingly little about the distribution of registers on the web. In this study, we tackle this problem through a bottom-up user-based investigation of a large, representative corpus of web documents. We base our investigation on a much larger corpus than those used in previous research (48,571 web documents), obtained through random sampling from across the full range of documents that are publicly available on the searchable web. Instead of relying on individual expert coders, we recruit typical end-users of the Web for register coding, with each document in the corpus coded by four different raters. End-users identify basic situational characteristics of each w...

Journal ArticleDOI
TL;DR: In this paper, the authors summarize ongoing work promoting the concept of an avatar as a new virtual abstraction to extend physical objects on the Web, where an avatar is an extensible and distributed runtime environment endowed with an autonomous behavior.
Abstract: The Web of Things extends the Internet of Things by leveraging Web-based languages and protocols to access and control each physical object. In this article, the authors summarize ongoing work promoting the concept of an avatar as a new virtual abstraction to extend physical objects on the Web. An avatar is an extensible and distributed runtime environment endowed with an autonomous behavior. Avatars rely on Web languages, protocols, and reason about semantic annotations to dynamically drive connected objects, exploit their capabilities, and expose user-understandable functionalities as Web services. Avatars are also able to collaborate together to achieve complex tasks.
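The avatar idea, semantic annotations driving object capabilities exposed as services, can be caricatured in a few lines of Python. The annotation strings, identifiers and the lamp itself are invented; real avatars speak Web protocols and reason over RDF annotations:

```python
class Avatar:
    """Toy stand-in for an avatar: wraps a physical object's
    capabilities and exposes them keyed by a semantic annotation.
    (Illustrative only, not the authors' runtime environment.)"""
    def __init__(self, thing_id):
        self.thing_id = thing_id
        self._services = {}   # annotation -> callable capability

    def expose(self, annotation, func):
        """Register a capability under a semantic annotation."""
        self._services[annotation] = func

    def invoke(self, annotation, *args):
        """Invoke a capability by its annotation, as a client
        would invoke the corresponding Web service."""
        return self._services[annotation](*args)

# A hypothetical connected lamp and its single capability.
lamp = Avatar("urn:dev:lamp-42")
state = {"on": False}
lamp.expose("ex:switchOn", lambda: state.update(on=True) or state["on"])
result = lamp.invoke("ex:switchOn")
```

The point of the abstraction is that clients address `ex:switchOn` (a meaning), not a device-specific API, which is what lets avatars compose capabilities across heterogeneous objects.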

Journal ArticleDOI
TL;DR: This work discusses the evolution of search engine ranking factors in a Web 2.0 and Web 3.0 context and develops a mechanism that delivers quality SEO based on LDA and state-of-the-art Search Engine (SE) metrics.

Journal ArticleDOI
TL;DR: In this article, the authors performed a semi-structured interview with six web API developers and investigated how major web API providers organize their API evolution, and how this affects source code changes of their clients.

Proceedings ArticleDOI
28 Sep 2015
TL;DR: The performance of web services for Enterprise Application Integration (EAI) based on SOAP and REST is compared, with throughput and response time considered as metrics for evaluation.
Abstract: Web services are a common means to exchange data and information over the network. Web services make themselves available over the Internet, where they are technology- and platform-independent. Once a web service is built, it is accessed via a uniform resource locator (URL) and its functionality can be utilized in the application domain. Web services are self-contained, modular, distributed and dynamic in nature. They are described and then published in a service registry, e.g., UDDI, and then invoked over the Internet. Web services are the basic building blocks of Service Oriented Architecture (SOA). Web services can be developed based on two interaction styles: Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). It is important to select the appropriate interaction style, i.e., either SOAP or REST, for building web services. Choosing a service interaction style is an important architectural decision for designers and developers, as it influences the underlying requirements for implementing web service solutions. In this study, the performance of web services for Enterprise Application Integration (EAI) based on SOAP and REST is compared. Since web services operate over a network, throughput and response time are considered as metrics for evaluation.
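The trade-off the study measures can be made concrete by constructing the same hypothetical call in both styles. The endpoint, operation and parameter names are invented, and a real comparison would time actual HTTP round trips rather than compare payloads:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(operation, params):
    """Wrap an operation call in a SOAP 1.1 envelope (envelope body
    only; a real client also needs HTTP headers and a WSDL contract)."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

def rest_request(resource, params):
    """The same call expressed as a RESTful GET on a resource URL."""
    return f"https://api.example.com/{resource}?" + urlencode(params)

params = {"customerId": 42}
soap_msg = soap_request("GetCustomer", params)
rest_msg = rest_request("customers", params)
# The SOAP payload carries envelope overhead on every call; over a
# network this is one reason REST often shows lower response times
# in EAI benchmarks like the one in this study.
```

SOAP buys strict contracts (WSDL, WS-Security) at the cost of verbosity; REST trades those guarantees for lighter messages, which is the architectural decision the paper discusses.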

Journal ArticleDOI
TL;DR: The 45 most downloaded WA extensions for Mozilla Firefox and Google Chrome are appraised, and a systematic literature review is conducted to identify which quality issues have received the most attention in the literature.
Abstract: Today’s web personalization technologies use approaches like user categorization, configuration, and customization but do not fully support individualized requirements. As a significant portion of our social and working interactions are migrating to the web, we can expect an increase in these kinds of minority requirements. Browser-side transcoding holds the promise of facilitating this aim by opening personalization to third parties through web augmentation (WA), realized in terms of extensions and userscripts. WA is to the web what augmented reality is to the physical world: to layer relevant content/layout/navigation over the existing web to improve the user experience. From this perspective, WA is not as powerful as web personalization since its scope is limited to the surface of the web. However, it permits this surface to be tuned by developers other than the sites’ webmasters. This opens up the web to third parties who might come up with imaginative ways of adapting the web surface for their own purposes. Its success is backed up by millions of downloads. This work looks at this phenomenon, delving into the “what,” the “why,” and the “what for” of WA, and surveys the challenges ahead for WA to thrive. To this end, we appraise the most downloaded 45 WA extensions for Mozilla Firefox and Google Chrome as well as conduct a systematic literature review to identify what quality issues received the most attention in the literature. The aim is to raise awareness about WA as a key enabler of the personal web and point out research directions.

Journal ArticleDOI
TL;DR: A substantial amount of work remains to be done to improve the current state of research in the area of supporting semantic web services, and some approaches available for semantically annotating functional and non-functional aspects of web services are identified.
Abstract: Context: Semantically annotating web services is gaining more attention as an important aspect of supporting the automatic matchmaking and composition of web services. Therefore, the support of well-known and agreed ontologies and tools for the semantic annotation of web services is becoming a key concern in helping the diffusion of semantic web services. Objective: The objective of this systematic literature review is to summarize the current state of the art for supporting the semantic annotation of web services by providing answers to a set of research questions. Method: The review follows a predefined procedure that involves automatically searching well-known digital libraries. As a result, a total of 35 primary studies were identified as relevant. A manual search led to the identification of 9 additional primary studies that were not found during the automatic search of the digital libraries. Required information was extracted from these 44 studies against the selected research questions and reported. Results: Our systematic literature review identified some approaches available for semantically annotating functional and non-functional aspects of web services. However, many of the approaches are either not validated or the validation done lacks credibility. Conclusion: We believe that a substantial amount of work remains to be done to improve the current state of research in the area of supporting semantic web services.

Journal ArticleDOI
TL;DR: Just as Internet connectivity has enabled intuitive information sharing and interaction through the Web, so the Internet of Things might be the basis for the Web of Things, enabling equally simple interaction among devices, systems, users, and applications.
Abstract: Just as Internet connectivity has enabled intuitive information sharing and interaction through the Web, so the Internet of Things might be the basis for the Web of Things, enabling equally simple interaction among devices, systems, users, and applications.

Journal ArticleDOI
TL;DR: This paper describes MAUVE, a software environment for Web site accessibility and usability evaluation that checks both HTML and CSS to detect accessibility issues and can also validate dynamic sites, based on a set of plugins for the most popular browsers.
Abstract: During the last decade, Web site accessibility and usability have become increasingly important. Consequently, many tools have been developed for automatic or semi-automatic evaluation of Web site accessibility. Unfortunately, most of them have not been updated over time to keep up with the evolution of accessibility standards and guidelines, thus soon becoming obsolete. Furthermore, the increasing importance of CSS in the definition of modern Web page layout, and the increasing use of scripting technologies in dynamic and interactive Web sites, has led to new challenges in automatic accessibility evaluation that few of the existing tools are able to face. This paper describes MAUVE, a software environment for Web site accessibility and usability evaluation. The tool is characterized by the possibility to specify and update the guidelines that should be validated without requiring changes in the tool implementation. It is based on an XML-based language for Web Guidelines Definition. It allows checking both HTML and CSS to detect accessibility issues and is able to validate dynamic sites as well, based on the use of a set of plugins for the most popular browsers.
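To give a minimal flavor of the kind of guideline check such a tool automates, here is an alt-text check sketched with Python's standard library. This is not MAUVE's implementation, which is driven by an XML-based guideline definition language; it only illustrates the category of rule:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> elements lacking an alt attribute -- one of the
    simplest WCAG checks an accessibility evaluator performs."""
    def __init__(self):
        super().__init__()
        self.images = 0
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
            if "alt" not in dict(attrs):
                self.violations += 1

# Illustrative markup: one compliant image, one violation.
page = ('<p><img src="logo.png" alt="Company logo">'
        '<img src="deco.png"></p>')
checker = AltTextChecker()
checker.feed(page)
```

Tools like MAUVE externalize such rules into a guideline language precisely so new WCAG versions can be supported without changing code like this.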

BookDOI
01 Jan 2015
TL;DR: This book demonstrates how to create and interlink five-star Linked Open Data to reach a wider audience, encourage data reuse, and provide content that can be automatically processed with full certainty.
Abstract: Mastering Structured Data on the Semantic Web explains the practical aspects and the theory behind the Semantic Web and how structured data, such as HTML5 Microdata and JSON-LD annotations, can be used to improve your site's performance on next-generation Search Engine Result Pages and be displayed on Google Knowledge Panels. You will learn how to represent data in a machine-interpretable form, using the Resource Description Framework (RDF), the cornerstone of the Semantic Web. You will see how to store and manipulate RDF data in ways that benefit Big Data applications, such as the Google Knowledge Graph, Wikidata, or Facebook's Social Graph. The book also covers the most important tools for manipulating RDF data, including, but not limited to, Protégé, TopBraid Composer, Sindice, Apache Marmotta, Callimachus, and Tabulator. You will learn to use the Apache Jena and Sesame APIs for rapid Semantic Web application development. Mastering Structured Data on the Semantic Web demonstrates how to create and interlink five-star Linked Open Data to reach a wider audience, encourage data reuse, and provide content that can be automatically processed with full certainty. The book is for web developers and search engine optimization (SEO) experts who want to learn state-of-the-art SEO methods. The book will also benefit researchers interested in automatic knowledge discovery.
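The JSON-LD annotations the abstract mentions can be illustrated with a minimal sketch: a schema.org description embedded the way search engines expect it. The snippet is illustrative, not taken from the book:

```python
import json

# A minimal JSON-LD annotation using the schema.org vocabulary -- the kind of
# structured data that search engines read to populate Knowledge Panels.
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Mastering Structured Data on the Semantic Web",
    "datePublished": "2015",
}

# Embedded in a page as a script block of type application/ld+json:
markup = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(book, indent=2)
print(markup)
```

The same description could equally be expressed as HTML5 Microdata attributes on the page's visible markup; JSON-LD keeps the annotation separate from the presentation.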

Journal ArticleDOI
TL;DR: An overview of current approaches to service composition is given according to a set of features, and related core problems and future directions of service composition mechanisms are pointed out.
Abstract: Web Service composition is becoming the most promising way for business-to-business systems integration. However, current mechanisms for service composition entail trade-offs among multiple, complex factors. As a result, existing solutions based on business Web Services, semantic Web Services, or the more recent RESTful services lack standardized adoption. This paper gives an overview of current approaches according to a set of features. Moreover, related core problems and future directions of service composition mechanisms are pointed out.
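The sequential, orchestration-style composition that these approaches share can be reduced to a minimal sketch, with local functions standing in for remote Web Service calls (all names and data here are illustrative):

```python
# Minimal orchestration sketch: two stand-ins for remote Web Services,
# chained so that the output of one becomes the input of the next.

def geocode_service(address: str) -> dict:
    # Stand-in for a remote geocoding service (toy lookup table).
    directory = {"Trento, Italy": {"lat": 46.07, "lon": 11.12}}
    return directory[address]

def weather_service(coords: dict) -> str:
    # Stand-in for a remote weather service keyed by coordinates.
    return "sunny" if coords["lat"] > 45 else "cloudy"

def composite_service(address: str) -> str:
    """Sequential composition: geocode the address, then look up the weather."""
    return weather_service(geocode_service(address))

print(composite_service("Trento, Italy"))  # -> sunny
```

Real composition languages and platforms add what this sketch omits: data mapping between incompatible message schemas, fault handling, transactions, and quality-of-service constraints, which is where the trade-offs the paper surveys arise.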

Book
01 Jul 2015
TL;DR: Mastering Structured Data on the Semantic Web demonstrates how to represent and connect structured data to reach a wider audience, encourage data reuse, and provide content that can be automatically processed with full certainty.
Abstract: A major limitation of conventional web sites is their unorganized and isolated content, which is created mainly for human consumption. This limitation can be addressed by organizing and publishing data using powerful formats that add structure and meaning to the content of web pages and link related data to one another. Computers can "understand" such data better, which can be useful for task automation. The web sites that provide semantics (meaning) to software agents form the Semantic Web, the Artificial Intelligence extension of the World Wide Web. In contrast to the conventional Web (the "Web of Documents"), the Semantic Web includes the "Web of Data", which connects "things" (representing real-world humans and objects) rather than documents meaningless to computers. Mastering Structured Data on the Semantic Web explains the practical aspects and the theory behind the Semantic Web and how structured data, such as HTML5 Microdata and JSON-LD, can be used to improve your site's performance on next-generation Search Engine Result Pages and be displayed on Google Knowledge Panels. You will learn how to represent arbitrary fields of human knowledge in a machine-interpretable form using the Resource Description Framework (RDF), the cornerstone of the Semantic Web. You will see how to store and manipulate RDF data in purpose-built graph databases such as triplestores and quadstores, which are exploited in Internet marketing, social media, and data mining, in the form of Big Data applications such as the Google Knowledge Graph, Wikidata, or Facebook's Social Graph. With constantly increasing user expectations for web services and applications, Semantic Web standards are gaining popularity. This book will familiarize you with the leading controlled vocabularies and ontologies and explain how to represent your own concepts.
After learning the principles of Linked Data, the five-star deployment scheme, and the Open Data concept, you will be able to create and interlink five-star Linked Open Data and merge your RDF graphs into the LOD Cloud. The book also covers the most important tools for generating, storing, extracting, and visualizing RDF data, including, but not limited to, Protégé, TopBraid Composer, Sindice, Apache Marmotta, Callimachus, and Tabulator. You will learn to implement Apache Jena and Sesame in popular IDEs such as Eclipse and NetBeans, and use these APIs for rapid Semantic Web application development. Mastering Structured Data on the Semantic Web demonstrates how to represent and connect structured data to reach a wider audience, encourage data reuse, and provide content that can be automatically processed with full certainty. As a result, your web content will be an integral part of the next revolution of the Web. What you'll learn: extend your markup with machine-readable annotations and get your data to the Google Knowledge Graph; represent real-world objects and persons with machine-interpretable code; develop Semantic Web applications in Java; reuse and interlink structured data and create LOD datasets. Who this book is for: the book is intended for web developers and SEO experts who want to learn state-of-the-art Search Engine Optimization methods using machine-readable annotations and machine-interpretable Linked Data definitions. The book will also benefit researchers interested in automatic knowledge discovery. As a textbook on Semantic Web standards powered by graph theory and mathematical logic, the book could also be used as a reference work for computer science graduates and Semantic Web researchers.
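The RDF model this abstract builds on can be sketched in a few lines: data as subject-predicate-object triples, queried by pattern matching, the primitive underlying SPARQL basic graph patterns. This is a toy in-memory graph, not one of the triplestores the book covers; the FOAF namespace is real, but the example resources are made up:

```python
# RDF data is a set of (subject, predicate, object) triples.
FOAF = "http://xmlns.com/foaf/0.1/"

triples = {
    ("http://example.org/alice", FOAF + "name", "Alice"),
    ("http://example.org/alice", FOAF + "knows", "http://example.org/bob"),
    ("http://example.org/bob", FOAF + "name", "Bob"),
}

def match(s=None, p=None, o=None):
    """Triple-pattern matching; None acts as a wildcard, like a SPARQL variable."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Who does Alice know?
for _, _, friend in match(s="http://example.org/alice", p=FOAF + "knows"):
    print(friend)
```

Real triplestores and quadstores add indexing, persistence, named graphs, and full SPARQL on top of exactly this triple-pattern primitive.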

Journal ArticleDOI
TL;DR: This paper presents an approach for investigating the accessible content of educational websites, measuring their compliance with accessibility standards for visually impaired people, and investigating the applicability of existing standards to educational institutions' websites.
Abstract: Web accessibility concerns building websites that are accessible to all people, regardless of ability or disability. The W3C Web Accessibility Initiative (WAI) was established to raise awareness of universal access. WAI develops guidelines that can help ensure that Web pages are widely accessible. Assistive technology is used to increase, improve, and maintain the capabilities of disabled persons to execute tasks that are sometimes difficult or impossible to do without technical aid; it also helps them carry out their scholarly, professional, and social activities. This paper presents an approach for investigating the accessible content of educational websites to measure its compliance with accessibility standards for visually impaired people. The study focuses on existing standards and investigates their applicability to educational institutions' websites, with the aim of increasing the accessibility of the e-learning materials these institutions provide. A sample of websites of selected universities in Jordan is evaluated in terms of accessibility, in comparison to some university websites in England and the Arab region. Results show that the accessibility errors of university websites in Jordan and the Arab region exceed those in the UK by factors of 13 and 5, respectively.

Proceedings ArticleDOI
27 May 2015
TL;DR: This demonstration presents user interaction with DataXFormer, shows scenarios of how it can be used to transform data, and explores the effectiveness and efficiency of several approaches to transformation discovery.
Abstract: While syntactic transformations, such as unit conversions or date format conversions, require applying a formula to the input values, semantic transformations, such as "zip code to city", require a look-up in some reference data. We recently presented DataXFormer, a system that leverages Web tables, Web forms, and expert sourcing to cover a wide range of transformations. In this demonstration, we present user interaction with DataXFormer, show scenarios of how it can be used to transform data, and explore the effectiveness and efficiency of several approaches to transformation discovery, leveraging about 112 million tables and online sources.
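The paper's distinction between syntactic and semantic transformations can be made concrete with a toy sketch. The reference table here is a stand-in; DataXFormer discovers such mappings from roughly 112 million Web tables and online sources rather than from a hand-written dictionary:

```python
# Syntactic vs. semantic transformations, in the paper's sense.

def fahrenheit_to_celsius(f: float) -> float:
    """Syntactic transformation: a formula applied to the input value."""
    return (f - 32) * 5 / 9

# Semantic transformation: a look-up in reference data. This toy table plays
# the role of the Web tables / Web forms DataXFormer mines at scale.
ZIP_TO_CITY = {"10001": "New York", "60601": "Chicago"}

def zip_to_city(zip_code: str) -> str:
    return ZIP_TO_CITY[zip_code]

print(fahrenheit_to_celsius(212))  # -> 100.0
print(zip_to_city("60601"))        # -> Chicago
```

The hard part, which the demonstration explores, is not applying a known mapping but discovering which of millions of candidate tables and forms actually encodes the transformation the user wants.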