
Showing papers on "Web standards" published in 2016


Journal ArticleDOI
TL;DR: This paper proposes a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them, and discusses the framework's compliance with REST guidelines and its major implementation choices.
Abstract: The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.

98 citations
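To make the resource-oriented style concrete, here is a minimal sketch, not taken from the paper's framework, of how a smart thing might be exposed as a web resource with a RESTful API using Python and Flask; the thing identifier, URL layout, and simulated readings are invented for illustration.

```python
# Minimal sketch: a smart thing exposed as two linked REST resources.
import random
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/things/temperature-sensor", methods=["GET"])
def describe_thing():
    # Resource representation of the smart thing itself
    return jsonify({"id": "temperature-sensor",
                    "type": "sensor",
                    "links": {"value": "/things/temperature-sensor/value"}})

@app.route("/things/temperature-sensor/value", methods=["GET"])
def read_value():
    # Latest observation exposed as a sub-resource (simulated here)
    return jsonify({"unit": "celsius", "value": round(random.uniform(18, 25), 1)})

if __name__ == "__main__":
    app.run(port=8080)
```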


Book
01 Jan 2016
TL;DR: This step-by-step book teaches you how to use web protocols to connect real-world devices to the web, including the Semantic and Social Webs; by the end, you'll have the practical skills you need to implement your own web-connected products and services.
Abstract: A hands-on guide that will teach you how to design and implement scalable, flexible, and open IoT solutions using web technologies. This book focuses on providing the right balance of theory, code samples, and practical examples to enable you to successfully connect all sorts of devices to the web and to expose their services and data over REST APIs. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the Technology: Because the Internet of Things is still new, there is no universal application protocol. Fortunately, the IoT can take advantage of the web, where IoT protocols connect applications thanks to universal and open APIs.

About the Book: Building the Web of Things is a guide to using cutting-edge web technologies to build the IoT. This step-by-step book teaches you how to use web protocols to connect real-world devices to the web, including the Semantic and Social Webs. Along the way you'll gain vital concepts as you follow instructions for making Web of Things devices. By the end, you'll have the practical skills you need to implement your own web-connected products and services.

What's Inside:
- Introduction to IoT protocols and devices
- Connect electronic actuators and sensors (GPIO) to a Raspberry Pi
- Implement standard REST and Pub/Sub APIs with Node.js on embedded systems
- Learn about IoT protocols like MQTT and CoAP and integrate them into the Web of Things
- Use the Semantic Web (JSON-LD, RDFa, etc.) to discover and find Web Things
- Share Things via Social Networks to create the Social Web of Things
- Build a web-based smart home with HTTP and WebSocket
- Compose physical mashups with EVRYTHNG, Node-RED, and IFTTT

About the Reader: For both seasoned programmers and those with only basic programming skills.

About the Authors: Dominique Guinard and Vlad Trifa pioneered the Web of Things and cofounded EVRYTHNG, a large-scale IoT cloud powering billions of Web Things.

91 citations
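The book's code samples are in Node.js; as a rough Python analogue of the Pub/Sub part, the sketch below publishes a sensor reading to an MQTT broker with paho-mqtt. The broker, topic, and payload are made up, and the snippet assumes the paho-mqtt 1.x API.

```python
# Rough Python analogue of an MQTT publish loop (the book uses Node.js).
# Assumes the paho-mqtt 1.x API; version 2.x additionally requires a
# CallbackAPIVersion argument to mqtt.Client().
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("test.mosquitto.org", 1883)   # public test broker, for illustration only
client.loop_start()

for _ in range(3):
    reading = {"sensor": "pi-livingroom", "celsius": 21.5}   # made-up payload
    client.publish("wot/demo/temperature", json.dumps(reading), qos=1)
    time.sleep(2)

client.loop_stop()
client.disconnect()
```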


Journal ArticleDOI
TL;DR: As Web APIs become the backbone of Web, cloud, mobile, and machine learning applications, the services computing community will need to expand and embrace opportunities and challenges from these domains.
Abstract: As Web APIs become the backbone of Web, cloud, mobile, and machine learning applications, the services computing community will need to expand and embrace opportunities and challenges from these domains.

88 citations


Journal ArticleDOI
TL;DR: The research will assist anyone in the data and information management industry to identify opportunities and mitigate risk, and will assist data managers to identify future opportunities while considering negative impacts and understanding the underlying technologies associated with the structure and storage of electronic information.
Abstract: The purpose of this study is to define Web 3.0 and discuss the underlying technologies, identify new opportunities and highlight potential challenges that are associated with the evolution to Web 3.0 technologies. A non-empirical study reviewing papers published in accredited research journals, articles, whitepapers and websites was conducted. To add scientific rigour to a literature review, a four-stage approach, as suggested by Sylvester et al. (2011), was used. The World Wide Web (henceforth referred to as the Web) is recognised as the fastest growing publication medium of all time. To stay competitive, it is crucial to stay up to date with technological trends. The Web matures in its own unique way. From the static informative characteristics of Web 1.0, it progressed into the interactive experience Web 2.0 provides. The next phase of Web evolution, Web 3.0, is already in progress. Web 3.0 entails an integrated Web experience where the machine will be able to understand and catalogue data in a manner similar to humans. This will facilitate a world wide data warehouse where any format of data can be shared and understood by any device over any network. The evolution of the Web will bring forth new opportunities and challenges. Opportunities identified can mainly be characterised as the autonomous integration of data and services which increase the pre-existing capabilities of Web services, as well as the creation of new functionalities. The challenges mainly concern unauthorised access and manipulation of data, autonomous initiation of actions and the development of harmful scripts and languages. The findings will assist data managers to identify future opportunities while considering negative impacts and understanding the underlying technologies associated with the structure and storage of electronic information. The research will assist anyone in the data and information management industry to identify opportunities and mitigate risk. Many organisations were caught off guard by the evolution of the Web to Web 2.0. Organisations, and in particular anyone in the data and information management industry, need to be ready and acquire knowledge about the opportunities and challenges arising from Web 3.0 technologies.

86 citations


Journal ArticleDOI
TL;DR: Results of this study imply that educators typically have a narrow conception of Web 2.0 technologies and there is a wide array of Web 2.0 tools and approaches yet to be fully harnessed by learning designers and educational researchers.
Abstract: This paper presents the methods and outcomes of a typological analysis of Web 2.0 technologies. A comprehensive review incorporating over 2000 links led to identification of over 200 Web 2.0 technologies that were suitable for learning and teaching purposes. The typological analysis involved development of relevant Web 2.0 dimensions, grouping cases according to observed regularities and construction of types based on meaningful relationships. Characterisation of the constructed types incorporated descriptions based on attributes, examples of representative instances and typical pedagogical use cases. The analysis resulted in a typology of 37 types of Web 2.0 technologies that were arranged into 14 clusters. Results of this study imply that educators typically have a narrow conception of Web 2.0 technologies and there is a wide array of Web 2.0 tools and approaches yet to be fully harnessed by learning designers and educational researchers.

80 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: The Semantic Web Stack for the Internet of Things is presented pointing out some of its shortcomings in the development of an IoT application or service.
Abstract: The reality of Internet of Things (IoT), with its growing number of devices and their diversity is challenging current approaches and technologies for a smarter integration of their data, applications and services. While the Web is seen as a convenient platform for integrating things, the Semantic Web can further improve its capacity to understand things' data and facilitate their interoperability. In this paper we present an overview of some of the Semantic Web technologies used in IoT systems, as well as some of the well accepted ontologies used to develop applications and services for the IoT. We finally present the Semantic Web Stack for the Internet of Things pointing out some of its shortcomings in the development of an IoT application or service.

79 citations


Journal ArticleDOI
TL;DR: The state of the art of semantic web service search is briefly surveyed; the key idea of semantic web services is to enable applications to perform high-precision search and automated composition of services based on formal ontology-based representations of service semantics.
Abstract: Scalable means for the search of relevant web services are essential for the development of intelligent service-based applications in the future Internet. The key idea of semantic web services is to enable such applications to perform a high-precision search and automated composition of services based on formal ontology-based representations of service semantics. In this paper, we briefly survey the state of the art of semantic web service search.

70 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel web service recommendation approach incorporating a user's potential QoS preferences and diversity feature of user interests on web services, and presents an innovative diversity-aware web service ranking algorithm to rank theweb service candidates based on their scores, and diversity degrees derived from the web service graph.
Abstract: The last decade has witnessed a tremendous growth of web services as a major technology for sharing data, computing resources, and programs on the web. With the increasing adoption and presence of web services, design of novel approaches for effective web service recommendation to satisfy users' potential requirements has become of paramount importance. Existing web service recommendation approaches mainly focus on predicting missing QoS values of web service candidates which are interesting to a user using a collaborative filtering approach, a content-based approach, or their hybrid. These recommendation approaches assume that recommended web services are independent of each other, which sometimes may not be true. As a result, many similar or redundant web services may exist in a recommendation list. In this paper, we propose a novel web service recommendation approach incorporating a user's potential QoS preferences and the diversity of user interests in web services. Users' interests and QoS preferences on web services are first mined by exploring the web service usage history. Then we compute scores of web service candidates by measuring their relevance with historical and potential user interests, and their QoS utility. We also construct a web service graph based on the functional similarity between web services. Finally, we present an innovative diversity-aware web service ranking algorithm to rank the web service candidates based on their scores, and diversity degrees derived from the web service graph. Extensive experiments are conducted based on a real world web service dataset, indicating that our proposed web service recommendation approach significantly improves the quality of the recommendation results compared with existing methods.

66 citations
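The paper's ranking algorithm derives diversity degrees from a web service graph; as a loose illustration of the general idea of diversity-aware re-ranking, the toy sketch below greedily balances each candidate's score against its similarity to services already selected. All service names, scores, and similarity values are invented.

```python
# Toy greedy re-ranking trading off a candidate's relevance/QoS score against
# its similarity to already-selected services (a maximal-marginal-relevance
# style heuristic, not the paper's exact algorithm).

def diversity_rerank(candidates, scores, similarity, k=3, lam=0.7):
    """Pick k services balancing individual score and dissimilarity to picks."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def marginal(c):
            max_sim = max((similarity[c][s] for s in selected), default=0.0)
            return lam * scores[c] - (1 - lam) * max_sim
        best = max(remaining, key=marginal)
        selected.append(best)
        remaining.remove(best)
    return selected

services = ["weatherA", "weatherB", "mapsA"]
scores = {"weatherA": 0.9, "weatherB": 0.85, "mapsA": 0.7}
similarity = {
    "weatherA": {"weatherB": 0.95, "mapsA": 0.1},
    "weatherB": {"weatherA": 0.95, "mapsA": 0.1},
    "mapsA": {"weatherA": 0.1, "weatherB": 0.1},
}
print(diversity_rerank(services, scores, similarity, k=2))
# -> ['weatherA', 'mapsA']: the near-duplicate weatherB is demoted
```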


Journal ArticleDOI
TL;DR: A general discussion of past and recent trends that may positively influence the direction of Web 2.0 is presented, along with criteria and future directions for Web 3.0.

63 citations


Journal ArticleDOI
TL;DR: The analyses show that personalizing the Web by self-reference and content relevance has a significant moderator role in influencing the relationship between determinants of intention to use and behavioral intention in certain cases.
Abstract: E-Commerce firms have adopted Web Personalization techniques extensively in the form of recommender systems for influencing user behavior for customer retention. Although there are numerous studies in this area, academic research addressing the role of Web Personalization in user acceptance of technology is very scant. Further, owing to the potential of recommender systems to attract and retain customers, most studies in web personalization have been done in an E-Commerce setting. In this research, the 'Consumer Acceptance and Use of Information Technology' theory proposed in previous research has been extended to include web personalization as a moderator and has been tested in an E-Government context. Data collection involved conducting a laboratory experiment with the treatment group receiving personalized web forms for requesting an E-Government service. Our analyses show that personalizing the Web by self-reference and content relevance has a significant moderator role in influencing the relationship between determinants of intention to use and behavioral intention in certain cases.

60 citations


Posted Content
TL;DR: The results show that, although there is a positive relationship between the Web 1.0 information transparency and the presence of the city councils on social media, and their intensity of use, the effect on transparency is basically ornamental, focused on general information.
Abstract: The purpose of this study is to provide a Web 2.0 Disclosure Index to measure the Web 2.0 presence of Spanish city councils and the information disclosed by them on these media, and to test whether the use of Web 2.0 tools and social media by local governments improves their Web 1.0 digital transparency. We have structured the Web 2.0 Index as the sum of three partial indexes, referring to the presence, content and interactivity of the Web, and we have estimated these indexes by a content analysis of the city councils' websites. We find that the use of Web 2.0 tools has an essentially ornamental focus, and thus it is necessary to increase the content disclosed, especially at the information level. The results also show that, although there is a positive relationship between the Web 1.0 information transparency and the presence of the city councils on social media, and their intensity of use, the effect on transparency is basically ornamental, focused on general information. We also find that those city councils that obtain better Web 1.0 scores also have higher scores in the Web 2.0 setting, but are more focused on promotional issues than on the disclosure of information about the entities' management.

Journal ArticleDOI
TL;DR: A system is presented that employs Web intelligence to perform automatic adaptations on individual elements of a Web page; a reinforcement learning algorithm is used to manage user profiles in order to understand and predict users' behaviors and needs.

Proceedings ArticleDOI
22 May 2016
TL;DR: Verena is presented, a web application platform that provides end-to-end integrity guarantees against attackers that have full access to the web and database servers and can support real applications with modest overhead.
Abstract: Web applications rely on web servers to protect the integrity of sensitive information. However, an attacker gaining access to web servers can tamper with the data and query computation results, and thus serve corrupted web pages to the user. Violating the integrity of the web page can have serious consequences, affecting application functionality and decision-making processes. Worse yet, data integrity violation may affect physical safety, as in the case of medical web applications which enable physicians to assign treatment to patients based on diagnostic information stored at the web server. This paper presents Verena, a web application platform that provides end-to-end integrity guarantees against attackers that have full access to the web and database servers. In Verena, a client's browser can verify the integrity of a web page by verifying the results of queries on data stored at the server. Verena provides strong integrity properties such as freshness, completeness, and correctness for a common set of database queries, by relying on a small trusted computing base. In a setting where there can be many users with different write permissions, Verena allows a developer to specify an integrity policy for query results based on our notion of trust contexts, and then enforces this policy efficiently. We implemented and evaluated Verena on top of the Meteor framework. Our results show that Verena can support real applications with modest overhead.
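Verena's integrity checks rely on authenticated data structures maintained with the help of a small trusted computing base; purely as a toy illustration of how a client can check that returned records belong to a dataset with a known digest, here is a self-contained Merkle-tree membership check in Python. This is not Verena's actual protocol, and the records are invented.

```python
# Toy Merkle-tree membership check: the client holds a trusted root digest and
# verifies that a record returned by the server hashes up to that root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf hashes."""
    level = leaves[:]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Return the sibling hashes needed to recompute the root for leaves[index]."""
    proof, level, idx = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = idx + 1 if idx % 2 == 0 else idx - 1
        proof.append((level[sibling], idx % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(leaf, proof, root):
    """Client-side check that a record belongs to the tree with the trusted root."""
    node = leaf
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

records = [b"patient:1|dose:5mg", b"patient:2|dose:10mg", b"patient:3|dose:2mg"]
leaves = [h(r) for r in records]
root = merkle_root(leaves)               # obtained via a trusted channel
proof = merkle_proof(leaves, 1)          # supplied by the untrusted server
assert verify(h(records[1]), proof, root)
```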

Journal ArticleDOI
TL;DR: Seeking to make Web data "smarter" by utilizing a new kind of semantics, this paper proposes a new approach to semantics called "semantics-based reinforcement learning".
Abstract: From the very early days of the World Wide Web, researchers identified a need to be able to understand the semantics of the information on the Web in order to enable intelligent systems to do a better job of processing the booming Web of documents. Early proposals included labeling different kinds of links to differentiate, for example, pages describing people from those describing projects, events, and so on. By the late 1990s, this effort had led to a broad area of Computer Science research that became known as the Semantic Web [Berners-Lee et al. 2001]. In the past decade and a half, the early promise of enabling software agents on the Web to talk to one another in a meaningful way inspired advances in a multitude of areas: defining languages and standards to describe and query the semantics of resources on the Web, developing tractable and efficient ways to reason with these representations and to query them efficiently, understanding patterns in describing knowledge, and defining ontologies that describe Web data to allow greater interoperability.
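As a small, self-contained illustration of the kind of machine-readable semantics the article describes, the sketch below builds a few RDF triples and queries them with SPARQL using the rdflib library; the vocabulary and data are invented for the example.

```python
# Describe resources as RDF triples and query them with SPARQL (rdflib).
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.worksOn, EX.semanticWebProject))
g.add((EX.semanticWebProject, RDF.type, EX.Project))
g.add((EX.semanticWebProject, EX.title, Literal("Semantic Web survey")))

# Find every person together with the project they work on
q = """
SELECT ?person ?project WHERE {
    ?person a ex:Person ;
            ex:worksOn ?project .
}
"""
for person, project in g.query(q, initNs={"ex": EX}):
    print(person, "->", project)
```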

Book ChapterDOI
17 Oct 2016
TL;DR: A refactored and cleaned SWDF dataset is provided; a novel data model is used which improves the Semantic Web Conference Ontology, adopting best ontology design practices; and an open source workflow is provided to support a healthy growth of the dataset beyond the Semantic Web conferences.
Abstract: The Semantic Web Dog Food (SWDF) is the reference linked dataset of the Semantic Web community about papers, people, organisations, and events related to its academic conferences. In this paper we analyse the existing problems of generating, representing and maintaining Linked Data for the SWDF. With this work (i) we provide a refactored and cleaned SWDF dataset; (ii) we use a novel data model which improves the Semantic Web Conference Ontology, adopting best ontology design practices and (iii) we provide an open source workflow to support a healthy growth of the dataset beyond the Semantic Web conferences.

Proceedings ArticleDOI
01 Jun 2016
TL;DR: This paper mines both developer forums and Stack Overflow to find the common challenges encountered by client developers and highlights a list of dominant concerns and persistent concerns for each Web API that Web API providers should pay more attention to.
Abstract: Popularity of service-oriented computing makes more and more companies and organizations provide their services through Web Application Program Interfaces (Web APIs). The Web APIs are considered to offer a convenient way to integrate web services to client applications. However, the integration process is often challenging. For example, updated Web APIs may be no longer compatible with the current version of client applications, thus break the client applications. To help the integration process, it is of significant interest to understand the challenges that are encountered by client developers. Developer forums and Stack Overflow are commonly used by client developers to seek help from fellow peers. In this paper, we mine both developer forums and Stack Overflow to find the common challenges encountered by client developers. We perform an empirical study on 32 Web APIs with a total of 92,471 discussions. To extract topics from all discussions, we apply a topic modeling technique called Latent Dirichlet Allocation (LDA). The results show that on average five dominant topics can cover at least 50% of questions regarding each Web API. We further investigate how topics evolve across Web APIs, and find five patterns. As a summary, our findings highlight a list of dominant concerns and persistent concerns for each Web API that Web API providers should pay more attention to.
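A minimal sketch of the topic-extraction step described in the paper, using scikit-learn's LDA implementation; the three toy "discussions" stand in for the mined forum and Stack Overflow posts and are not from the study's dataset.

```python
# Fit LDA over (toy) developer discussions and print the top words per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

discussions = [
    "oauth token expired authentication error when calling the api",
    "rate limit exceeded how many requests per hour are allowed",
    "breaking change after api version update client no longer compatible",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(discussions)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top}")
```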

Proceedings ArticleDOI
04 Apr 2016
TL;DR: An approach to generate Web APIs out of models, paving the way for managing models and collaborating on them online; the generated APIs rely on well-known libraries and standards, which facilitates their comprehension and maintainability.
Abstract: In the last years, there has been an increasing interest for Model-Driven Engineering (MDE) solutions in the Web. Web-based modeling solutions can leverage on better support for distributed management (i.e., the Cloud) and collaboration. However, current modeling environments and frameworks are usually restricted to desktop-based scenarios and therefore their capabilities to move to the Web are still very limited. In this paper we present an approach to generate Web APIs out of models, thus paving the way for managing models and collaborating on them online. The approach, called EMF-REST, takes Eclipse Modeling Framework (EMF) data models as input and generates Web APIs following the REST principles and relying on well-known libraries and standards, thus facilitating its comprehension and maintainability. Also, EMF-REST integrates model and Web-specific features to provide model validation and security capabilities, respectively, to the generated API.
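EMF-REST itself consumes EMF models and generates Web APIs in Java; purely to illustrate the underlying idea of deriving a REST surface from a model description, here is a toy Python sketch that maps a made-up model to a CRUD route table.

```python
# Toy illustration of deriving REST routes from a model description
# (not EMF-REST itself; the model below is invented).
model = {
    "classes": [
        {"name": "Task", "attributes": ["title", "done"]},
        {"name": "Project", "attributes": ["label"]},
    ]
}

def derive_routes(model):
    """Map every model class to collection and item endpoints with CRUD verbs."""
    routes = []
    for cls in model["classes"]:
        base = "/" + cls["name"].lower() + "s"
        routes.append(("GET", base))              # list all instances
        routes.append(("POST", base))             # create an instance
        routes.append(("GET", base + "/<id>"))    # read one instance
        routes.append(("PUT", base + "/<id>"))    # update an instance
        routes.append(("DELETE", base + "/<id>")) # delete an instance
    return routes

for verb, path in derive_routes(model):
    print(f"{verb:6} {path}")
```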

Journal ArticleDOI
TL;DR: An objective test is performed on the official web pages of the Italian province and region chief towns to check their compliance with the 22 technical requirements defined by the Stanca Act.
Abstract: Accessibility of the Italian public administration web pages is ruled by the Stanca Act and in particular the Decree of the Minister issued on July 8, 2005. In this paper, an objective test is performed on the official web pages of the Italian province and region chief towns to check their compliance with the 22 technical requirements defined by the Stanca Act. A sample of 976 web pages belonging to the websites of the Italian chief towns was downloaded in the period October to December 2012. This data collection was submitted to AChecker, a widely recognized syntax and accessibility validation service. Several accessibility and syntax errors were found in the automatic analysis. These errors have been classified, statistics have been produced, and some graphs are included to offer an immediate view of the error distribution. Moreover, the most frequent errors are pointed out and explained in detail. Although the Stanca Act was promulgated some years ago and contains precise indications about updating a web page to be compliant with the 22 technical requirements, none of the analyzed websites is fully compliant with the law. Updating web pages to comply with the Stanca Act is a slow process, and some grave errors are still present, both in terms of syntax and accessibility.
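The study relied on the AChecker validation service; as a toy, self-contained illustration of what automated accessibility checking looks like, the snippet below flags a few common WCAG-style problems (missing alt text, missing lang attribute, unlabeled inputs) in an HTML fragment using BeautifulSoup. It is not a substitute for a full validator.

```python
# Flag a few common accessibility problems in an HTML snippet (toy example).
from bs4 import BeautifulSoup

html = """
<html><head><title>City portal</title></head>
<body>
  <img src="logo.png">
  <img src="map.png" alt="City map">
  <form><input type="text" name="q"></form>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
problems = []

if not soup.html.get("lang"):
    problems.append("missing lang attribute on <html>")
for img in soup.find_all("img"):
    if not img.get("alt"):
        problems.append(f"image without alt text: {img.get('src')}")
for inp in soup.find_all("input"):
    if not inp.get("id") or not soup.find("label", attrs={"for": inp.get("id")}):
        problems.append(f"input without an associated label: {inp.get('name')}")

for p in problems:
    print("ISSUE:", p)
```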

Journal ArticleDOI
TL;DR: An ontology-based approach to collect, integrate and store web analytics data from many sources of popular and commercial digital footprints is proposed; the enriched and semantically annotated data is used to train an intelligent system, involving data mining procedures, for the analysis of customer behavior in real e-commerce sites.
Abstract: Highlights: A semantic approach to represent and consolidate web analytics data is proposed. An OWL ontology for web analytics in e-commerce is designed and proposed. The proposed approach is validated with tracking data of 15 real-world e-shops. The obtained semantized data successfully train advanced data mining algorithms. We provide actual e-shops with tools to enhance their commercial activities. Web analytics has emerged as one of the most important activities in e-commerce, since it allows companies and e-merchants to track the behavior of customers when visiting their web sites. There exist a series of tools for web analytics that are used not only for tracking and measuring web traffic, but also for analyzing the commercial activity. However, most of these tools focus on low-level web attributes and metrics, making other sophisticated functionalities and analyses only available in commercial (non-free) versions. In this context, the SME-Ecompass European initiative aims at providing e-commerce SMEs with accessible tools for high-level web analytics. These software facilities should use different sources of data coming from digital footprints allocated in e-shops, fuse them together in a coherent way, and make them available for advanced data mining procedures. This motivated us to propose in this work an ontology-based approach to collect, integrate and store web analytics data from many sources of popular and commercial digital footprints. As the article's main impact, we obtain enriched and semantically annotated data that is used to properly train an intelligent system, involving data mining procedures, for the analysis of customer behavior in real e-commerce sites. Concretely, for the validation of our semantic approach, we have captured and integrated data from Google Analytics and Piwik digital footprints allocated in 15 e-shops of different commercial sectors and countries (UK, Spain, Greece and Germany), throughout several months of activity. The obtained results show different perspectives in customer behavior analysis that go one step beyond the most popular web analytics tools in the current market.

Journal ArticleDOI
01 Jun 2016
TL;DR: In this article, the authors present a survey on the communication aspect of interaction by reviewing languages, protocols, and architectures that drive today's standards and software implementations applicable in cloud computing.
Abstract: The evolution of Web and service technologies has led to a wide landscape of standards and protocols for interaction between loosely coupled software components. Examples range from Web applications, mashups, apps, and mobile devices to enterprise-grade services. Cloud computing is the industrialization of service provision and delivery, where Web and enterprise services are converging on a technological level. The article discusses this technological landscape and, in particular, current trends with respect to cloud computing. The survey focuses on the communication aspect of interaction by reviewing languages, protocols, and architectures that drive today's standards and software implementations applicable in clouds. Technological advances will affect both client side and service side. There is a trend toward multiplexing, multihoming, and encryption in upcoming transport mechanisms, especially for architectures, where a client simultaneously sends a large number of requests to some service. Furthermore, there are emerging client-to-client communication capabilities in Web clients that could establish a foundation for upcoming Web-based messaging architectures.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: By performing the evaluation of university websites, it is identified that there are major barriers to a large number of users and universities with high academic prestige do not show a greater level of web accessibility.
Abstract: This article describes a study to assess the accessibility of the content of the websites of 20 universities from all around the world. The accessibility assessment was carried out to verify compliance with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) published by the World Wide Web Consortium (W3C). The main goal of this study is to determine whether people with disabilities can access and use the websites of the universities with higher academic prestige. Besides, in our approach we also make use of the Website Accessibility Conformance Evaluation Methodology (WCAG-EM), an approach for determining how well a website conforms to WCAG 2.0. The WCAG-EM provides guidance on using the methodology and considerations for specific situations. From the results, we can conclude that the majority of the tested websites do not achieve an acceptable level of compliance. Universities with high academic prestige do not show a greater level of web accessibility. By performing the evaluation of university websites, we have identified that there are major barriers to a large number of users.

Journal ArticleDOI
TL;DR: The present scenario of web accessibility compliance in countries across the globe is studied, with special focus on India, and suggestions for improving the web page design of educational institutions are given.
Abstract: Preparing accessible websites was not considered as important earlier as it is today. Due to the increase in accessibility issues and the need to comply with accessibility norms, research on accessibility is growing. In this paper we have tried to analyze the accessibility of the top educational institutions of different countries. The main contributions of this paper are (1) studying the present scenario of web accessibility compliance in countries across the globe, with special focus on India; (2) analyzing selected websites of top universities and educational institutions of different countries; (3) generating results, based on the analysis, that indicate the web accessibility of the websites; and (4) giving suggestions for improving the web page design of the educational institutions.

Book ChapterDOI
29 May 2016
TL;DR: This paper presents an adaptive component-based approach together with its open source implementation for creating flexible and reusable SW interfaces driven by Linked Data.
Abstract: Due to the increasing amount of Linked Data openly published on the Web, user-facing Linked Data Applications (LDAs) are gaining momentum. One of the major entrance barriers for Web developers to contribute to this wave of LDAs is the required knowledge of Semantic Web (SW) technologies such as the RDF data model and SPARQL query language. This paper presents an adaptive component-based approach together with its open source implementation for creating flexible and reusable SW interfaces driven by Linked Data. Linked Data-driven (LD-R) Web components abstract the complexity of the underlying SW technologies in order to allow reuse of existing Web components in LDAs, enabling Web developers who are not experts in SW to develop interfaces that view, edit and browse Linked Data. In addition to the modularity provided by the LD-R components, the proposed RDF-based configuration method allows application assemblers to reshape their user interface for different use cases, by either reusing existing shared configurations or by creating their proprietary configurations.
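LD-R components are implemented in JavaScript/React and hide SPARQL from the developer; for contrast, the short sketch below shows the kind of raw SPARQL access they abstract away, querying the public DBpedia endpoint with SPARQLWrapper.

```python
# Raw SPARQL access against a public endpoint (the kind of plumbing LD-R hides).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Semantic_Web> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["label"]["value"])
```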

01 Apr 2016
TL;DR: Nonfiction texts: Arndt, I. 2014.
Abstract: Nonfiction texts:
Arndt, I. 2014. Best Foot Forward: Exploring feet, flippers, and claws. New York: Holiday House.
Belleranti, G. 2013. The big-eared, bushy-tailed fennec fox. http://www.superteacherworksheets.com/
Craighead George, J., & Washburn, L. 1998. Look to the North: A wolf pup diary. New York: HarperCollins.
Dutcher, J., & Dutcher, J. YR. The Hidden Lives of Wolves.
Jenkins, S. 2007. Living Color. Boston: Houghton Mifflin.
Jenkins, S. YR. What Do You Do When Something Wants to Eat You? Boston: Houghton Mifflin.
Jenkins, S., & Page, R. 2005. I See a Kookaburra! Boston: Houghton Mifflin.
Jenkins, S., & Page, R. 2003. What Do You Do With a Tail Like This? Boston: Houghton Mifflin.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: This paper addresses the problem of developing new methods for determining the development scenarios of an educational web forum, based on an analysis of web forum architectures and of the conceptual web forum framework.
Abstract: This paper addresses the current problem of developing new methods for determining the development scenarios of an educational web forum. The architecture of web-based forums and the conceptual web forum framework are analysed, and a conceptual web forum scheme is designed. Web forum content management systems are also analysed, along with the scenarios of educational web communities. On this basis, development scenarios of the educational web forum are determined, and the efficiency dynamics of the educational web forum are studied.

Proceedings Article
16 Mar 2016
TL;DR: Web scraper design principles and methods are contrasted, showing how a working scraper is designed and why Python is well suited for scraping data from websites.
Abstract: This paper discusses the world of web scraping. Web scraping is related to web indexing, whose task is to index information on the web with the help of a bot or web crawler. The legal aspects, both positive and negative, are considered, and some cases regarding legal issues are discussed. The design principles and methods of web scrapers are contrasted to show how a working scraper is designed. The implementation is divided into three parts: a web crawler to fetch the desired links, a data extractor to fetch the data from the links, and storage of that data into a CSV file. The Python language is used for the implementation. Combining these with good knowledge of the libraries and some working experience yields a fully fledged scraper. Due to its vast community, extensive library support, and clean coding style, Python is well suited for scraping data from websites.
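The paper's implementation is in Python; the sketch below mirrors its three parts (a crawler to fetch links, an extractor to pull fields, and CSV storage) using requests and BeautifulSoup. The start URL and the fields extracted are placeholders, not the paper's actual targets.

```python
# Minimal three-part scraper sketch: crawl links, extract data, write a CSV.
import csv
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

START_URL = "https://example.com/"        # placeholder site

# 1. Crawler: fetch the start page and collect a handful of absolute links
page = requests.get(START_URL, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")
links = [urljoin(START_URL, a["href"]) for a in soup.find_all("a", href=True)][:5]

# 2. Extractor: pull a couple of fields from each linked page
rows = []
for link in links:
    try:
        sub = BeautifulSoup(requests.get(link, timeout=10).text, "html.parser")
        title = sub.title.string.strip() if sub.title and sub.title.string else ""
        heading = sub.h1.get_text(strip=True) if sub.h1 else ""
        rows.append({"url": link, "title": title, "heading": heading})
    except requests.RequestException:
        continue                          # skip unreachable pages

# 3. Storage: dump the extracted records into a CSV file
with open("scraped.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "title", "heading"])
    writer.writeheader()
    writer.writerows(rows)
```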

Journal ArticleDOI
TL;DR: This column discusses the benefits of using Semantic Web technologies to get meaningful knowledge from sensor data.
Abstract: This column discusses the benefits of using Semantic Web technologies to get meaningful knowledge from sensor data.

Journal ArticleDOI
TL;DR: A platform is created, based on a novel approach, which allows a set of accessibility problems to be solved without modifying the original page code; a guided assistant is used to offer adequate solutions to each detected problem.

Journal ArticleDOI
TL;DR: Web 2.0 can have a profound impact on undergraduate students and lecturers in teaching and learning, and there is a need for more training to increase awareness of and familiarity with new Web 2.0 technologies.
Abstract: Background: Over the years, advancements in Internet technologies have led to the emergence of new technologies such as Web 2.0, which have taken various sectors including higher education by storm. Web 2.0 technologies are slowly but surely penetrating higher education in developing countries with much hype, according to the literature. This justifies the need for original research that aims at demystifying the application and exploiting the promises that come along with these so-called versatile technologies. Objectives: The specific objectives of the study were to ascertain students’ awareness of and familiarity with Web 2.0 technologies, to determine the purposes for which students use Web 2.0 technologies, and to identify the factors that affect students’ use or non-use of Web 2.0 technologies. Method: A mixed-methods approach was adopted. Firstly, a questionnaire was sent to 186 students; secondly, the curricula of the two departments in the Faculty of Information Science and Communication (ISC) were analysed; finally, follow-up interviews were conducted with seven lecturers in the Faculty of ISC. Results: The study found that students use Web 2.0 technologies to search for information, to communicate with lecturers, to submit assignments and to communicate with friends on academic work. Wikipedia, WhatsApp, Google Apps and YouTube are the Web 2.0 technologies most used by students. Poor bandwidth (Internet connection) coupled with the absence of Wi-Fi (wireless Internet connection) prevents the successful adoption of Web 2.0 by students. Conclusion: Web 2.0 can have a profound impact on undergraduate students and lecturers in teaching and learning. The research results indicated a high awareness of a wide range of Web 2.0 technologies, with social networks being the commonly used one. There is a need for more training to increase awareness of and familiarity with new Web 2.0 technologies. The problem of poor bandwidth needs to be addressed by the university management in order to gain significant benefits.

Journal ArticleDOI
TL;DR: This paper includes an analysis of web attacks from the Website Hacking Incident Database (WHID) and other information security and news websites, serving as a guide for developers to take appropriate preventive measures in the future.