
Showing papers on "Web standards" published in 2017


Proceedings ArticleDOI
14 Jun 2017
TL;DR: The motivation, design and formal semantics of WebAssembly are described, some preliminary experience with implementations is provided, and WebAssembly is shown to be an abstraction over modern hardware, making it language-, hardware-, and platform-independent, with use cases beyond just the Web.
Abstract: The maturation of the Web platform has given rise to sophisticated and demanding Web applications such as interactive 3D visualization, audio and video software, and games. With that, efficiency and security of code on the Web has become more important than ever. Yet JavaScript as the only built-in language of the Web is not well-equipped to meet these requirements, especially as a compilation target. Engineers from the four major browser vendors have risen to the challenge and collaboratively designed a portable low-level bytecode called WebAssembly. It offers compact representation, efficient validation and compilation, and safe low to no-overhead execution. Rather than committing to a specific programming model, WebAssembly is an abstraction over modern hardware, making it language-, hardware-, and platform-independent, with use cases beyond just the Web. WebAssembly has been designed with a formal semantics from the start. We describe the motivation, design and formal semantics of WebAssembly and provide some preliminary experience with implementations.
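WebAssembly is specified as a validated, stack-based bytecode. The following toy evaluator is an illustrative sketch only, not WebAssembly's real instruction set or semantics; it merely shows the flavor of executing a flat, postfix instruction sequence on an operand stack:

```python
# Toy stack-machine evaluator, illustrative only -- a vastly simplified
# analogue of WebAssembly's stack-based execution model, not its real
# instruction set or semantics.
def evaluate(instructions):
    stack = []
    for op, *args in instructions:
        if op == "const":        # push an immediate operand
            stack.append(args[0])
        elif op == "add":        # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":        # pop two operands, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# (2 + 3) * 4, expressed as a flat postfix instruction sequence
program = [("const", 2), ("const", 3), ("add",), ("const", 4), ("mul",)]
print(evaluate(program))  # [20]
```

The real specification additionally defines a type-checking validation pass over such sequences, which is what makes ahead-of-time compilation safe and fast.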

388 citations


Journal ArticleDOI
TL;DR: The results of this study confirm that new meta-heuristic algorithms have not yet been applied to QoS-aware web service composition; the paper also describes future research directions in this area.
Abstract: Web service composition concerns the building of new value-added services by integrating sets of existing web services. Due to the seamless proliferation of web services, it becomes difficult to find a suitable web service that satisfies the requirements of users during web service composition. This paper systematically reviews existing research on QoS-aware web service composition using computational intelligence techniques (published between 2005 and 2015). It develops a classification of research approaches to computational intelligence based QoS-aware web service composition and describes future research directions in this area. In particular, the results of this study confirm that new meta-heuristic algorithms have not yet been applied to solving QoS-aware web service composition.

168 citations


Proceedings ArticleDOI
06 Nov 2017
TL;DR: This work proposes the first end-to-end framework to build an NL2API for a given web API, and applies it to real-world APIs, and shows that it can collect high-quality training data at a low cost, and build NL2APIs with good performance from scratch.
Abstract: As the Web evolves towards a service-oriented architecture, application program interfaces (APIs) are becoming an increasingly important way to provide access to data, services, and devices. We study the problem of natural language interface to APIs (NL2APIs), with a focus on web APIs for web services. Such NL2APIs have many potential benefits, for example, facilitating the integration of web services into virtual assistants. We propose the first end-to-end framework to build an NL2API for a given web API. A key challenge is to collect training data, i.e., NL command-API call pairs, from which an NL2API can learn the semantic mapping from ambiguous, informal NL commands to formal API calls. We propose a novel approach to collect training data for NL2API via crowdsourcing, where crowd workers are employed to generate diversified NL commands. We optimize the crowdsourcing process to further reduce the cost. More specifically, we propose a novel hierarchical probabilistic model for the crowdsourcing process, which guides us to allocate budget to those API calls that have a high value for training NL2APIs. We apply our framework to real-world APIs, and show that it can collect high-quality training data at a low cost, and build NL2APIs with good performance from scratch. We also show that our modeling of the crowdsourcing process can improve its effectiveness, such that the training data collected via our approach leads to better performance of NL2APIs than a strong baseline.

66 citations


Journal ArticleDOI
TL;DR: Recommendations are made to help Web developers choose a framework for a real-world Web project, using various criteria to compare the performance and effectiveness of frameworks on the same task.

61 citations


Journal ArticleDOI
TL;DR: This paper reviews the developments of web mapping from the first static online map images to the current highly interactive, multi-sourced web mapping services that have been increasingly moved to cloud computing platforms.
Abstract: Web mapping and the use of geospatial information online have evolved rapidly over the past few decades. Almost everyone in the world uses mapping information, whether or not one realizes it. Almost every mobile phone now has location services and every event and object on the earth has a location. The use of this geospatial location data has expanded rapidly, thanks to the development of the Internet. Huge volumes of geospatial data are available and daily being captured online, and are used in web applications and maps for viewing, analysis, modeling and simulation. This paper reviews the developments of web mapping from the first static online map images to the current highly interactive, multi-sourced web mapping services that have been increasingly moved to cloud computing platforms. The whole environment of web mapping captures the integration and interaction between three components found online, namely, geospatial information, people and functionality. In this paper, the trends and interactions among these components are identified and reviewed in relation to the technology developments. The review then concludes by exploring some of the opportunities and directions.

60 citations


Journal ArticleDOI
TL;DR: The results show that in recent years there was significant growth in initiatives, in countries hosting these initiatives, and in the volume of data and number of contents preserved, which indicates that the web archiving community is dedicating a growing effort to preserving digital information.
Abstract: Web archives preserve information published on the web or digitized from printed publications. Much of this information is unique and historically valuable. However, the lack of knowledge about the global status of web archiving initiatives hampers their improvement and collaboration. To overcome this problem, we conducted two surveys, in 2010 and 2014, which provide a comprehensive characterization of web archiving initiatives and their evolution. We identified several patterns and trends that highlight challenges and opportunities. We discuss these patterns and trends, which make it possible to define strategies, estimate resources and provide guidelines for the research and development of better technology. Our results show that in recent years there was significant growth in initiatives, in countries hosting these initiatives, and in the volume of data and number of contents preserved. While this indicates that the web archiving community is dedicating a growing effort to preserving digital information, other results presented throughout the paper raise concerns, such as the small amount of archived data in comparison with the amount of data being published online.

56 citations


Journal ArticleDOI
TL;DR: A new QoS-aware Web service recommendation system that clusters services based on their contextual feature similarities and utilizes an improved matrix factorization method to recommend services to users.
Abstract: Quality of service (QoS) has been playing an increasingly important role in today’s Web service environment. Many techniques have been proposed to recommend personalized Web services to customers. However, existing methods only utilize the QoS information at the client-side and neglect the contextual characteristics of the service. Based on the fact that the quality of Web service is affected by its context feature, this paper proposes a new QoS-aware Web service recommendation system, which considers the contextual feature similarities of different services. The proposed system first extracts the contextual properties from WSDL files to cluster Web services based on their feature similarities, and then utilizes an improved matrix factorization method to recommend services to users. The proposed framework is validated on a real-world dataset consisting of over 1.5 million Web service invocation records from 5825 Web services and 339 users. The experimental results prove the efficiency and accuracy of the proposed method.
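The core of such a recommender is factorizing the sparse user x service QoS matrix into low-rank latent factors. A minimal sketch with plain SGD follows; the data is hypothetical, and the paper's improved method additionally exploits contextual service clusters extracted from WSDL files, which this sketch omits:

```python
import random

# Minimal matrix-factorization sketch (plain SGD) on a tiny, hypothetical
# user x service QoS matrix; 0 marks a missing observation.
def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    rng = random.Random(seed)
    n_users, n_items = len(R), len(R[0])
    P = [[rng.random() for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.random() for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u in range(n_users):
            for i in range(n_items):
                if R[u][i] == 0:          # skip unobserved QoS entries
                    continue
                pred = sum(P[u][f] * Q[i][f] for f in range(k))
                err = R[u][i] - pred
                for f in range(k):        # gradient step with L2 regularization
                    pu, qi = P[u][f], Q[i][f]
                    P[u][f] += lr * (err * qi - reg * pu)
                    Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

R = [[5, 3, 0], [4, 0, 1], [0, 2, 5]]     # toy QoS observations
P, Q = factorize(R)
# Predicted QoS for user 0 on the unobserved service 2:
pred = sum(P[0][f] * Q[2][f] for f in range(2))
```

The predicted values for the unobserved cells are what drives the recommendation ranking.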

45 citations


Journal ArticleDOI
TL;DR: A data set of implicit feedback on real-world Web services, consisting of more than 280,000 user-service interaction records, 65,000 service users and 15,000 Web services or mashups, is reported, and a time-aware service recommendation approach is proposed.
Abstract: An increasing number of Web services have been published on the Internet over the past decade due to the rapid development and adoption of the SOA (Service-Oriented Architecture) standard. However, in the current state of the Web, recommending suitable Web services to users becomes a challenge due to the huge divergence in published content. Existing Web service recommendation approaches based on collaborative filtering mainly aim at QoS (Quality of Service) prediction. Recommending services based on users' ratings of services is seldom reported due to the difficulty of collecting such explicit feedback. In this paper, we report a data set of implicit feedback on real-world Web services, consisting of more than 280,000 user-service interaction records, 65,000 service users and 15,000 Web services or mashups. Temporal information is becoming an increasingly important factor in service recommendation, since time effects may influence users' preferences on services to a large extent. Based on the collected data set, we propose a time-aware service recommendation approach. Temporal information is sufficiently considered in our approach, where three time effects are analyzed and modeled: user bias shifting, Web service bias shifting, and user preference shifting. Experimental results show that the proposed approach outperforms seven existing collaborative filtering approaches in prediction accuracy.

43 citations


Journal ArticleDOI
TL;DR: Very few ministry Web sites of the four countries achieved AA conformance level on accessibility, and many failed to pass conformance level A and AA checkpoints, suggesting that the countries in this study need to put more emphasis on designing government Web sites to be more accessible.
Abstract: Government Web sites aim to provide information to the citizens of the country; therefore, they should be accessible, easy to use and visible via search engines. Based on this assumption, in this paper, the ministry Web sites of four countries namely the Kyrgyz Republic, the Republic of Azerbaijan, the Republic of Kazakhstan and the Republic of Turkey were analyzed in terms of accessibility and quality in use. Tests were carried out utilizing online automated tools. Results indicate that the usage rate of Information and Communication Technologies by the government is higher in Turkey, which affects the visibility of government Web sites but not their quality in use. Very few ministry Web sites of the four countries achieved AA conformance level on accessibility, many failed to pass conformance level A and AA checkpoints for accessibility errors. In order to ensure equal access to all their citizens, the countries in this study need to put more emphasis on designing government Web sites to be more accessible.

42 citations


Journal ArticleDOI
TL;DR: A new full-scope Web accessibility and usability evaluation procedure is presented, which aims to give both organisations and Web developers a basis for understanding how to perform an adequate assessment of their websites.

40 citations


Journal ArticleDOI
TL;DR: The results, supported by an independent t test, indicate that most of the issues of the Web sites tested are not of a technical nature and occur mainly due to human factors in Web application development.
Abstract: Today the Internet is the easiest way to find information about any kind of organization, and the first impression about an organization is almost always based on its Web site. This study investigated whether the Web sites of the universities in the Kyrgyz Republic comply with prevailing standards of accessibility and usability and whether these qualities depend on location and type of ownership of the universities. The analysis was conducted using online evaluation tools. Based on the data collected, the hypotheses were further tested using the SPSS statistical package. The results show a low usability rating for the vast majority of the universities’ Web sites. For 90.47 % of the Web sites upload time exceeds 30 s; 52.38 % of the Web sites have broken links; and 100 % have browser compatibility problems. The results of accessibility tests show low compliance with W3C-WCAG 1.0: error rates for Priority 1, 2, and 3 checkpoints of 83.33, 92.85, and 95.24 %, respectively. The results obtained and the results of an independent t test indicate that most of the issues of all Web sites tested are not of a technical nature, and occur mainly due to human factors related to Web application development.
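Automated tools of the kind used in this study run many individual WCAG checkpoint tests. The mechanics of a single Priority 1 checkpoint (text alternatives for images) can be sketched with the standard library alone; the sample page and class name are hypothetical:

```python
from html.parser import HTMLParser

# Sketch of one automated WCAG checkpoint (Priority 1: text alternatives
# for images): flag every <img> tag without an alt attribute. Real
# evaluation tools run many such checkpoints; this stdlib-only parser
# illustrates the mechanics of a single one.
class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []          # src values of offending images

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.violations.append(attrs.get("src", "<unknown>"))

page = '<html><body><img src="logo.png"><img src="x.png" alt="Chart"></body></html>'
checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # ['logo.png']
```

Aggregating the violation counts of such checkpoints across a site yields the Priority 1/2/3 error rates reported in the study.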

Journal ArticleDOI
TL;DR: A linear-temporal logic model checking approach for the analysis of structured e-commerce Web logs is proposed that can be easily converted into event logs where the behavior of users is captured.
Abstract: Online shopping is becoming more and more common in our daily lives. Understanding users’ interests and behavior is essential to adapt e-commerce websites to customers’ requirements. The information about users’ behavior is stored in the Web server logs. The analysis of such information has focused on applying data mining techniques, where a rather static characterization is used to model users’ behavior, and the sequence of the actions performed by them is not usually considered. Therefore, incorporating a view of the process followed by users during a session can be of great interest to identify more complex behavioral patterns. To address this issue, this paper proposes a linear-temporal logic model checking approach for the analysis of structured e-commerce Web logs. By defining a common way of mapping log records according to the e-commerce structure, Web logs can be easily converted into event logs where the behavior of users is captured. Then, different predefined queries can be performed to identify different behavioral patterns that consider the different actions performed by a user during a session. Finally, the usefulness of the proposed approach has been studied by applying it to a real case study of a Spanish e-commerce website. The results have identified interesting findings that have made it possible to propose some improvements in the website design with the aim of increasing its efficiency.
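A typical query in this setting is an LTL "response" property such as G(add_to_cart -> F(checkout or remove_item)): every add-to-cart is eventually followed by a checkout or a removal. The paper uses a real model checker; the hand-rolled check below only illustrates the kind of per-session behavioral query involved, with hypothetical event names:

```python
# Sketch of checking an LTL-style "response" property over per-session
# event logs: G(trigger -> F(response)). A session violates the property
# if some trigger event is never answered by a response event.
def holds_response(session, trigger, responses):
    pending = False
    for event in session:
        if event == trigger:
            pending = True              # a trigger now awaits a response
        elif event in responses:
            pending = False             # the pending trigger is answered
    return not pending

sessions = {
    "s1": ["view", "add_to_cart", "view", "checkout"],
    "s2": ["view", "add_to_cart", "view"],        # abandoned cart
}
violations = [sid for sid, evts in sessions.items()
              if not holds_response(evts, "add_to_cart", {"checkout", "remove_item"})]
print(violations)  # ['s2']
```

Sessions in the violation set correspond to behavioral patterns (here, cart abandonment) that the site design improvements target.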

Journal ArticleDOI
TL;DR: This article surveys the most common attacks against web sessions, that is, attacks that target honest web browser users establishing an authenticated session with a trusted web application, and identifies five guidelines that have been taken into account by the designers of the different proposals.
Abstract: In this article, we survey the most common attacks against web sessions, that is, attacks that target honest web browser users establishing an authenticated session with a trusted web application. We then review existing security solutions that prevent or mitigate the different attacks by evaluating them along four different axes: protection, usability, compatibility, and ease of deployment. We also assess several defensive solutions that aim at providing robust safeguards against multiple attacks. Based on this survey, we identify five guidelines that, to different extents, have been taken into account by the designers of the different proposals we reviewed. We believe that these guidelines can be helpful for the development of innovative solutions approaching web security in a more systematic and comprehensive way.

Journal ArticleDOI
TL;DR: This adoption model aims to help local governments identify the factors influencing the actual adoption and implementation of web accessibility standards in their situation, and explains how factors in the different categories contribute to that adoption and implementation.
Abstract: Local government organizations such as municipalities often seem unable to fully adopt or implement web accessibility standards even if they are actively pursuing it. Based on existing adoption models, this study identifies factors in five categories that influence the adoption and implementation of accessibility standards for local government websites. Awareness of these factors is important for stakeholders adopting and implementing web accessibility standards. To further develop and understand these factors, this study has identified and interviewed experts in the field of (organizational) accessibility. This has led to an extension of the existing models. The extended model was then validated by interviews with key stakeholders. The outcome of this study places existing adoption models in a new context. The result is an adoption model that contributes better to explaining adoption and implementation processes within eGovernment systems and organizations. This adoption model aims to better help local governments in the identification of factors influencing the actual adoption and implementation of web accessibility standards in their situation. The model explains how factors in the different categories contribute to the adoption and implementation of web accessibility standards. The model may also be applicable to the adoption and implementation of other guidelines and (open) standards within local government.

Book
22 Feb 2017
TL;DR: Managing the Web of Things: Linking the Real World to the Web presents a consolidated and holistic coverage of engineering, management, and analytics of the Internet of Things, ranging from modeling, searching, and data analytics, to software building, applications, and social impact.
Abstract: Managing the Web of Things: Linking the Real World to the Web presents a consolidated and holistic coverage of engineering, management, and analytics of the Internet of Things. The web has gone through many transformations, from traditional linking and sharing of computers and documents (i.e., Web of Data), to the current connection of people (i.e., Web of People), and to the emerging connection of billions of physical objects (i.e., Web of Things). With increasing numbers of electronic devices and systems providing different services to people, Web of Things applications present numerous challenges to research institutions, companies, governments, international organizations, and others. This book compiles the newest developments and advances in the area of the Web of Things, ranging from modeling, searching, and data analytics, to software building, applications, and social impact. Its coverage will enable effective exploration, understanding, assessment, comparison, and the selection of WoT models, languages, techniques, platforms, and tools. Readers will gain an up-to-date understanding of the Web of Things systems that accelerates their research.

- Offers a comprehensive and systematic presentation of the methodologies, technologies, and applications that enable efficient and effective management of the Internet of Things
- Provides an in-depth analysis of the state-of-the-art Web of Things modeling and searching technologies, including how to collect, clean, and analyze data generated by the Web of Things
- Covers system design and software building principles, with discussions and explorations of social impact for the Web of Things through real-world applications
- Acts as an ideal reference or recommended text for graduate courses in cloud computing, service computing, and more

Journal ArticleDOI
TL;DR: A novel Web service discovery approach based on topic models that maintains service discovery performance while greatly decreasing the number of candidate Web services, leading to faster response time.
Abstract: Web services have attracted much attention from distributed application designers and developers because of their roles in abstraction and interoperability among heterogeneous software systems, and a growing number of distributed software applications have been published as Web services on the Internet. Faced with the increasing numbers of Web services and service users, researchers in the services computing field have attempted to address a challenging issue, i.e., how to quickly find suitable services according to user queries. Many previous studies have been reported in this direction. In this paper, a novel Web service discovery approach based on topic models is presented. The proposed approach mines common topic groups from the service-topic distribution matrix generated by topic modeling, and the extracted common topic groups can then be leveraged to match user queries to relevant Web services, so as to make a better trade-off between the accuracy of service discovery and the number of candidate Web services. Experiment results conducted on two publicly-available data sets demonstrate that, compared with several widely used approaches, the proposed approach can maintain service discovery performance while greatly decreasing the number of candidate Web services, thus leading to faster response time.
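The underlying matching step can be sketched as follows: services and queries are represented as distributions over latent topics (hard-coded and hypothetical here; a real system would infer them with LDA or a similar topic model), candidates are pre-filtered by shared dominant topic, and survivors are ranked by cosine similarity:

```python
import math

# Sketch of topic-based service discovery: pre-filter by dominant topic
# (the "common topic group" idea, simplified), then rank candidates by
# cosine similarity of topic distributions. Data is hypothetical.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

services = {                       # service -> topic distribution
    "WeatherSvc":  [0.8, 0.1, 0.1],
    "PaymentSvc":  [0.1, 0.8, 0.1],
    "ForecastSvc": [0.6, 0.3, 0.1],
}
query = [0.75, 0.15, 0.10]         # inferred topic mix of the user query

dominant = max(range(len(query)), key=query.__getitem__)
candidates = {s: d for s, d in services.items()
              if max(range(len(d)), key=d.__getitem__) == dominant}
ranked = sorted(candidates, key=lambda s: cosine(query, candidates[s]), reverse=True)
print(ranked)  # ['WeatherSvc', 'ForecastSvc']
```

The pre-filtering step is what shrinks the candidate set and hence the response time, at a small cost in recall.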

Journal ArticleDOI
TL;DR: It is demonstrated that accessibility levels have in general actually decreased slightly, with each of the university Web sites reviewed containing at least one of a variety of components that makes it inaccessible to some users.
Abstract: In 2010, the author of this paper conducted an evaluation of the accessibility level of the home pages of Turkish Universities (Kurt in Univers Access Inf Soc 10(1):101-110, 2011). That investigation, which utilized a variety of different evaluative techniques, as recommended by the World Wide Web Consortium, found that none of the reviewed home pages met the minimum criteria for Web accessibility. In 2015, the author completed a follow-up audit of the same universities' home pages, using a similar methodological approach. The goal of the audit was to determine whether Web site accessibility had increased or improved during the intervening 5-year period. This paper, which details the results of the second study, demonstrates that in general accessibility levels have actually decreased slightly. Each of the university Web sites reviewed contains at least one of a variety of components that makes it inaccessible to some users. Of these, the most prominent is neglecting to provide equivalent text alternatives for content that has been presented in non-text formats, although doing so would be a relatively simple matter.

Journal ArticleDOI
TL;DR: This paper reviews some state-of-the-art web technologies, third-party libraries, and frameworks for rapid interactive web development, and presents a simple interactive, browser-based, mobile-friendly web application developed using one of the latest web development frameworks.

Proceedings ArticleDOI
25 Feb 2017
TL;DR: This work conducted semi-structured interviews with archivists and technologists and identified thematic areas that inform the appraisal process in web archives, some of which are encoded in heuristics and algorithms.
Abstract: The field of web archiving provides a unique mix of human and automated agents collaborating to achieve the preservation of the web. Centuries old theories of archival appraisal are being transplanted into the sociotechnical environment of the World Wide Web with varying degrees of success. The work of the archivist and bots in contact with the material of the web present a distinctive and understudied CSCW shaped problem. To investigate this space we conducted semi-structured interviews with archivists and technologists who were directly involved in the selection of content from the web for archives. These semi-structured interviews identified thematic areas that inform the appraisal process in web archives, some of which are encoded in heuristics and algorithms. Making the infrastructure of web archives legible to the archivist, the automated agents and the future researcher is presented as a challenge to the CSCW and archival community.

Proceedings ArticleDOI
02 Nov 2017
TL;DR: The ease with which the library can be integrated in an already existing web application is presented, some of the visualization perspectives that the library provides are discussed and some future challenges for similar libraries are pointed to.
Abstract: Tens of thousands of web applications are written in Flask, a Python-based web framework. Despite a rich ecosystem of extensions, there is none that supports the developer in gaining insight into the evolving performance of their service. In this paper, we introduce Flask Dashboard, a library that addresses this problem. We present the ease with which the library can be integrated in an already existing web application, discuss some of the visualization perspectives that the library provides and point to some future challenges for similar libraries.
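Flask Dashboard hooks into an existing Flask application to track per-endpoint performance over time. The core measurement idea can be sketched in plain Python with no Flask dependency; the names `TIMINGS` and `monitored` are hypothetical, not the library's API:

```python
import time
from collections import defaultdict

# Minimal sketch of the measurement layer behind an endpoint-performance
# dashboard: wrap each handler and record its execution times so that
# evolving latency can be visualized later. The real Flask Dashboard
# library hooks into Flask's request machinery instead.
TIMINGS = defaultdict(list)   # endpoint name -> list of durations (seconds)

def monitored(name):
    def decorator(handler):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return handler(*args, **kwargs)
            finally:
                TIMINGS[name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@monitored("index")
def index():
    return "hello"

index()
index()
avg = sum(TIMINGS["index"]) / len(TIMINGS["index"])  # mean latency in seconds
```

A dashboard view would then aggregate and plot these per-endpoint series, which is the insight the library provides out of the box.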

Book ChapterDOI
01 Jan 2017
TL;DR: This chapter examines how an open-source, modular, multimodal dialog system—HALEF—can be seamlessly assembled, much like a jigsaw puzzle, by putting together multiple distributed components that are compliant with the W3C recommendations or other open industry standards.
Abstract: As dialog systems become increasingly multimodal and distributed in nature with advances in technology and computing power, they become that much more complicated to design and implement. However, open industry and W3C standards provide a silver lining here, allowing the distributed design of different components that are nonetheless compliant with each other. In this chapter we examine how an open-source, modular, multimodal dialog system—HALEF—can be seamlessly assembled, much like a jigsaw puzzle, by putting together multiple distributed components that are compliant with the W3C recommendations or other open industry standards. We highlight the specific standards that HALEF currently uses along with a perspective on other useful standards that could be included in the future. HALEF has an open codebase to encourage progressive community contribution and a common standard testbed for multimodal dialog system development and benchmarking.

Journal ArticleDOI
TL;DR: This paper proposes a novel Web services discovery approach, which can mine the underlying semantic structures of interaction interface parameters to help users find and employ Web services, and can match interfaces with high precision when the parameters of those interfaces contain meaningful synonyms, abbreviations, and combinations of disordered fragments.
Abstract: In recent years, Web service discovery has been a hot research topic. In this paper, we propose a novel Web services discovery approach, which can mine the underlying semantic structures of interaction interface parameters to help users find and employ Web services, and can match interfaces with high precision when the parameters of those interfaces contain meaningful synonyms, abbreviations, and combinations of disordered fragments. Our approach is based on mining the underlying semantics. First, we propose a conceptual Web services description model in which we include the type path for the interaction interface parameters in addition to the traditional text description. Then, based on this description model, we mine the underlying semantics of the interaction interface to create index libraries by clustering interaction interface names and fragments under the supervision of co-occurrence probability. This index library can help provide a high-efficiency interface that can match not only synonyms but also abbreviations and fragment combinations. Finally, we propose a Web service Operations Discovery algorithm (OpD). The OpD discovery results include two types of Web services: services with “Single” operations and services with “Composite” operations. The experimental evaluation shows that our approach performs better than other Web service discovery methods in terms of both discovery time and precision/recall rate.
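The interface-matching idea can be sketched on parameter names alone: split each name into fragments on camelCase and underscores, normalize abbreviations and synonyms, and match regardless of fragment order. The tables below are tiny hand-made stand-ins for what the paper mines from co-occurrence statistics:

```python
import re

# Sketch of fragment-based interface-parameter matching. Abbreviation
# and synonym tables are hypothetical; the paper's approach learns
# such correspondences under the supervision of co-occurrence
# probability rather than hard-coding them.
ABBREV = {"num": "number", "addr": "address", "id": "identifier"}
SYNONYM = {"zipcode": "postcode", "cellphone": "phone"}

def fragments(name):
    # Split camelCase / snake_case names into word fragments.
    parts = re.findall(r"[A-Za-z][a-z]*|\d+", name)
    norm = set()
    for p in parts:
        p = p.lower()
        p = ABBREV.get(p, p)    # expand abbreviations
        p = SYNONYM.get(p, p)   # map synonyms to a canonical form
        norm.add(p)
    return norm

def match(a, b):
    # Parameters match when their normalized fragment sets agree,
    # regardless of fragment order.
    return fragments(a) == fragments(b)

print(match("customerAddrNum", "number_address_customer"))  # True
```

This is what lets the index match not only synonyms but also abbreviations and combinations of disordered fragments.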

Journal ArticleDOI
TL;DR: It is argued that social technologies are valuable tools in language classrooms but entail challenges regarding their theoretical and pedagogical alignment; reported advantages and challenges in harnessing Web 2.0 tools are also surveyed.
Abstract: This study explores the research development pertaining to the use of Web 2.0 technologies in the field of Computer-Assisted Language Learning (CALL). Published research manuscripts related to the use of Web 2.0 tools in CALL have been explored, and the following research foci have been determined: (1) Web 2.0 tools that dominate the second/foreign language classroom; (2) learning/Second Language Acquisition theories that guide their use; (3) skills that Web 2.0 technologies support; (4) reported advantages and challenges in harnessing Web 2.0 tools; and (5) task design considerations. Findings of this study delineate how Web 2.0 tools are utilized in CALL and highlight the Web 2.0 features employed for different types of pedagogical activities. This paper argues that social technologies are valuable tools in language classrooms but entail challenges regarding their theoretical and pedagogical alignment. The study concludes with some discussion and implications for instructional designers and practitioners.

Journal ArticleDOI
TL;DR: This survey collects, classifies, and reviews existing proposals in the area of formal methods for web security, spanning many different topics: JavaScript security, browser security, web application security, and web protocol analysis.

Proceedings ArticleDOI
25 Jun 2017
TL;DR: The concept of web archival labour is proposed to encompass and highlight the ways in which web archivists shape and maintain the preserved Web through work that is often embedded in and obscured by the complex technical arrangements of collection and access.
Abstract: This paper makes the case for studying the work of web archivists, in an effort to explore the ways in which practitioners shape the preservation and maintenance of the archived Web in its various forms. An ethnographic approach is taken through the use of observation, interviews and documentary sources over the course of several weeks in collaboration with web archivists, engineers and managers at the Internet Archive - a private, non-profit digital library that has been archiving the Web since 1996. The concept of web archival labour is proposed to encompass and highlight the ways in which web archivists (as both networked human and non-human agents) shape and maintain the preserved Web through work that is often embedded in and obscured by the complex technical arrangements of collection and access. As a result, this engagement positions web archives as places of knowledge and cultural production in their own right, revealing new insights into the performative nature of web archiving that have implications for how these data are used and understood.

Journal ArticleDOI
TL;DR: It is established that even in the case of high-priority, simple-to-address accessibility requirements, colleges and universities generally fail to make their sites accessible.
Abstract: This study seeks to evaluate the basic Priority 1 web accessibility of all college and university websites in the US (n = 3141). Utilizing web scraping and automated content analysis, the study establishes that even in the case of high-priority, simple-to-address accessibility requirements, colleges and universities generally fail to make their sites accessible. Results should be used to determine reasonable and simple steps for moving toward accessible design in institutional websites, which is necessary to ensure that institutional resources can be open and usable by all.

Journal ArticleDOI
Xiaogang Ma1
TL;DR: A pilot study that uses a domain-specific knowledge base and data visualization techniques to leverage the functionality of geoscience data services in the Web of Data, raising the functionalities of existing standards to a new level for geoscience applications.
Abstract: The geoscience community is now facing both the challenge and the opportunity caused by the vast amount of datasets that can be made available on the Web. An efficient "data environment" on the Web has the potential to enable geoscientists to conduct their research in ways that never existed before. Standards developed by the Open Geospatial Consortium have already been used widely to build data services among different subjects in geosciences. In recent years, the Linked Open Data approach initiated by the World Wide Web Consortium has received increasing attention. In this paper, the author presents a pilot study that uses a domain-specific knowledge base and data visualization techniques to leverage the functionality of geoscience data services in the Web of Data. The study focuses on the topic of the geologic time scale. Detailed works such as semantic modeling and encoding, multilingual vocabularies, exploratory data visualization, web map service and processing, and the query of linked data are introduced through real-world datasets. This study takes a broad perspective on Linked Geoscience Data and raises the functionalities of existing standards to a new level for geoscience applications.

Proceedings ArticleDOI
05 Jun 2017
TL;DR: This paper explores the maturity of modern 3D web technologies in participatory urban planning through two real-world case studies and reports qualitative feedback from users and technical analysis of the applications in terms of download sizes, runtime performance and memory use.
Abstract: 3D Web is a potential platform for publishing and distributing 3D visualizations that have proven useful in enabling the participation of the general public in urban planning. However, technical requirements imposed by detailed and rich real-world plans and related functionalities are demanding for 3D web technologies. In this paper we explore the maturity of modern 3D web technologies in participatory urban planning through two real-world case studies. Applications built on Unity-based platform are published on the web to allow the general public to create, browse and comment on urban plans. The virtual models of seven urban development sites of different visual styles are optimized in terms of download sizes and memory use to be feasible on browsers used by the general public. We report qualitative feedback from users and present a technical analysis of the applications in terms of download sizes, runtime performance and memory use. We summarize the findings of the case studies into an assessment of the general feasibility of modern 3D web technologies in web-based urban planning.