Author

Wentong Cai

Bio: Wentong Cai is an academic researcher at Nanyang Technological University. He has contributed to research on topics including Grid computing and the High Level Architecture, has an h-index of 33, and has co-authored 367 publications receiving 5,088 citations. His previous affiliations include the Hebrew University of Jerusalem and the Technische Universität München.


Papers
Journal Article
TL;DR: Reports an innovative Transparent Adaptation (TA) approach and associated supporting techniques for converting existing and new single-user applications into collaborative ones without changing the source code of the original application.
Abstract: Single-user interactive computer applications are pervasive in our daily lives and work. Leveraging single-user applications for supporting multi-user collaboration has the potential to significantly increase the availability and improve the usability of collaborative applications. In this article, we report an innovative Transparent Adaptation (TA) approach and associated supporting techniques that can be used to convert existing and new single-user applications into collaborative ones, without changing the source code of the original application. The cornerstone of the TA approach is the operational transformation (OT) technique and the method of adapting the single-user application programming interface to the data and operation models of OT. This approach and supporting techniques were developed and tested in the process of transparently converting two commercial off-the-shelf single-user applications (Microsoft Word and PowerPoint) into real-time collaborative applications, called CoWord and CoPowerPoint, respectively. CoWord and CoPowerPoint not only retain the functionalities and “look-and-feel” of their single-user counterparts, but also provide advanced multi-user collaboration capabilities for supporting multiple interaction paradigms, ranging from concurrent and free interaction to sequential and synchronized interaction, and for supporting detailed workspace awareness, including multi-user telepointers and radar views. The TA approach and generic collaboration engine software component developed from this work are potentially applicable and reusable in adapting a wide range of single-user applications.

198 citations
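To make the operational transformation (OT) technique at the heart of the TA approach concrete, here is a minimal sketch of transforming two concurrent character inserts so that both sites converge. It is an illustration only, assuming an insert-only operation model on a plain string; the function names are invented, and the real CoWord/CoPowerPoint engine is far more general.

def transform_insert(op_a, op_b):
    # Shift op_a's position if op_b inserted at or before it. Real OT
    # engines also break position ties using site identifiers; without
    # that, two inserts at the same position can still diverge.
    pos_a, ch_a = op_a
    pos_b, _ch_b = op_b
    if pos_b <= pos_a:
        return (pos_a + 1, ch_a)
    return op_a

def apply_insert(text, op):
    pos, ch = op
    return text[:pos] + ch + text[pos:]

# Two sites start from the same document and issue concurrent inserts.
doc = "abc"
op1 = (1, "X")    # site 1 inserts "X" at index 1
op2 = (2, "Y")    # site 2 inserts "Y" at index 2

# Each site applies its own operation first, then the other site's
# operation transformed against it; both converge on the same text.
site1 = apply_insert(apply_insert(doc, op1), transform_insert(op2, op1))
site2 = apply_insert(apply_insert(doc, op2), transform_insert(op1, op2))
assert site1 == site2 == "aXbYc"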

Journal Article
TL;DR: A two-dimensional categorization mechanism is proposed to classify existing work by the size of the crowd and the time-scale of the crowd phenomena of interest, and four evaluation criteria are introduced to evaluate existing crowd simulation systems.
Abstract: As a collective and highly dynamic social group, the human crowd is a fascinating phenomenon that has been frequently studied by experts from various areas. Recently, computer-based modeling and simulation technologies have emerged to support investigation of the dynamics of crowds, such as a crowd's behaviors under normal and emergent situations. This article assesses the major existing technologies for crowd modeling and simulation. We first propose a two-dimensional categorization mechanism to classify existing work by the size of the crowd and the time-scale of the crowd phenomena of interest. Four evaluation criteria are also introduced to evaluate existing crowd simulation systems from the point of view of both a modeler and an end-user. We discuss some influential existing work in crowd modeling and simulation with regard to its major features, its performance, and the technologies it uses, and we also discuss some open problems in the area. This article provides researchers with useful information and insights on the state of the art of crowd modeling and simulation technologies, as well as on future research directions.

177 citations
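Among the microscopic approaches such a survey covers are force-based pedestrian models. As a purely illustrative sketch (not a system assessed in the article; all constants are made up), a social-force-style update drives each agent toward its goal while repelling it from nearby agents:

import numpy as np

TAU = 0.5         # relaxation time toward the desired velocity (s)
A, B = 2.0, 0.5   # repulsion strength and range (made-up constants)
DT = 0.1          # time step (s)

def step(pos, vel, goals, v_des=1.4):
    """One social-force-style update for n agents.
    pos, vel, goals: (n, 2) arrays of positions, velocities, goals."""
    force = np.zeros_like(pos)
    for i in range(len(pos)):
        d = goals[i] - pos[i]
        e = d / (np.linalg.norm(d) + 1e-9)          # unit vector toward goal
        force[i] += (v_des * e - vel[i]) / TAU      # goal attraction
        for j in range(len(pos)):
            if i == j:
                continue
            r = pos[i] - pos[j]
            dist = np.linalg.norm(r) + 1e-9
            force[i] += A * np.exp(-dist / B) * (r / dist)  # repulsion
    vel = vel + DT * force
    return pos + DT * vel, vel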

Proceedings Article
01 May 1999
TL;DR: The results show that the proposed auto-adaptive dead reckoning algorithm can achieve considerable reduction in update packets without sacrificing accuracy in extrapolation.
Abstract: This paper describes a new, auto-adaptive algorithm for dead reckoning in Distributed Interactive Simulation (DIS). In general, dead-reckoning algorithms use a fixed threshold to control extrapolation errors. Since a fixed threshold cannot adequately handle the dynamic relationships between moving entities, a multi-level threshold scheme is proposed. The threshold levels are defined using the concepts of area of interest (AOI) and sensitive region (SR), and the levels are adaptively adjusted based on the relative distance between entities during the simulation. Various experiments were conducted. The results show that the proposed auto-adaptive dead reckoning algorithm achieves a considerable reduction in update packets without sacrificing extrapolation accuracy.

133 citations
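A minimal sketch of the multi-level threshold idea, assuming just three distance bands derived from the sensitive region (SR) and area of interest (AOI); the radii and threshold values are invented for illustration, not taken from the paper's experiments:

import math

AOI_RADIUS = 100.0   # area of interest radius (m) -- illustrative
SR_RADIUS = 20.0     # sensitive region radius (m) -- illustrative

def threshold(dist_to_observer):
    # The closer the nearest observer, the tighter the error threshold.
    if dist_to_observer <= SR_RADIUS:
        return 0.1    # inside the sensitive region: errors very visible
    if dist_to_observer <= AOI_RADIUS:
        return 1.0    # inside the area of interest: moderately visible
    return 5.0        # outside the AOI: errors barely matter

def needs_update(true_pos, dr_pos, dist_to_observer):
    # An entity broadcasts a state update only when its dead-reckoned
    # position has drifted past the distance-dependent threshold.
    return math.dist(true_pos, dr_pos) > threshold(dist_to_observer)

# Example: 0.5 m of drift triggers an update only for a close observer.
print(needs_update((0, 0), (0.5, 0), dist_to_observer=10))   # True
print(needs_update((0, 0), (0.5, 0), dist_to_observer=50))   # False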

Journal Article
TL;DR: This work clearly shows how the characteristic parameters of a DVE interrelate in determining time-space inconsistency, so that designers may fine-tune the DVE to make it as consistent as possible.
Abstract: Maintaining a consistent view of the simulated world among different simulation nodes is a fundamental problem in large-scale distributed virtual environments (DVEs). In this paper, we characterize this problem by quantifying the time-space inconsistency in a DVE. To this end, a metric is defined to measure the time-space inconsistency in a DVE. One major advantage of the metric is that it may be estimated based on some characteristic parameters of a DVE, such as clock asynchrony, message transmission delay, the accuracy of the dead reckoning algorithm, the kinetics of the moving entity, and human factors. Thus the metric can be used to evaluate the time-space consistency property of a DVE without the actual execution of the DVE application, which is especially useful in the design stage of a DVE. Our work also clearly shows how the characteristic parameters of a DVE are interrelated in deciding the time-space inconsistency, so that we may fine-tune the DVE to make it as consistent as possible. To verify the effectiveness of the metric, a Ping-Pong game is developed. Experimental results show that the metric is effective in evaluating the time-space consistency property of the game.

122 citations
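A rough numerical illustration of such a metric: accumulate, over time, the gap between an entity's true trajectory and the trajectory a remote node renders for it after delay and dead reckoning. The trajectories, delay, and velocity error below are invented, and the paper's metric also folds in clock asynchrony and human factors.

DT = 0.01       # integration step (s)
DELAY = 0.12    # assumed network delay plus clock asynchrony (s)

def true_pos(t):
    return 2.0 * t                    # entity really moves at 2.0 m/s

def rendered_pos(t):
    # The remote node extrapolates from a state that is DELAY seconds
    # stale, using a slightly wrong velocity estimate (1.8 m/s).
    return 2.0 * (t - DELAY) + 1.8 * DELAY

# Integrate the position gap over 10 simulated seconds: a crude
# time-space inconsistency figure in metre-seconds.
inconsistency = sum(
    abs(true_pos(k * DT) - rendered_pos(k * DT)) * DT
    for k in range(1, 1001)
)
print(f"time-space inconsistency over 10 s: {inconsistency:.3f} m*s")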


Cited by

09 Mar 2012
TL;DR: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems; this entry introduces ANNs using familiar econometric terminology and gives an overview of the ANN modeling approach and its implementation methods.
Abstract: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANNs using familiar econometric terminology and provide an overview of the ANN modeling approach and its implementation methods.

2,069 citations
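In the regression-style reading the entry takes, a single-hidden-layer feedforward network is just a flexible nonlinear model fit by gradient descent. A minimal sketch follows; the layer sizes, learning rate, and target function are arbitrary choices for illustration, not taken from the entry.

import numpy as np

# Fit y = f(x) with a one-hidden-layer network by gradient descent on
# squared error. All sizes and constants are illustrative.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.1 * rng.normal(size=(200, 1))  # noisy target

H = 10                                   # hidden units
W1 = rng.normal(scale=0.5, size=(1, H))  # input -> hidden weights
b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1))  # hidden -> output weights
b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # linear output layer
    err = pred - y
    # Backpropagate squared-error gradients through both layers.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h ** 2)       # tanh derivative
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float((err ** 2).mean()))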

Journal Article
TL;DR: Presents a simulation study that opens new research questions on the impact of COVID-19 (SARS-CoV-2) on global supply chains (SCs), and offers an analysis for observing and predicting both short-term and long-term impacts of epidemic outbreaks on SCs, along with managerial insights.
Abstract: Epidemic outbreaks are a special case of supply chain (SC) risk, distinctively characterized by long-term disruption, disruption propagation (i.e., the ripple effect), and high uncertainty. We present the results of a simulation study that opens new research questions on the impact of COVID-19 (SARS-CoV-2) on global SCs. First, we articulate the specific features that frame epidemic outbreaks as a unique type of SC disruption risk. Second, we demonstrate how a simulation-based methodology can be used to examine and predict the impacts of epidemic outbreaks on SC performance, using the example of COVID-19 and the anyLogistix simulation and optimization software. We offer an analysis for observing and predicting both short-term and long-term impacts of epidemic outbreaks on SCs, along with managerial insights. A set of sensitivity experiments for different scenarios illustrates the model's behavior and its value for decision-makers. The major observation from the simulation experiments is that the timing of the closing and opening of facilities at different echelons, rather than an upstream disruption duration or the speed of epidemic propagation, might become the major factor determining an epidemic outbreak's impact on SC performance. Other important factors are lead time, the speed of epidemic propagation, and the upstream and downstream disruption durations in the SC. The outcomes of this research can be used by decision-makers to predict the operative and long-term impacts of epidemic outbreaks on SCs and to develop pandemic SC plans. Our approach can also help to identify the effective and ineffective elements of risk mitigation, preparedness, and recovery policies in case of epidemic outbreaks. The paper concludes by summarizing the most important insights and outlining a future research agenda.

1,282 citations
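The paper uses the commercial anyLogistix tool; the mechanism behind its headline observation can still be sketched with a toy discrete-time model in which a factory closure propagates to a market through a shipment pipeline. All numbers here are invented, and the model is far simpler than the paper's multi-echelon network; it only illustrates how closure timing, not just duration, interacts with inventory and in-transit stock.

def simulate(close_at, close_for, weeks=52, lead_time=4,
             supply=100, demand=100, initial_stock=300):
    # Shipments already in transit from the factory to the market.
    pipeline = [supply] * lead_time
    stock, lost = initial_stock, 0
    for week in range(weeks):
        stock += pipeline.pop(0)              # this week's arrival
        factory_open = not (close_at <= week < close_at + close_for)
        pipeline.append(supply if factory_open else 0)
        sold = min(stock, demand)
        lost += demand - sold                 # unmet demand is lost
        stock -= sold
    return lost

# The same 8-week closure costs very different amounts of lost sales
# depending on when it happens within the planning horizon.
for start in (4, 20, 44):
    print(f"closure starting week {start:2d}: lost sales = {simulate(start, 8)}")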

Book
01 Jan 2000
TL;DR: The article gives an overview of technologies to distribute the execution of simulation programs over multiple computer systems, with particular emphasis on synchronization (also called time management) algorithms as well as data distribution techniques.
Abstract: Originating from basic research conducted in the 1970s and 1980s, the parallel and distributed simulation field has matured over the last few decades. Today, operational systems have been fielded for applications such as military training, analysis of communication networks, and air traffic control systems, to mention a few. The article gives an overview of technologies for distributing the execution of simulation programs over multiple computer systems. Particular emphasis is placed on synchronization (also called time management) algorithms as well as data distribution techniques.

1,217 citations
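One family of time management algorithms the article surveys is conservative synchronization, in which a logical process executes an event only once no smaller-timestamped message can still arrive. A compact sketch, assuming a single uniform lookahead value (real protocols such as Chandy-Misra-Bryant use per-link lookahead and null messages):

import heapq
import itertools

class LogicalProcess:
    """Sketch of a conservative logical process: an event is safe to
    execute only if its timestamp does not exceed the minimum timestamp
    promised by any input channel plus the lookahead. The uniform
    lookahead is a simplifying assumption."""

    def __init__(self, input_channels, lookahead=1.0):
        self.events = []                      # min-heap of pending events
        self.counter = itertools.count()      # tie-breaker for the heap
        self.channel_clock = {c: 0.0 for c in input_channels}
        self.lookahead = lookahead

    def receive(self, channel, timestamp, event):
        # A message also advances the channel clock: the sender promises
        # never to send anything with an earlier timestamp again.
        self.channel_clock[channel] = timestamp
        heapq.heappush(self.events, (timestamp, next(self.counter), event))

    def safe_time(self):
        # Lower bound on the timestamp of any future incoming message.
        return min(self.channel_clock.values()) + self.lookahead

    def execute_safe_events(self):
        executed = []
        while self.events and self.events[0][0] <= self.safe_time():
            ts, _, event = heapq.heappop(self.events)
            executed.append((ts, event))      # process the event here
        return executed

lp = LogicalProcess(input_channels=["radar", "tower"])
lp.receive("radar", 2.0, "contact")
lp.receive("tower", 1.5, "clearance")
print(lp.execute_safe_events())   # both safe: min(2.0, 1.5) + 1.0 = 2.5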
