Author

W. K. Chan

Bio: W. K. Chan is an academic researcher from City University of Hong Kong. The author has contributed to research in topics: Test case & Test suite. The author has an h-index of 35 and has co-authored 169 publications receiving 3,988 citations. Previous affiliations of W. K. Chan include Hong Kong University of Science and Technology and the University of Hong Kong.


Papers
Proceedings ArticleDOI
03 Sep 2018
TL;DR: ContractFuzzer is presented, a novel fuzzer that tests Ethereum smart contracts for security vulnerabilities; it successfully detects the vulnerability of the DAO contract that led to a $60 million loss, as well as the Parity Wallet vulnerabilities that led to the loss of $30 million and the freezing of $150 million worth of Ether.
Abstract: Decentralized cryptocurrencies feature the use of blockchain to transfer values among peers on networks without a central agency. Smart contracts are programs running on top of the blockchain consensus protocol that enable people to make agreements while minimizing trust. Millions of smart contracts have been deployed in various decentralized applications. The security vulnerabilities within those smart contracts pose significant threats to their applications. Indeed, many critical security vulnerabilities within smart contracts on the Ethereum platform have caused huge financial losses to their users. In this work, we present ContractFuzzer, a novel fuzzer to test Ethereum smart contracts for security vulnerabilities. ContractFuzzer generates fuzzing inputs based on the ABI specifications of smart contracts, defines test oracles to detect security vulnerabilities, instruments the EVM to log smart contracts' runtime behaviors, and analyzes these logs to report security vulnerabilities. Our fuzzing of 6991 smart contracts has flagged more than 459 vulnerabilities with high precision. In particular, our fuzzing tool successfully detects the vulnerability of the DAO contract that led to a USD 60 million loss and the vulnerabilities of the Parity Wallet that have led to the loss of USD 30 million and the freezing of USD 150 million worth of Ether.

467 citations
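
To make the ABI-guided fuzzing idea concrete, below is a minimal Python sketch of generating random call arguments from a simplified smart-contract ABI entry. The helper names, the tiny type subset, and the example ABI entry are illustrative assumptions, not ContractFuzzer's actual code or interface.

```python
import random

# Sketch of ABI-guided fuzzing input generation in the spirit of ContractFuzzer:
# given a (simplified) ABI entry, synthesize random call arguments per Solidity
# parameter type. Names and helpers are illustrative, not ContractFuzzer's code.

def random_value(sol_type: str):
    """Generate a random value for a small subset of Solidity types."""
    if sol_type.startswith("uint"):
        bits = int(sol_type[4:] or 256)
        return random.randrange(0, 2 ** bits)
    if sol_type == "address":
        return "0x" + "".join(random.choice("0123456789abcdef") for _ in range(40))
    if sol_type == "bool":
        return random.choice([True, False])
    if sol_type == "string":
        return "".join(random.choice("abc") for _ in range(random.randint(0, 8)))
    raise NotImplementedError(f"unsupported type: {sol_type}")

def fuzz_call(abi_entry: dict) -> dict:
    """Build one fuzzed call (function name plus arguments) from an ABI entry."""
    args = [random_value(p["type"]) for p in abi_entry["inputs"]]
    return {"function": abi_entry["name"], "args": args}

# Example: a hypothetical ABI entry for a token transfer function.
abi_entry = {
    "name": "transfer",
    "inputs": [{"type": "address"}, {"type": "uint256"}],
}
print(fuzz_call(abi_entry))
```

In the actual tool, such generated calls would be executed against an instrumented EVM and the resulting logs checked by test oracles; the sketch only covers the input-generation step.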

Proceedings ArticleDOI
09 Dec 2008
TL;DR: This paper compares cloud computing with service computing and pervasive computing based on the classic model of computer architecture, and draws up a series of research questions in cloud computing for future exploration.
Abstract: Cloud computing is an emerging computing paradigm. It aims to share data, calculations, and services transparently among users of a massive grid. Although the industry has started selling cloud-computing products, research challenges in various areas, such as UI design, task decomposition, task distribution, and task coordination, are still unclear. Therefore, we study the methods to reason and model cloud computing as a step toward identifying fundamental research questions in this paradigm. In this paper, we compare cloud computing with service computing and pervasive computing. Both the industry and research community have actively examined these three computing paradigms. We draw a qualitative comparison among them based on the classic model of computer architecture. We finally evaluate the comparison results and draw up a series of research questions in cloud computing for future exploration.

266 citations

Proceedings ArticleDOI
16 Nov 2009
TL;DR: This paper proposes a new family of Coverage-based ART techniques and shows empirically that they are statistically superior to the RT-based technique in detecting faults and one of the ART prioritization techniques is consistently comparable to some of the best coverage-based prioritizing techniques and yet involves much less time cost.
Abstract: Regression testing assures changed programs against unintended amendments. Rearranging the execution order of test cases is a key idea to improve their effectiveness. Paradoxically, many test case prioritization techniques resolve tie cases using the random selection approach, and yet random ordering of test cases has been considered as ineffective. Existing unit testing research unveils that adaptive random testing (ART) is a promising candidate that may replace random testing (RT). In this paper, we not only propose a new family of coverage-based ART techniques, but also show empirically that they are statistically superior to the RT-based technique in detecting faults. Furthermore, one of the ART prioritization techniques is consistently comparable to some of the best coverage-based prioritization techniques (namely, the "additional" techniques) and yet involves much less time cost.

246 citations
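
A rough sketch of the coverage-based ART prioritization idea follows, under stated assumptions: distance is measured as Jaccard distance over statement-coverage sets, and the next test is chosen by a simple maximin ("farthest-first") rule. The paper evaluates a whole family of such techniques; this is only one illustrative variant, not its exact algorithm.

```python
# Coverage-based adaptive-random prioritization (illustrative variant):
# repeatedly pick the candidate test whose coverage set is farthest, by
# Jaccard distance, from the tests already prioritized.

def jaccard_distance(a: set, b: set) -> float:
    union = a | b
    return 1.0 if not union else 1.0 - len(a & b) / len(union)

def art_prioritize(coverage: dict) -> list:
    """coverage maps test id -> set of covered statements."""
    remaining = dict(coverage)
    # Start from the test with the largest coverage (one possible seeding rule).
    first = max(remaining, key=lambda t: len(remaining[t]))
    order = [first]
    del remaining[first]
    while remaining:
        # Maximin step: maximize the minimum distance to the selected tests.
        nxt = max(
            remaining,
            key=lambda t: min(jaccard_distance(remaining[t], coverage[s]) for s in order),
        )
        order.append(nxt)
        del remaining[nxt]
    return order

coverage = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {1, 2, 3, 4, 5}, "t4": {6}}
print(art_prioritize(coverage))  # e.g. ['t3', 't4', 't2', 't1']
```

The maximin rule is what gives the ordering its "even spread" character, in contrast to random tie-breaking in conventional prioritization.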

Proceedings ArticleDOI
16 May 2009
TL;DR: This paper refines the code coverage of test runs using control- and data-flow patterns prescribed by different fault types, so that this extra information can strengthen the correlations between program failures and the coverage of faulty program entities, making it easier for fault localization techniques to locate the faults.
Abstract: Recent techniques for fault localization leverage code coverage to address the high cost problem of debugging. These techniques exploit the correlations between program failures and the coverage of program entities as the clue in locating faults. Experimental evidence shows that the effectiveness of these techniques can be affected adversely by coincidental correctness, which occurs when a fault is executed but no failure is detected. In this paper, we propose an approach to address this problem. We refine code coverage of test runs using control- and data-flow patterns prescribed by different fault types. We conjecture that this extra information, which we call context patterns, can strengthen the correlations between program failures and the coverage of faulty program entities, making it easier for fault localization techniques to locate the faults. To evaluate the proposed approach, we have conducted a mutation analysis on three real world programs and cross-validated the results with real faults. The experimental results consistently show that coverage refinement is effective in easing the coincidental correctness problem in fault localization techniques.

142 citations
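
For context, the correlation that coincidental correctness weakens is the one exploited by spectrum-based fault localization metrics such as Ochiai. The sketch below computes Ochiai suspiciousness from a coverage matrix; it illustrates the baseline that coverage refinement aims to improve, not the paper's refinement technique itself, and the toy data is an assumption.

```python
import math

# Spectrum-based fault localization with the Ochiai metric: a statement covered
# by many passing runs (including coincidentally correct ones) receives a lower
# suspiciousness score, which is why coincidental correctness hurts accuracy.

def ochiai(coverage: dict, failing: set) -> dict:
    """coverage: test id -> set of executed statements; failing: failing test ids."""
    total_failed = len(failing)
    stmts = set().union(*coverage.values())
    scores = {}
    for s in stmts:
        ef = sum(1 for t in coverage if s in coverage[t] and t in failing)      # covered & failed
        ep = sum(1 for t in coverage if s in coverage[t] and t not in failing)  # covered & passed
        denom = math.sqrt(total_failed * (ef + ep))
        scores[s] = ef / denom if denom else 0.0
    return scores

coverage = {"t1": {1, 2, 3}, "t2": {2, 3}, "t3": {1, 3}}
print(ochiai(coverage, failing={"t1"}))
```

Refining the coverage with fault-type-specific control- and data-flow patterns effectively changes what counts as "covered," which can raise the score of the faulty entity relative to coincidentally correct executions.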

Journal ArticleDOI
TL;DR: In this paper, the authors propose a metamorphic approach for online services testing, where the off-line testing determines a set of successful test cases to construct their corresponding follow-up test cases for the online testing.
Abstract: Testing the correctness of services assures the functional quality of service-oriented applications. A service-oriented application may bind dynamically to its supportive services. For the same service interface, the supportive services may behave differently. A service may also need to realize a business strategy, like best pricing, relative to the behavior of its counterparts and the dynamic market situations. Many existing works ignore these issues when addressing the problem of identifying failures from test results. This article proposes a metamorphic approach for online services testing. The off-line testing determines a set of successful test cases and constructs their corresponding follow-up test cases for the online testing. These test cases are executed by metamorphic services that encapsulate the services under test as well as the implementations of metamorphic relations. Thus, any failure revealed by the metamorphic testing approach will be due to failures in the online testing mode. An experiment is included.

114 citations
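
As a rough illustration of the metamorphic idea, the sketch below uses a hypothetical best-price service: the metamorphic relation asserts that enlarging the supplier pool never raises the quoted best price, and a successful source test case is turned into a follow-up test case. All names here (quote_best_price, the supplier data) are assumptions for illustration, not the article's actual services or relations.

```python
# Metamorphic testing sketch for an online service. A real metamorphic service
# would wrap the dynamically bound service under test; here a stand-in function
# plays that role so the relation check is runnable.

def quote_best_price(item: str, suppliers: list) -> float:
    # Stand-in for the service under test: return the lowest quoted price.
    prices = {"s1": {"widget": 9.0}, "s2": {"widget": 7.5}, "s3": {"widget": 8.0}}
    return min(prices[s][item] for s in suppliers)

def check_metamorphic_relation(item: str, suppliers: list, extra: str) -> bool:
    """Relation: adding a supplier to the pool must not raise the best price."""
    source_output = quote_best_price(item, suppliers)            # source test case
    followup_output = quote_best_price(item, suppliers + [extra])  # follow-up test case
    return followup_output <= source_output

# A successful source test case ("widget" over s1, s2) yields a follow-up test
# case with supplier s3 added; a violated relation signals an online failure.
print(check_metamorphic_relation("widget", ["s1", "s2"], "s3"))  # expected: True
```

Because the relation compares two outputs of the deployed service, no explicit expected value is needed at run time, which is what makes the approach suitable for online testing.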


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

01 Jan 1997
TL;DR: In this paper, the authors examine the implications of electronic shopping for consumers, retailers, and manufacturers, assuming that near-term technological developments will offer consumers unparalleled opportunities to locate and compare product offerings.
Abstract: The authors examine the implications of electronic shopping for consumers, retailers, and manufacturers. They assume that near-term technological developments will offer consumers unparalleled opportunities to locate and compare product offerings. They examine these advantages as a function of typical consumer goals and the types of products and services being sought and offer conclusions regarding consumer incentives and disincentives to purchase through interactive home shopping vis-à-vis traditional retail formats. The authors discuss implications for industry structure as they pertain to competition among retailers, competition among manufacturers, and retailer-manufacturer relationships.

2,077 citations

Journal ArticleDOI
TL;DR: A general probable 5G cellular network architecture is proposed, showing that D2D, small cell access points, the network cloud, and the Internet of Things can be part of the 5G cellular network architecture.
Abstract: In the near future, i.e., beyond 4G, some of the prime objectives or demands that need to be addressed are increased capacity, improved data rate, decreased latency, and better quality of service. To meet these demands, drastic improvements need to be made in cellular network architecture. This paper presents the results of a detailed survey on the fifth generation (5G) cellular network architecture and some of the key emerging technologies that are helpful in improving the architecture and meeting the demands of users. In this detailed survey, the prime focus is on the 5G cellular network architecture, massive multiple input multiple output technology, and device-to-device communication (D2D). Along with this, some of the emerging technologies that are addressed in this paper include interference management, spectrum sharing with cognitive radio, ultra-dense networks, multi-radio access technology association, full duplex radios, millimeter wave solutions for 5G cellular networks, and cloud technologies for 5G radio access networks and software defined networks. In this paper, a general probable 5G cellular network architecture is proposed, which shows that D2D, small cell access points, network cloud, and the Internet of Things can be a part of 5G cellular network architecture. A detailed survey is included regarding current research projects being conducted in different countries by research groups and institutions that are working on 5G technologies.

1,899 citations

Journal ArticleDOI
TL;DR: This paper provides an extensive survey of mobile cloud computing research while highlighting the specific concerns in mobile cloud computing; it presents a taxonomy based on the key issues in this area and discusses the different approaches taken to tackle these issues.

1,671 citations