Author

Aad van Moorsel

Other affiliations: Naresuan University, Imperial College London, Hewlett-Packard
Bio: Aad van Moorsel is an academic researcher from Newcastle University. He has contributed to research on topics including web services and smart contracts, has an h-index of 27, and has co-authored 138 publications receiving 2,605 citations. Previous affiliations of Aad van Moorsel include Naresuan University and Imperial College London.


Papers
Book Chapter
Akhil Sahai, Vijay Machiraju, Mehmet Sayal, Aad van Moorsel, Fabio Casati
21 Oct 2002
TL;DR: An automated and distributed SLA monitoring engine is proposed that collects the right measurements, models the data, and evaluates the SLA at specified times or when specified events happen.
Abstract: SLA monitoring is difficult to automate, as it needs a precise and unambiguous specification and a customizable engine that collects the right measurements, models the data, and evaluates the SLA at certain times or when certain events happen. Moreover, most SLAs neglect client-side measurement or restrict SLAs to measurements taken only on the server side. In a cross-enterprise scenario such as web services, it is important to obtain measurements at multiple sites and to guarantee SLAs on them. In this article we propose an automated and distributed SLA monitoring engine.
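The evaluation step of such an engine can be sketched minimally: measurements are collected, and the SLA is re-evaluated whenever a new measurement event arrives. The class, metric name, and threshold rule below are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of event-driven SLA evaluation, assuming an SLA of
# the form "average response time over a recent window must stay below
# a threshold". Names and the windowing rule are illustrative only.

class SlaMonitor:
    def __init__(self, metric, threshold, window=5):
        self.metric = metric          # e.g. "response_time_ms"
        self.threshold = threshold    # maximum allowed average
        self.window = window          # number of recent samples to average
        self.samples = []

    def record(self, value):
        """Collect a measurement; evaluate the SLA on every new event."""
        self.samples.append(value)
        return self.evaluate()

    def evaluate(self):
        """Return True if the SLA currently holds over the recent window."""
        recent = self.samples[-self.window:]
        return sum(recent) / len(recent) <= self.threshold

monitor = SlaMonitor("response_time_ms", threshold=200)
for v in [150, 180, 190]:
    monitor.record(v)
print(monitor.evaluate())  # True: average ~173.3 ms is under 200 ms
```

A distributed deployment would run one such monitor per measurement site (client side and server side), with the SLA evaluated over measurements gathered from all of them.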

338 citations

Proceedings Article
26 Aug 2017
TL;DR: A systematic mapping study that collects all research relevant to smart contracts from a technical perspective and identifies four key issues: codifying, security, privacy, and performance.
Abstract: An appealing feature of blockchain technology is smart contracts. A smart contract is executable code that runs on top of the blockchain to facilitate, execute and enforce an agreement between untrusted parties without the involvement of a trusted third party. In this paper, we conduct a systematic mapping study to collect all research that is relevant to smart contracts from a technical perspective. The aim of doing so is to identify current research topics and open challenges for future studies in smart contract research. We extract 24 papers from different scientific databases. The results show that about two thirds of the papers focus on identifying and tackling smart contract issues. Four key issues are identified, namely, codifying, security, privacy and performance issues. The rest of the papers focus on smart contract applications or other smart contract related topics. Research gaps that need to be addressed in future studies are provided.

212 citations


Proceedings Article
30 Oct 2017
TL;DR: In this paper, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat.
Abstract: Cloud computing has become an irreversible trend. Together comes the pressing need for verifiability, to assure the client the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all have a high overhead, thus if being deployed in the clouds, would render cloud computing more expensive than the on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart contract based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by crosschecking the results from the two clouds. We provide a formal analysis of the games induced by the contracts, and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, and a small transaction fee to use the smart contracts. We also conducted a feasibility study that involves implementing the contracts in Solidity and running them on the official Ethereum network.
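The crosscheck at the heart of the scheme can be sketched as follows. This is an illustrative Python sketch of the verification logic only, not the authors' Solidity contracts; the incentive mechanism that deters collusion lives in the smart contracts and is not modeled here.

```python
# Illustrative sketch of the two-cloud crosscheck: the client pays two
# clouds to compute the same task in the clear and accepts the result
# only if the two answers agree. Under the paper's game-theoretic
# assumptions, rational non-colluding clouds will both answer honestly.

def verify_by_crosscheck(task, cloud_a, cloud_b):
    """Run the task on two independent clouds and crosscheck the results."""
    result_a = cloud_a(task)
    result_b = cloud_b(task)
    if result_a == result_b:
        return result_a  # agreement: accept the (assumed correct) result
    raise RuntimeError("Results disagree: at least one cloud cheated")

honest = lambda task: sum(task)   # an honest cloud computes correctly
cheater = lambda task: 0          # a cheating cloud returns garbage

print(verify_by_crosscheck([1, 2, 3], honest, honest))  # 6
```

A disagreement would trigger the dispute path of the contracts, where the betraying cloud can profit by exposing the other, which is what removes the incentive to collude in the first place.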

109 citations

Proceedings Article
01 Nov 2018
TL;DR: A systematic mapping study of all peer-reviewed, technology-oriented research on smart contracts, aimed at surveying the scientific literature and identifying academic research trends and uptake.
Abstract: Blockchain-based smart contracts are computer programs that encode an agreement between non-trusting participants. Smart contracts are executed on a blockchain system if specified conditions are met, without the need of a trusted third party. Blockchains and smart contracts have received rapidly growing attention in recent years, also in academic circles. We carry out a systematic mapping study of all peer-reviewed technology-oriented research in smart contracts. Our interest is twofold, namely to provide a survey of the scientific literature and to identify academic research trends and uptake. We only focus on peer-reviewed scientific publications, in order to identify how academic researchers have taken up smart contract technologies and established scientific outputs. We obtained all research papers from the main scientific databases, and using the systematic mapping method arrived at 188 relevant papers. We classified these papers into six categories, namely, security, privacy, software engineering, application, performance & scalability and other smart contract related topics. We found that the majority of the papers fall into the applications (about 64%) and software engineering (21%) categories. Compared to our 2017 survey [1], we observe that the number of relevant articles has increased about eightfold and shifted considerably towards applications of smart contracts.

105 citations


Cited by
Journal Article
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal Article
TL;DR: Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking that are probably not learned but are part of the brain's wiring.
Abstract: In 1974 an article appeared in Science magazine with the dry-sounding title “Judgment Under Uncertainty: Heuristics and Biases” by a pair of psychologists who were not well known outside their discipline of decision theory. In it Amos Tversky and Daniel Kahneman introduced the world to Prospect Theory, which mapped out how humans actually behave when faced with decisions about gains and losses, in contrast to how economists assumed that people behave. Prospect Theory turned Economics on its head by demonstrating through a series of ingenious experiments that people are much more concerned with losses than they are with gains, and that framing a choice from one perspective or the other will result in decisions that are exactly the opposite of each other, even if the outcomes are monetarily the same. Prospect Theory led cognitive psychology in a new direction that began to uncover other human biases in thinking that are probably not learned but are part of our brain’s wiring.

4,351 citations

Journal Article
TL;DR: Thaler and Sunstein offer a general explanation of and advocacy for libertarian paternalism, a term they coined in earlier publications: an approach by which leaders, systems, organizations, and governments can nudge people to do the things the nudgers want done for the betterment of the nudgees, or of society.
Abstract: NUDGE: IMPROVING DECISIONS ABOUT HEALTH, WEALTH, AND HAPPINESS by Richard H. Thaler and Cass R. Sunstein. Penguin Books, 2009, 312 pp, ISBN 978-0-14-311526-7. This book is best described formally as a general explanation of and advocacy for libertarian paternalism, a term coined by the authors in earlier publications. Informally, it is about how leaders, systems, organizations, and governments can nudge people to do the things the nudgers want and need done for the betterment of the nudgees, or of society. It is paternalism in the sense that "it is legitimate for choice architects to try to influence people's behavior in order to make their lives longer, healthier, and better" (p. 5). It is libertarian in that "people should be free to do what they like - and to opt out of undesirable arrangements if they want to do so" (p. 5). The built-in possibility of opting out or making a different choice preserves freedom of choice even though people's behavior has been influenced by the nature of the presentation of the information or by the structure of the decision-making system. I had never heard of libertarian paternalism before reading this book, and I now find it fascinating. Written for a general audience, this book contains mostly social and behavioral science theory and models, but there is considerable discussion of structure and process that has roots in mathematical and quantitative modeling. One of the main applications of this social system is economic choice in investing, selecting and purchasing products and services, systems of taxes, banking (mortgages, borrowing, savings), and retirement systems. Other quantitative social choice systems discussed include environmental effects, health care plans, gambling, and organ donations. Softer issues that are also subject to a nudge-based approach are marriage, education, eating, drinking, smoking, influence, spread of information, and politics. There is something in this book for everyone.
The basis for this libertarian paternalism concept is in the social theory called "science of choice", the study of the design and implementation of influence systems on various kinds of people. The terms Econs and Humans are used to refer to people with either considerable or little rational decision-making talent, respectively. The various libertarian paternalism concepts and systems presented are tested and compared in light of these two types of people. Two foundational issues that this book has in common with another book, Network of Echoes: Imitation, Innovation and Invisible Leaders, that was also reviewed for this issue of the Journal are that 1) there are two modes of thinking (or components of the brain) - an automatic (intuitive) process and a reflective (rational) process and 2) the need for conformity and the desire for imitation are powerful forces in human behavior. …

3,435 citations

01 Nov 1981
TL;DR: Local differences of intensity, computed at each pixel as a discrete approximation to the derivative, are used to detect intensity edges in images.
Abstract: Most of the signal processing that we will study in this course involves local operations on a signal, namely transforming the signal by applying linear combinations of values in the neighborhood of each sample point. You are familiar with such operations from calculus, namely taking derivatives, and you are also familiar with this from optics, namely blurring a signal. We will be looking at sampled signals only. Let's start with a few basic examples. Local difference: suppose we have a 1D image and we take the local difference of intensities, DI(x) = (1/2)(I(x + 1) − I(x − 1)), which gives a discrete approximation to a partial derivative. (We compute this for each x in the image.) What is the effect of such a transformation? One key idea is that such a derivative would be useful for marking positions where the intensity changes. Such a change is called an edge. It is important to detect edges in images because they often mark locations at which object properties change. These can include changes in illumination along a surface due to a shadow boundary, or a material (pigment) change, or a change in depth as when one object ends and another begins. The computational problem of finding intensity edges in images is called edge detection. We could look for positions at which DI(x) has a large negative or positive value. Large positive values indicate an edge that goes from low to high intensity, and large negative values indicate an edge that goes from high to low intensity. Example: suppose the image consists of a single (slightly sloped) edge:
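The local-difference edge detector described above is short enough to sketch directly. The example image and the threshold value are illustrative assumptions.

```python
# A minimal sketch of 1D edge detection via the local difference
# DI(x) = (I(x+1) - I(x-1)) / 2, marking positions where |DI| is large.
# The sample image and threshold below are illustrative choices.

def local_difference(image):
    """Discrete approximation to the derivative of a 1D image."""
    return [(image[x + 1] - image[x - 1]) / 2 for x in range(1, len(image) - 1)]

def detect_edges(image, threshold=1.0):
    """Return positions (in the original image) where |DI(x)| exceeds the threshold."""
    di = local_difference(image)
    return [x + 1 for x, d in enumerate(di) if abs(d) > threshold]

# A single low-to-high intensity edge: DI peaks at the transition.
image = [0, 0, 0, 0, 5, 10, 10, 10, 10]
print(local_difference(image))  # [0.0, 0.0, 2.5, 5.0, 2.5, 0.0, 0.0]
print(detect_edges(image))      # [3, 4, 5]
```

The positive values of DI around positions 3-5 mark the low-to-high edge; a high-to-low edge would produce large negative values instead.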

1,829 citations

Journal Article
TL;DR: A taxonomy of research in self-adaptive software is presented, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area.
Abstract: Software systems dealing with distributed applications in changing environments normally require human supervision to continue operation in all conditions. These (re-)configuring, troubleshooting, and in general maintenance tasks lead to costly and time-consuming procedures during the operating phase. These problems are primarily due to the open-loop structure often followed in software development. Therefore, there is a high demand for management complexity reduction, management automation, robustness, and achieving all of the desired quality requirements within a reasonable cost and time range during operation. Self-adaptive software is a response to these demands; it is a closed-loop system with a feedback loop aiming to adjust itself to changes during its operation. These changes may stem from the software system's self (internal causes, e.g., failure) or context (external events, e.g., increasing requests from users). Such a system is required to monitor itself and its context, detect significant changes, decide how to react, and act to execute such decisions. These processes depend on adaptation properties (called self-* properties), domain characteristics (context information or models), and preferences of stakeholders. Noting these requirements, it is widely believed that new models and frameworks are needed to design self-adaptive software. This survey article presents a taxonomy, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area. Moreover, as adaptive systems are encountered in many disciplines, it is imperative to learn from the theories and models developed in these other areas. This survey article presents a landscape of research in self-adaptive software by highlighting relevant disciplines and some prominent research projects. This landscape helps to identify the underlying research gaps and elaborates on the corresponding challenges.
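The monitor-detect-decide-act loop the abstract describes can be sketched as a minimal closed feedback loop. The request-rate trigger and the doubling rule below are illustrative assumptions, not taken from the survey.

```python
# A minimal sketch of the closed-loop structure of self-adaptive
# software: monitor the context, detect a significant change, decide
# how to react, and act to execute the decision. The load signal and
# scaling rule are illustrative assumptions only.

class SelfAdaptiveService:
    def __init__(self, capacity=100):
        self.capacity = capacity

    def monitor(self, context):
        """Monitor: read an external signal (e.g. requests per second)."""
        return context["requests_per_second"]

    def detect(self, load):
        """Detect: is the change significant relative to current capacity?"""
        return load > self.capacity

    def decide(self, load):
        """Decide: choose a new capacity that covers the observed load."""
        return load * 2

    def act(self, new_capacity):
        """Act: execute the decision by reconfiguring the system."""
        self.capacity = new_capacity

    def adapt(self, context):
        """One pass through the feedback loop; returns current capacity."""
        load = self.monitor(context)
        if self.detect(load):
            self.act(self.decide(load))
        return self.capacity

service = SelfAdaptiveService(capacity=100)
print(service.adapt({"requests_per_second": 80}))   # 100: no adaptation needed
print(service.adapt({"requests_per_second": 250}))  # 500: scaled up
```

Each method corresponds to one stage of the loop; a real self-adaptive system would also consult adaptation properties, context models, and stakeholder preferences when deciding, as the survey discusses.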

1,349 citations