Preprint submitted to Internet of Things, 18 October 2019
Transformative Effects of IoT, Blockchain and Artificial
Intelligence on Cloud Computing: Evolution, Vision, Trends and
Open Challenges
Sukhpal Singh Gill^1*, Shreshth Tuli^2, Minxian Xu^3, Inderpreet Singh^4,17, Karan Vijay Singh^5,18, Dominic Lindsay^6, Shikhar Tuli^7, Daria Smirnova^6, Manmeet Singh^8,9, Udit Jain^2, Haris Pervaiz^6, Bhanu Sehgal^10, Sukhwinder Singh Kaila^11, Sanjay Misra^12,13, Mohammad Sadegh Aslanpour^14, Harshit Mehta^15,19, Vlado Stankovski^16 and Peter Garraghan^6
1 School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Rd, Bethnal Green, London E1 4NS, UK
2 Department of Computer Science and Engineering, Indian Institute of Technology (IIT), Delhi, India
3 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
4 Department of Computer Science, Simon Fraser University, Canada
5 Department of Computer Science, University of Waterloo, Canada
6 School of Computing and Communications, Lancaster University, UK
7 Department of Electrical Engineering, Indian Institute of Technology (IIT), Delhi, India
8 Centre for Climate Change Research, Indian Institute of Tropical Meteorology (IITM), Pune, India
9 Interdisciplinary Programme (IDP) in Climate Studies, Indian Institute of Technology (IIT), Bombay, India
10 Accenture, Melbourne, Australia
11 Cvent India Inc, Gurugram, India
12 Department of Electrical and Information Engineering, Covenant University, Ota, Nigeria
13 Department of Computer Engineering, Atılım University, Ankara, Turkey
14 Young Researchers and Elite Club, Jahrom Branch, Islamic Azad University, Jahrom, Iran
15 Walker Department of Mechanical Engineering, Cockrell School of Engineering, The University of Texas at Austin, Texas, USA
16 Faculty of Civil and Geodetic Engineering, University of Ljubljana, Ljubljana, Slovenia
17 1Qbit, Vancouver, Canada
18 Amazon, Toronto, Canada
19 Dell Technologies, Austin, TX, USA
s.s.gill@qmul.ac.uk, shreshth.cs116@cse.iitd.ac.in, mx.xu@siat.ac.cn, inderpreet_singh@sfu.ca, kv3singh@uwaterloo.ca, d.lindsay4@lancaster.ac.uk, shikhartuli98@gmail.com,
d.smirnova@lancaster.ac.uk, manmeet.cat@tropmet.res.in, udit.cs116@cse.iitd.ac.in, h.b.pervaiz@lancaster.ac.uk, bhanu.sehgal@accenture.com, skaila@cvent.com,
sanjay.misra@covenantuniversity.edu.ng, aslanpour.sadegh@jia.ac.ir, harshit.mehta@utexas.edu, vlado.stankovski@fgg.uni-lj.si, p.garraghan@lancaster.ac.uk
*Corresponding Author
Abstract
Cloud computing plays a critical role in modern society and enables a range of applications from infrastructure to social media. Such systems must cope with varying load and evolving usage, reflecting society's interaction with and dependency on automated computing systems, whilst satisfying Quality of Service (QoS) guarantees. Enabling these systems is a cohort of conceptual technologies, synthesised to meet the demands of evolving computing applications. To understand the current and future challenges of such systems, there is a need to identify the key technologies enabling future applications. In this study, we aim to explore how three emerging paradigms (Blockchain, IoT and Artificial Intelligence) will influence future cloud computing systems. Further, we identify several technologies driving these paradigms and invite international experts to discuss the current status and future directions of cloud computing. Finally, we propose a conceptual model for cloud futurology to explore the influence of emerging paradigms and technologies on the evolution of cloud computing.
Keywords: Cloud Computing, Quality of Service, Cloud Applications, Cloud Paradigms and Technologies, IoT,
Blockchain, Artificial Intelligence
1. Introduction
The last two decades have seen active research into the definition and evolution of cloud computing. Driven by innovation in networking and distributed architectures, cloud computing is a manifestation of distributed systems research stretching back to the initial conception of the client-server model in 1958 [2]. Due to its rapid growth, cloud computing has been adopted as an important utility across all aspects of society, from academia and governmental institutions to industry. Characteristics of cloud computing, such as dynamic, metered access to a shared pool of computing resources [1], have enabled the realisation of new technologies and paradigms to fulfil the demands of emerging applications, including scientific, healthcare, agriculture, smart city, and traffic management applications [3].
Presently, well-known cloud providers such as Facebook, Google and Amazon utilize large-scale Cloud Data Centers (CDCs) to provision heterogeneous Quality of Service (QoS) requirements. Furthermore, cloud computing platforms are able to provide a unified interface over the heterogeneous resources found in Internet of Things (IoT)-based applications, which improves the reliability of cloud services [4]. A Service Level Agreement (SLA) must be signed between cloud user and provider to deliver the required service within a specified time and budget based on QoS parameters.
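As a minimal illustration of the SLA idea above, the sketch below checks observed service metrics against agreed QoS terms. The field names and thresholds are illustrative assumptions for this sketch, not a standard SLA schema.

```python
# Minimal sketch of an SLA check between cloud user and provider.
# Field names and thresholds are illustrative, not a standard schema.
from dataclasses import dataclass

@dataclass
class SLA:
    max_response_ms: float    # agreed deadline per request
    max_cost_per_hour: float  # agreed budget

def sla_violated(sla: SLA, observed_ms: float, cost_per_hour: float) -> bool:
    """Return True if either agreed QoS term is breached."""
    return observed_ms > sla.max_response_ms or cost_per_hour > sla.max_cost_per_hour

terms = SLA(max_response_ms=200.0, max_cost_per_hour=1.5)
print(sla_violated(terms, observed_ms=180.0, cost_per_hour=1.2))  # False: within terms
print(sla_violated(terms, observed_ms=250.0, cost_per_hour=1.2))  # True: deadline missed
```

In practice an SLA covers many more QoS parameters (availability, security, penalties), but the contract-checking pattern is the same.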
Substantial growth in end-user demand and data volume has resulted in the creation of more CDCs at ever-growing scale, which in turn increases system energy consumption, CO2 emissions, and waste heat that requires cooling infrastructure for removal. To address the problem of energy consumption, new resource scheduling policies are needed that reduce energy consumption without impacting QoS parameters such as deadline, reliability, availability, cost, security and privacy [5]. To increase the reliability of cloud computing systems, new fault-tolerance mechanisms are needed that can maintain cloud service quality during hardware or software faults. Moreover, security can be improved by using Blockchain, a distributed-ledger technology within the cloud, to protect communication from attackers, which can further increase the reliability of computing systems [46].
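The tamper-evidence property that makes a blockchain-style distributed ledger useful for protecting communication can be sketched in a few lines: each block stores the hash of its predecessor, so altering any earlier record invalidates every later link. This is a minimal illustration, not a production ledger (no consensus, signatures or distribution).

```python
# Hash-chained ledger sketch: each block records the hash of its
# predecessor, so tampering with any block breaks validation.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical JSON so the hash is stable across key orderings
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, payload: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def chain_valid(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, "vm-42 provisioned")
append_block(ledger, "vm-42 released")
print(chain_valid(ledger))       # True
ledger[0]["payload"] = "forged"  # an attacker rewrites history...
print(chain_valid(ledger))       # False: tampering is detected
```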
The diversity of large distributed applications means there is a requirement for effective big data analytics mechanisms to process the required data efficiently using innovative data processing techniques [61] [66]. Further, new programming models such as serverless computing enable new patterns of resource consumption, autonomically driven by application utilization. Lightweight virtualization provided by container technologies can improve utilization in clouds and enable low-latency provisioning of application environments [150]. Further, the emergence of fog computing reduces the latency and response time of processing for IoT devices, but research challenges within this domain remain unsolved. New resource provisioning and scheduling policies are required for fog and cloud computing that use Artificial Intelligence (AI)-based deep learning techniques to predict resource requirements in advance for geographically disparate resources [47] [140]. Cloud computing is also emerging as a new tool for solving the complex challenges faced by Earth Sciences researchers, in both computation and analysis [139]. It offers promise for a community that presently relies on dedicated supercomputers, which come at huge cost and can regularly go through slack periods or inefficiencies. The introduction of cloud computing as a supplement to, or replacement for, dedicated supercomputers is an interesting hypothesis. Due to continuously growing research in the field of cloud computing, various new research areas have emerged, such as quantum computing, software-defined networking, software engineering, Bitcoin, and 5G networks and beyond.
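The AI-based predictors cited above are typically deep learning models and beyond a short sketch; as a simplified stand-in, the example below forecasts future resource demand from a past utilisation trace using exponential smoothing, which captures the same idea of provisioning resources in advance of need. The trace values are illustrative.

```python
# Exponential smoothing: a lightweight stand-in for the deep learning
# predictors in the literature, forecasting demand from past utilisation.
def forecast(history, alpha=0.5):
    """One-step-ahead resource-demand forecast from a utilisation trace."""
    estimate = history[0]
    for observed in history[1:]:
        # Blend each new observation into the running estimate
        estimate = alpha * observed + (1 - alpha) * estimate
    return estimate

trace = [40, 45, 50, 60, 70]   # CPU demand (%) over recent intervals
print(forecast(trace))         # 61.5625: provision ahead of the rising trend
```

A scheduler would use such a forecast to scale resources up before demand arrives, rather than reacting after QoS has already degraded.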
1.1 Our Contributions
Earlier methodical surveys and systematic reviews have identified previous innovations; however, innovations in the field of cloud computing require a revisit of the paradigms (IoT, AI, and Blockchain) driving cloud computing. There is a requirement for a systematic review to evaluate, update, and integrate the existing research in this field with respect to emerging paradigms and technologies such as IoT, AI and Blockchain. This systematic review presents an updated study that evaluates existing research and uncovers research challenges, along with the evolution and history of computing systems, as the amalgamation of these technologies has a high impact on cloud computing and related domains. Finally, we offer critical insights and point out possible future work. We propose a conceptual model which integrates and enables computation using a plethora of technological advancements and provides an enhanced and holistic setup for next-generation computing environments.
The motivation behind this systematic review is to study the history of computing and identify how the emergence of the triumvirate "IoT + AI + Blockchain" will transform cloud computing to solve the complex problems of next-generation computing. Further, international experts from different cloud computing research areas come together to discuss existing research and propose future research directions for academicians, practitioners and researchers working in the field of cloud computing. This is the first systematic review that explores the evolution of computing paradigms and technologies and the influence of the triumvirate (Blockchain, IoT and Artificial Intelligence) on the evolution of cloud computing.
The rest of the article is structured as follows: Section 2 presents the background of cloud computing paradigms
and techniques and their evolution. Section 3 presents the drivers (IoT, AI and Blockchain) of cloud computing.
Section 4 presents the impact of new paradigms and technologies on cloud computing along with their future
research opportunities and open challenges. Section 5 presents the insights of the triumvirate into cloud computing
evolution. Section 6 presents a conceptual model for cloud futurology. Finally, Section 7 summarizes the research
article.
2. Background: History of Decades
Computing systems have evolved since 1958 to use hardware resources more efficiently. Over these decades of computing, various computing paradigms and technologies have been developed and invented, contributing extensively to current research in the field of computing.
2.1 Evolution of Computing Paradigms and Technologies: A Journey
Initially, one system could execute only one specific task at a time, and multiple systems needed to run in parallel to execute multiple tasks concurrently [1]. A secure communication network is required to exchange data among different computing systems. Figure 1 shows the evolution of computing technology, along with objectives and focus of study, from 1958 onwards.
• Client Server: A distributed application architecture, developed in 1960, that divides workloads or tasks between resource providers (servers) and service requesters (clients) [1]. A computer network is used to communicate between servers and clients, and the server shares resources with clients to execute their workloads in a load-balanced manner [2]. Email and the World Wide Web (WWW) are two important examples of the client-server model. In this model, clients cannot communicate with each other directly [7].
Figure 1: Evolution of Computing Paradigms and Technologies
• Supercomputer: A system with high-performance computing capability for executing computationally intensive tasks in scientific fields such as molecular modelling, climate research and quantum mechanics [3]. Energy usage and heat management in supercomputers have remained a main research challenge throughout their evolution since 1960 [7]. Notable fictional depictions of supercomputers include Multivac, HAL 9000 and the machine in The Machine Stops [2].
• Proprietary Mainframe: A large, high-speed computer, which can additionally support various devices and workstations, used to process large amounts of data such as transaction processing, consumer statistics
and census data [4]. Mainframe computers provide reliability and security and achieve high throughput through virtualization [7]. In 2017, IBM launched the latest version of its mainframe, the IBM z14 [2]. The performance of mainframe computers is excellent, but they are quite expensive.
• Cluster Computing: This technology uses a fast local area network to connect available computing nodes, with clustering middleware used to coordinate the different nodes [5]. The main objective of cluster computing is to execute a single task using different interconnected computing nodes to improve the performance of the computing environment [1].
• Open Massively Parallel Processing (MPP) and Symmetric Multi-Processing (SMP): There are two main types of parallel processing environments: massively parallel processing (MPP) and symmetric multiprocessing (SMP) systems [1] [2] [3]. In an SMP environment, hardware resources (disk space, memory) are shared by multiple processors running a single operating system. This sharing of resources affects the computing speed for completing a particular job [7]. In an MPP environment, only the file system is shared; there is no sharing of resources during execution of a job. Scalability can be improved by adding computers and their associated disk and memory resources.
• Grid Computing: This technology achieves a common objective using distributed computing resources and executes non-interactive workloads that contain huge numbers of files [4] [5]. A single grid is dedicated to the execution of a specific application [7]. Grid computing provides services such as resource allocation and management, secure infrastructure, and monitoring and discovery.
• Commodity Clusters: Also called commodity cluster computing, this offers low-cost computation of user workloads by using huge numbers of computing resources concurrently [2] [4]. Different vendors use open standards to manufacture commodity computers, reducing the variation among vendors' products [7]. Presently, off-the-shelf commodity computers are available to fulfil business computing requirements quickly.
• Peer to Peer: A distributed architecture that divides a workload or task among different peers or computing nodes, where peers communicate with each other directly at the application layer [4] [7]. In a peer-to-peer architecture, peers can access different resources such as processing power, disk storage or network bandwidth without requiring a central coordinator. A TCP/IP network is used to exchange data among peers. The main applications of peer-to-peer architecture are multimedia, file-sharing networks and content delivery.
• Web Services: This technology enables communication among different electronic devices through the World Wide Web using machine-readable file formats such as JavaScript Object Notation (JSON) and Extensible Markup Language (XML) [1] [2]. Basically, a web service provides the user interface through which the end user interacts with a database server [7].
• Virtualised Clusters: An implementation of a real computing system performing similar functions in a virtualized environment [1]. A virtualised cluster enables the sharing of resources among different Virtual Machines (VMs) to execute workloads or tasks [2]. A VM hypervisor provides software-layer virtualisation that executes on top of an operating system or on bare metal [7]. VM-based computing systems save resource cost and execute large numbers of workloads using the same resources.
• HPC System: A tool used to solve large problems (requiring high computing power) in business, engineering and science [5]. A High Performance Computing (HPC) system contains different types of computing resources to solve different types of problems, and access to these resources is controlled by a batch system or scheduler [5] [7]. HPC systems share resources, can access different resources remotely, and execute workloads or tasks through the scheduling of parallel resources.
• IaaS, PaaS, SaaS: There are different types of web services that can be accessed via the Internet: SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) [6]. SaaS offers software functionality as a service, without maintenance or initial cost, at high quality; an example of SaaS is Gmail. PaaS offers a framework where users can deploy their applications with the required scalability; an example of PaaS is Microsoft Azure. IaaS offers infrastructure resources such as network, memory, storage and processors to execute workloads or tasks in a cost- and time-optimized manner; an example of IaaS is Amazon EC2.
• Cloud Computing: Cloud services are generally denoted XaaS, where X = {I, P, S, …}, and cloud computing is the practice of using remote resources over the Internet to execute user tasks (processing, management and storage of data) [6]. Cloud computing enables sharing of resources to reduce execution cost and increase availability of service. There are four deployment models of cloud computing: public, private, hybrid and community. Quality of Service (QoS) parameters such as reliability, security and energy efficiency are important for delivering an efficient cloud service.
• Fog Computing: A recent architecture that performs a significant amount of storage and computation on end devices or fog nodes, with the Internet used to establish communication among these devices [151]. Fog computing comprises a data plane and a control plane [6]. The data plane provides services at the edge of the network to reduce latency and increase QoS, while the control plane is part of the router and decides network topology [8]. Further, fog computing supports Internet of Things (IoT) devices such as mobile phones, sensors and health monitoring devices.
• Internet of Things: IoT devices are networked devices such as actuators, software, home appliances and sensors, with Internet connectivity used to exchange data among them [8]. There are a number of IoT applications in different fields such as agriculture, healthcare, weather forecasting, transportation, smart homes and industrial robotics [152] [153].
• Edge Computing: A distributed computing paradigm that performs computation on distributed edge devices and enables data collection and communication over the network [6]. Further, edge computing processes large volumes of data at edge devices instead of the cloud server, which improves QoS and reduces latency and transmission cost [8] [154] [155]. Time-sensitive applications benefit most from edge computing, but it needs a continuous Internet connection to perform dedicated functions within a given time.
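The transmission-cost benefit of edge computing described above can be sketched as follows: raw sensor readings are aggregated at the edge device so that only a summary travels to the cloud. The window size and data here are illustrative assumptions.

```python
# Aggregate raw readings at the edge; uplink only the per-window means.
def edge_aggregate(readings, window=10):
    """Replace each window of raw readings with its mean before uplink."""
    means = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        means.append(sum(chunk) / len(chunk))
    return means

raw = list(range(100))        # 100 raw samples captured at the edge device
uplink = edge_aggregate(raw)  # only 10 values are sent to the cloud
print(len(raw), len(uplink))  # 100 10 -> a 10x reduction in transmissions
```

Real deployments choose the aggregation (filtering, compression, event detection) per application, but the shape of the saving is the same: bandwidth to the cloud scales with the summary, not the raw stream.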
3. Triumvirate: IoT + AI + Blockchain
Cloud computing is becoming an intelligent computing platform with the emergence of innovative technologies and paradigms such as the Internet of Things, Blockchain and Artificial Intelligence.
3.1 Internet of Things (IoT)
The modern Internet integrates objects known as Things, equipped with sensing, actuating and networking
capabilities with dynamic monitoring and control services. Such devices are pervasive in modern life and can be
found in homes, public transport, motorways and vehicles [22]. As such, IoT applications are able to operate
across heterogeneous domains and enable rich analyses and management of complex interactions [150]. Thus,
IoT devices and services are able to address challenges in a wide range of application domains, including e-health,
infrastructure, building management systems, manufacturing and transport [23] [24] [25].
The Internet of Things possesses several characteristics central to its operation: (I) systems are often highly dynamic, and network membership must cope with volatility, where a device may appear and reappear across several networks [24]; (II) devices are highly heterogeneous in terms of both computing performance and functional capabilities, so systems must cope with limited processing, memory and persistent storage [26]; and (III) systems are managed and controlled by multiple stakeholders, requiring federated mechanisms for secure management of collected IoT data [25].
Historically, IoT applications have offloaded processing and persistent storage to cloud services; however, as the number of 'Things' grows, these services fail to support the real-time demands of IoT devices [24], [27]. This is because such systems operate in physical environments, across large geographic ranges, and as such require low response latencies and have high-density data ingestion requirements and bandwidths [46].
Fog/Edge computing extends cloud system boundaries by decentralising resource orchestration from datacentres to edge networks [25], organised as hierarchical networks of fog nodes or cloudlets [28] that provide deployment of ingestion, processing and management services. Geographic locality allows lower response latencies and increases ingestion bandwidth by horizontally scaling resources, whilst consuming less energy and enabling resource mobility compared to cloud services. These characteristics enable IoT applications to scale in terms of both logical scale and geographic range whilst providing real-time response latencies; as such, Fog/Edge computing can be considered a future architecture for IoT applications [23].
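The orchestration choice Fog/Edge computing introduces can be sketched as a placement decision: offload a task as far from the device as its latency bound allows, keeping only the tightest-deadline work at the edge. The tier round-trip times below are illustrative assumptions, not measurements.

```python
# Pick the execution tier for a task: offload as far from the device
# as its latency bound allows. Round-trip times are illustrative.
TIERS = [("edge", 5), ("fog", 20), ("cloud", 100)]  # (tier, round-trip ms)

def place(bound_ms):
    """Return the farthest tier whose round-trip meets the latency bound."""
    for name, rtt in reversed(TIERS):  # try cloud first, then fog, then edge
        if rtt <= bound_ms:
            return name
    return None  # no tier can satisfy the bound

print(place(150))  # cloud: a loose bound lets work go to the datacentre
print(place(50))   # fog: too tight for the cloud round-trip
print(place(10))   # edge: only local processing meets a real-time bound
```

Real orchestrators also weigh energy, bandwidth and node capacity, but latency-bound placement is the core trade-off the hierarchy exists to serve.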
Smart e-health applications are able to monitor patient data in real time by collecting data from implantable and wearable devices forming personal area networks [61]. Smart gateways collect and perform local processing of the data collected from these devices, including noise filtering for medical devices, data compression and fusion, and analyses allowing detection of dangerous trends in a patient's health, whilst long-term trends can be analysed at the cloud layer [22], [29]. Furthermore, fog-enabled IoT systems are adaptable and can change their behaviour according to state determined by collected sensor data. For instance, a smart gateway collecting samples from a pacemaker can increase its sampling rate prior to a heart attack, detected via pre-processing at the fog layer [22], [25], [29].
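The adaptive pacemaker-gateway behaviour described above can be sketched as a simple rate controller at the fog layer; the heart-rate threshold and sampling rates are illustrative assumptions, not clinical values.

```python
# Fog-layer rate controller: raise the sampling rate when recent
# readings cross a danger threshold. Threshold/rates are illustrative.
NORMAL_HZ, ALERT_HZ, BPM_THRESHOLD = 1, 10, 120

def next_sample_rate(recent_bpm):
    """Return the sampling rate (Hz) given recent heart-rate readings."""
    danger = max(recent_bpm) > BPM_THRESHOLD
    return ALERT_HZ if danger else NORMAL_HZ

print(next_sample_rate([72, 75, 78]))    # 1: steady readings, sample slowly
print(next_sample_rate([95, 110, 130]))  # 10: dangerous trend, sample fast
```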
The Internet of Energy (IoE) paradigm introduces the notion of smart grids and energy management [30], in which distributed networks of energy generators are capable of monitoring power consumption and generator or battery capacity, and of providing coarse-grained statistics about grid health. Whilst 'Smart-Meters' are able to monitor