
Showing papers on "Cloud computing published in 2015"


Journal ArticleDOI
01 Jan 2015
TL;DR: This paper presents the key building blocks of an SDN infrastructure using a bottom-up, layered approach, with an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications.
Abstract: The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. 
In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms—with a focus on aspects such as resiliency, scalability, performance, security, and dependability—as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.

3,589 citations


Journal ArticleDOI
TL;DR: The definition, characteristics, and classification of big data along with some discussions on cloud computing are introduced, and research challenges are investigated, with focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance.

2,141 citations


Journal ArticleDOI
TL;DR: A general probable 5G cellular network architecture is proposed, which shows that D2D, small cell access points, network cloud, and the Internet of Things can be a part of 5G Cellular network architecture.
Abstract: In the near future, i.e., beyond 4G, some of the prime objectives or demands that need to be addressed are increased capacity, improved data rate, decreased latency, and better quality of service. To meet these demands, drastic improvements need to be made in cellular network architecture. This paper presents the results of a detailed survey on the fifth generation (5G) cellular network architecture and some of the key emerging technologies that are helpful in improving the architecture and meeting the demands of users. In this detailed survey, the prime focus is on the 5G cellular network architecture, massive multiple input multiple output technology, and device-to-device communication (D2D). Along with this, some of the emerging technologies that are addressed in this paper include interference management, spectrum sharing with cognitive radio, ultra-dense networks, multi-radio access technology association, full duplex radios, millimeter wave solutions for 5G cellular networks, and cloud technologies for 5G radio access networks and software defined networks. In this paper, a general probable 5G cellular network architecture is proposed, which shows that D2D, small cell access points, network cloud, and the Internet of Things can be a part of 5G cellular network architecture. A detailed survey is included regarding current research projects being conducted in different countries by research groups and institutions that are working on 5G technologies.

1,899 citations


Journal ArticleDOI
TL;DR: This paper surveys the state-of-the-art literature on C-RAN and can serve as a starting point for anyone willing to understand C-RAN architecture and advance research on C-RAN.
Abstract: Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges the operators face while trying to support growing end-user's needs. The main idea behind C-RAN is to pool the Baseband Units (BBUs) from multiple base stations into centralized BBU Pool for statistical multiplexing gain, while shifting the burden to the high-speed wireline transmission of In-phase and Quadrature (IQ) data. C-RAN enables energy efficient network operation and possible cost savings on baseband resources. Furthermore, it improves network capacity by performing load balancing and cooperative processing of signals originating from several base stations. This paper surveys the state-of-the-art literature on C-RAN. It can serve as a starting point for anyone willing to understand C-RAN architecture and advance the research on C-RAN.

1,516 citations


Posted Content
TL;DR: This paper designs a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics.
Abstract: Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases.
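The finite improvement property mentioned in the abstract is what makes a simple best-response loop converge. As a hedged illustration (a toy congestion model with hypothetical costs, not the paper's system model), the sketch below lets each user repeatedly switch between local execution and one of several shared channels until no one can lower their own cost, i.e., until a Nash equilibrium is reached:

```python
# Toy multi-user offloading game: each user either computes locally at a
# fixed cost or offloads over one of `channels` shared wireless channels,
# where interference makes the offloading cost grow with the number of
# users sharing the channel. All costs are hypothetical.

def cost(i, choice, choices, local, base):
    if choice == 0:                          # 0 = local execution
        return local[i]
    sharers = sum(1 for c in choices if c == choice)
    return base[i] * sharers                 # congestion-dependent cost

def best_response_dynamics(local, base, channels, max_rounds=1000):
    n = len(local)
    choices = [0] * n                        # everyone starts local
    for _ in range(max_rounds):
        improved = False
        for i in range(n):
            def c_of(a, i=i):
                trial = choices.copy()
                trial[i] = a
                return cost(i, a, trial, local, base)
            best = min(range(channels + 1), key=c_of)
            if c_of(best) < c_of(choices[i]) - 1e-12:
                choices[i] = best
                improved = True
        if not improved:                     # no profitable unilateral
            return choices                   # deviation: Nash equilibrium
    raise RuntimeError("no equilibrium reached")

local = [5.0, 4.0, 6.0, 3.0]                 # hypothetical local costs
base = [2.0, 2.5, 1.5, 3.0]                  # hypothetical per-sharer costs
eq = best_response_dynamics(local, base, channels=2)
```

Because this toy model is a congestion game, every best-response step strictly decreases a potential function, so the loop must terminate; that is the finite improvement property the abstract refers to.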

1,272 citations


Proceedings ArticleDOI
21 Jun 2015
TL;DR: The definition of fog computing and similar concepts are discussed, representative application scenarios are introduced, and various aspects of issues the authors may encounter when designing and implementing fog computing systems are identified.
Abstract: Despite the increasing usage of cloud computing, some issues remain unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support, and lack of location-awareness. Fog computing can address these problems by providing elastic resources and services to end users at the edge of the network, while cloud computing is more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various aspects of issues we may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as directions of potential future work, in related techniques that need to be considered in the context of fog computing.

1,217 citations


Proceedings ArticleDOI
29 Mar 2015
TL;DR: This paper explores the performance of traditional virtual machine (VM) deployments, and contrast them with the use of Linux containers, using KVM as a representative hypervisor and Docker as a container manager.
Abstract: Cloud computing makes extensive use of virtual machines because they permit workloads to be isolated from one another and for the resource usage to be somewhat controlled. In this paper, we explore the performance of traditional virtual machine (VM) deployments, and contrast them with the use of Linux containers. We use KVM as a representative hypervisor and Docker as a container manager. Our results show that containers result in equal or better performance than VMs in almost all cases. Both VMs and containers require tuning to support I/O-intensive applications. We also discuss the implications of our performance results for future cloud architectures.

1,065 citations


Journal ArticleDOI
TL;DR: A generally accepted definition for SDN is presented, including decoupling the control plane from the data plane and providing programmability for network application development, and its three-layer architecture is dwelled on, including an infrastructure layer, a control layer, and an application layer.
Abstract: Emerging mega-trends (e.g., mobile, social, cloud, and big data) in information and communication technologies (ICT) are commanding new challenges to future Internet, for which ubiquitous accessibility, high bandwidth, and dynamic management are crucial. However, traditional approaches based on manual configuration of proprietary devices are cumbersome and error-prone, and they cannot fully utilize the capability of physical network infrastructure. Recently, software-defined networking (SDN) has been touted as one of the most promising solutions for future Internet. SDN is characterized by its two distinguished features, including decoupling the control plane from the data plane and providing programmability for network application development. As a result, SDN is positioned to provide more efficient configuration, better performance, and higher flexibility to accommodate innovative network designs. This paper surveys latest developments in this active research area of SDN. We first present a generally accepted definition for SDN with the aforementioned two characteristic features and potential benefits of SDN. We then dwell on its three-layer architecture, including an infrastructure layer, a control layer, and an application layer, and substantiate each layer with existing research efforts and its related research areas. We follow that with an overview of the de facto SDN implementation (i.e., OpenFlow). Finally, we conclude this survey paper with some suggested open research challenges.

894 citations


Journal ArticleDOI
30 Sep 2015
TL;DR: This position paper argues that a new shift is necessary in computing, taking control of computing applications, data, and services away from some central nodes to the other logical extreme of the Internet; the authors refer to this vision of human-centered, edge-device-based computing as Edge-centric Computing.
Abstract: In many aspects of human activity, there has been a continuous struggle between the forces of centralization and decentralization. Computing exhibits the same phenomenon; we have gone from mainframes to PCs and local networks in the past, and over the last decade we have seen a centralization and consolidation of services and applications in data centers and clouds. We position that a new shift is necessary. Technological advances such as powerful dedicated connection boxes deployed in most homes, high-capacity mobile end-user devices, and powerful wireless networks, along with growing user concerns about trust, privacy, and autonomy, require taking control of computing applications, data, and services away from some central nodes (the "core") to the other logical extreme (the "edge") of the Internet. We also position that this development can help blur the boundary between man and machine, and embrace social computing in which humans are part of the computation and decision-making loop, resulting in a human-centered system design. We refer to this vision of human-centered, edge-device-based computing as Edge-centric Computing. We elaborate in this position paper on this vision and present the research challenges associated with its implementation.

844 citations


Journal ArticleDOI
TL;DR: This paper discusses approaches and environments for carrying out analytics on Clouds for Big Data applications, and identifies possible gaps in technology and provides recommendations for the research community on future directions on Cloud-supported Big Data computing and analytics solutions.

773 citations


Journal ArticleDOI
TL;DR: This survey considers robots and automation systems that rely on data or code from a network to support their operation, i.e., where not all sensing, computation, and memory is integrated into a standalone system.
Abstract: The Cloud infrastructure and its extensive set of Internet-accessible resources has potential to provide significant benefits to robots and automation systems. We consider robots and automation systems that rely on data or code from a network to support their operation, i.e., where not all sensing, computation, and memory is integrated into a standalone system. This survey is organized around four potential benefits of the Cloud: 1) Big Data: access to libraries of images, maps, trajectories, and descriptive data; 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning; 3) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes; and 4) Human Computation: use of crowdsourcing to tap human skills for analyzing images and video, classification, learning, and error recovery. The Cloud can also improve robots and automation systems by providing access to: a) datasets, publications, models, benchmarks, and simulation tools; b) open competitions for designs and systems; and c) open-source software. This survey includes over 150 references on results and open challenges. A website with new developments and updates is available at: http://goldberg.berkeley.edu/cloud-robotics/

Journal ArticleDOI
Xu Chen
TL;DR: This paper designs a decentralized computation offloading mechanism that can achieve a Nash equilibrium of the game, quantifies its efficiency ratio over the centralized optimal solution, and demonstrates that the proposed mechanism achieves efficient computation offloading performance and scales well as the system size increases.
Abstract: Mobile cloud computing is envisioned as a promising approach to augment computation capabilities of mobile devices for emerging resource-hungry mobile applications. In this paper, we propose a game theoretic approach for achieving efficient computation offloading for mobile cloud computing. We formulate the decentralized computation offloading decision making problem among mobile device users as a decentralized computation offloading game. We analyze the structural property of the game and show that the game always admits a Nash equilibrium. We then design a decentralized computation offloading mechanism that can achieve a Nash equilibrium of the game and quantify its efficiency ratio over the centralized optimal solution. Numerical results demonstrate that the proposed mechanism can achieve efficient computation offloading performance and scale well as the system size increases.

Journal ArticleDOI
22 Jun 2015
TL;DR: In this article, the authors considered a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server and formulated the offloading problem as the joint optimization of the radio resources and the computational resources to minimize the overall users' energy consumption, while meeting latency constraints.
Abstract: Migrating computationally intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider a MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources—the transmit precoding matrices of the MUs—and the computational resources—the CPU cycles/second assigned by the cloud to each MU—in order to minimize the overall users’ energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.
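The successive convex approximation idea mentioned in the abstract can be illustrated on a deliberately tiny problem. The sketch below is not the paper's algorithm; the objective is an arbitrary difference-of-convex function chosen so the surrogate minimizer has a closed form. It minimizes f(x) = x^4 - 2x^2 by repeatedly minimizing a convex surrogate obtained by linearizing the concave part at the current iterate:

```python
import math

# f(x) = x**4 - 2*x**2 is nonconvex but splits into a convex part (x**4)
# plus a concave part (-2*x**2). Each step replaces the concave part by
# its tangent at the current iterate x_k, giving the convex surrogate
#   g(x; x_k) = x**4 - 2*x_k**2 - 4*x_k*(x - x_k),
# which upper-bounds f and touches it at x_k. Setting g'(x) = 0 gives
# 4*x**3 = 4*x_k, i.e. the closed-form update x <- cbrt(x_k).

def f(x):
    return x**4 - 2 * x**2

def sca_minimize(x0, iters=60):
    x, trace = x0, [f(x0)]
    for _ in range(iters):
        x = math.copysign(abs(x) ** (1.0 / 3.0), x)  # argmin of surrogate
        trace.append(f(x))
    return x, trace

x_star, trace = sca_minimize(2.0)
# x_star approaches the stationary point x = 1 (where f'(x) = 4x^3 - 4x = 0),
# and because each surrogate majorizes f, the objective never increases.
```

The paper applies this linearize-and-minimize logic in far more general form, jointly over precoding matrices and CPU allocations, with the same guarantee of monotone convergence to a stationary point.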

Journal ArticleDOI
TL;DR: The security issues that arise due to the very nature of cloud computing are detailed and the recent solutions presented in the literature to counter the security issues are presented.

Journal ArticleDOI
TL;DR: The notion of shielded execution is introduced, which protects the confidentiality and integrity of a program and its data from the platform on which it runs (i.e., the cloud operator’s OS, VM, and firmware).
Abstract: Today’s cloud computing infrastructure requires substantial trust. Cloud users rely on both the provider’s staff and its globally distributed software/hardware platform not to expose any of their private data. We introduce the notion of shielded execution, which protects the confidentiality and integrity of a program and its data from the platform on which it runs (i.e., the cloud operator’s OS, VM, and firmware). Our prototype, Haven, is the first system to achieve shielded execution of unmodified legacy applications, including SQL Server and Apache, on a commodity OS (Windows) and commodity hardware. Haven leverages the hardware protection of Intel SGX to defend against privileged code and physical attacks such as memory probes, and also addresses the dual challenges of executing unmodified legacy binaries and protecting them from a malicious host. This work motivated recent changes in the SGX specification.

Proceedings ArticleDOI
17 May 2015
TL;DR: VC3 is the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results.
Abstract: We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system, and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop: VC3's average runtime overhead is negligible for its base security guarantees, 4.5% with write integrity and 8% with read/write integrity.

Proceedings ArticleDOI
12 Nov 2015
TL;DR: This paper discusses current definitions of fog computing and similar concepts, proposes a more comprehensive definition, analyzes the goals and challenges of fog computing platforms, and presents a platform design with several exemplar applications.
Abstract: Despite the broad utilization of cloud computing, some applications and services still cannot benefit from this popular computing paradigm due to inherent problems of cloud computing such as unacceptable latency, lack of mobility support and location-awareness. As a result, fog computing has emerged as a promising infrastructure to provide elastic resources at the edge of the network. In this paper, we have discussed current definitions of fog computing and similar concepts, and proposed a more comprehensive definition. We also analyzed the goals and challenges in the fog computing platform, and presented a platform design with several exemplar applications. We finally implemented and evaluated a prototype fog computing platform.


Journal ArticleDOI
TL;DR: This study integrates two information technology adoption models to improve the predictive power of the resulting model, which can be used as a guideline to ensure a positive outcome of cloud computing adoption in organizations.
Abstract: The purpose of this paper is to integrate the TAM model and TOE framework for cloud computing adoption at the organizational level. A conceptual framework was developed using technological and organizational variables of the TOE framework as external variables of the TAM model, while environmental variables were proposed to have a direct impact on cloud computing adoption. A questionnaire was used to collect the data from 280 companies in the IT, manufacturing, and finance sectors in India. The data were analyzed using exploratory and confirmatory factor analyses. Further, structural equation modeling was used to test the proposed model. The study identified relative advantage, compatibility, complexity, organizational readiness, top management commitment, and training and education as important variables affecting cloud computing adoption, with perceived ease of use (PEOU) and perceived usefulness (PU) as mediating variables. Also, competitive pressure and trading partner support were found to directly affect cloud computing adoption intentions. The model explained 62 percent of cloud computing adoption. The model can be used as a guideline to ensure a positive outcome of cloud computing adoption in organizations. It also provides relevant recommendations for achieving a conducive implementation environment for cloud computing adoption. This study integrates two information technology adoption models to improve the predictive power of the resulting model.

Book
21 May 2015
TL;DR: With DNA sequencing now getting cheaper more quickly than data storage or computation, the time may have come for genome informatics to migrate to the cloud.
Abstract: With DNA sequencing now getting cheaper more quickly than data storage or computation, the time may have come for genome informatics to migrate to the cloud.

Proceedings ArticleDOI
Nikko Strom
06 Sep 2015
TL;DR: It is shown empirically that the method can reduce the amount of communication by three orders of magnitude while training a typical DNN for acoustic modelling, enabling efficient scaling to more parallel GPU nodes than any other method the authors are aware of.
Abstract: We introduce a new method for scaling up distributed Stochastic Gradient Descent (SGD) training of Deep Neural Networks (DNN). The method solves the well-known communication bottleneck problem that arises for data-parallel SGD because compute nodes frequently need to synchronize a replica of the model. We solve it by purposefully controlling the rate of weight-update per individual weight, which is in contrast to the uniform update-rate customarily imposed by the size of a mini-batch. It is shown empirically that the method can reduce the amount of communication by three orders of magnitude while training a typical DNN for acoustic modelling. This reduction in communication bandwidth enables efficient scaling to more parallel GPU nodes than any other method that we are aware of, and it can be achieved with neither loss in convergence rate nor accuracy in the resulting DNN. Furthermore, the training can be performed on commodity cloud infrastructure and networking.
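A minimal sketch of the per-weight, threshold-based communication idea described above (the dimensions, step count, and gradient statistics are made up for illustration, not the paper's acoustic-model setup): each worker accumulates gradients in a local residual and transmits a weight index only when the residual crosses a threshold:

```python
import random

# Each worker keeps a per-weight residual of gradient mass it has not yet
# communicated. Only when a residual crosses the threshold tau is a
# (weight index, +/-tau) message emitted; the remainder stays local, so
# no gradient information is ever dropped, only delayed.

def compress_step(grad, residual, tau):
    messages = []                            # (index, +tau or -tau) pairs
    for i, g in enumerate(grad):
        residual[i] += g
        while residual[i] >= tau:
            messages.append((i, +tau))
            residual[i] -= tau
        while residual[i] <= -tau:
            messages.append((i, -tau))
            residual[i] += tau
    return messages

random.seed(0)
dim, tau, steps = 10_000, 1.0, 100
residual = [0.0] * dim
sent = 0
for _ in range(steps):
    grad = [random.gauss(0.0, 0.05) for _ in range(dim)]  # mostly tiny updates
    sent += len(compress_step(grad, residual, tau))
# `sent` ends up a small fraction of the steps * dim per-weight updates a
# naive data-parallel SGD implementation would synchronize every step.
```

The residual accumulation is what preserves convergence: every unit of gradient eventually reaches the shared model, just in coarse, infrequent chunks.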

Proceedings ArticleDOI
24 Aug 2015
TL;DR: A thorough study of the NFV location problem is performed, it is shown that it introduces a new type of optimization problems, and near optimal approximation algorithms guaranteeing a placement with theoretically proven performance are provided.
Abstract: Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.
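The placement problem above is closely related to facility location. As a hedged illustration (a textbook greedy heuristic for the simplified uncapacitated case with made-up distances, not the paper's approximation algorithm), the sketch below opens function locations one at a time while the sum of setup costs and client distances keeps decreasing:

```python
# Uncapacitated toy version of the placement problem: open virtual
# function instances at a subset of nodes to minimize
#   (setup cost of opened nodes)
#   + (each client's distance to its nearest open node).

def total_cost(open_nodes, dist, setup):
    assign = sum(min(dist[c][f] for f in open_nodes) for c in range(len(dist)))
    return assign + sum(setup[f] for f in open_nodes)

def greedy_placement(dist, setup):
    n_nodes = len(setup)
    # start with the single best node, then add nodes while cost improves
    open_nodes = {min(range(n_nodes),
                      key=lambda f: total_cost({f}, dist, setup))}
    while True:
        best, best_cost = None, total_cost(open_nodes, dist, setup)
        for f in set(range(n_nodes)) - open_nodes:
            c = total_cost(open_nodes | {f}, dist, setup)
            if c < best_cost:
                best, best_cost = f, c
        if best is None:
            return open_nodes
        open_nodes.add(best)

dist = [[1, 9, 9],      # dist[client][node], hypothetical hop counts
        [2, 8, 9],
        [9, 1, 9],
        [9, 2, 8]]
setup = [3, 3, 100]     # node 2 is prohibitively expensive to open
placement = greedy_placement(dist, setup)
```

On this toy instance the greedy opens nodes 0 and 1 (total cost 12) and never opens the expensive node 2; the paper's contribution is to give such placements provable bi-criteria guarantees while also respecting capacity constraints.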

Journal ArticleDOI
TL;DR: This paper outlines a conceptual framework for cloud resource management and uses it to structure the state-of-the-art review, and identifies five challenges for future investigation that relate to providing predictable performance for cloud-hosted applications.
Abstract: Resource management in a cloud environment is a hard problem, due to: the scale of modern data centers; the heterogeneity of resource types and their interdependencies; the variability and unpredictability of the load; as well as the range of objectives of the different actors in a cloud ecosystem. Consequently, both academia and industry began significant research efforts in this area. In this paper, we survey the recent literature, covering 250+ publications, and highlighting key results. We outline a conceptual framework for cloud resource management and use it to structure the state-of-the-art review. Based on our analysis, we identify five challenges for future investigation. These relate to: providing predictable performance for cloud-hosted applications; achieving global manageability for cloud systems; engineering scalable resource management systems; understanding economic behavior and cloud pricing; and developing solutions for the mobile cloud paradigm .

Journal ArticleDOI
TL;DR: An H-CRAN is presented in this article as the advanced wireless access network paradigm, where cloud computing is used to fulfill the centralized large-scale cooperative processing for suppressing co-channel interferences.
Abstract: Compared with fourth generation cellular systems, fifth generation wireless communication systems are anticipated to provide spectral and energy efficiency growth by a factor of at least 10, and the area throughput growth by a factor of at least 25. To achieve these goals, an H-CRAN is presented in this article as the advanced wireless access network paradigm, where cloud computing is used to fulfill the centralized large-scale cooperative processing for suppressing co-channel interferences. The state-of-the-art research achievements in the areas of system architecture and key technologies for H-CRANs are surveyed. Particularly, Node C as a new communication entity is defined to converge the existing ancestral base stations and act as the base band unit pool to manage all accessed remote radio heads. Also, the software-defined H-CRAN system architecture is presented to be compatible with software-defined networks. The principles, performance gains, and open issues of key technologies, including adaptive large-scale cooperative spatial signal processing, cooperative radio resource management, network function virtualization, and self-organization, are summarized. The major challenges in terms of fronthaul constrained resource allocation optimization and energy harvesting that may affect the promotion of H-CRANs are discussed as well.

Journal ArticleDOI
01 Oct 2015
TL;DR: The realization of a cloud workload prediction module for SaaS providers based on the autoregressive integrated moving average (ARIMA) model is presented and its accuracy of future workload prediction is evaluated using real traces of requests to Web servers.
Abstract: As companies shift from desktop applications to cloud-based software as a service (SaaS) applications deployed on public clouds, the competition for end-users by cloud providers offering similar services grows. In order to survive in such a competitive market, cloud-based companies must achieve good quality of service (QoS) for their users, or risk losing their customers to competitors. However, meeting the QoS with a cost-effective amount of resources is challenging because workloads experience variation over time. This problem can be solved with proactive dynamic provisioning of resources, which can estimate the future need of applications in terms of resources and allocate them in advance, releasing them once they are not required. In this paper, we present the realization of a cloud workload prediction module for SaaS providers based on the autoregressive integrated moving average (ARIMA) model. We introduce the prediction based on the ARIMA model and evaluate its accuracy of future workload prediction using real traces of requests to web servers. We also evaluate the impact of the achieved accuracy in terms of efficiency in resource utilization and QoS. Simulation results show that our model is able to achieve an average accuracy of up to 91 percent, which leads to efficiency in resource utilization with minimal impact on the QoS.
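A full ARIMA fit requires a statistics library; as a minimal, hedged sketch of the prediction idea above, the code below fits the simplest member of the family, an ARIMA(1,0,0) (i.e., AR(1)) model, to a synthetic request trace by least squares and produces a one-step-ahead forecast:

```python
import random

# ARIMA(1,0,0), i.e. AR(1): y_t - mu = phi * (y_{t-1} - mu) + noise.
# fit_ar1 estimates mu and phi by least squares on the demeaned series;
# forecast iterates the recursion h steps ahead. The request trace is
# synthetic (true mu = 100, phi = 0.8); a production system would use a
# full ARIMA implementation with order selection, differencing, MA terms.

def fit_ar1(series):
    mu = sum(series) / len(series)
    z = [y - mu for y in series]
    num = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    den = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    return mu, num / den                     # mean and AR coefficient phi

def forecast(last, mu, phi, steps=1):
    y = last
    for _ in range(steps):
        y = mu + phi * (y - mu)              # one-step-ahead recursion
    return y

random.seed(1)
trace, y = [], 100.0
for _ in range(2000):                        # synthetic requests/minute
    y = 100.0 + 0.8 * (y - 100.0) + random.gauss(0.0, 5.0)
    trace.append(y)
mu, phi = fit_ar1(trace)
next_load = forecast(trace[-1], mu, phi)     # input to the provisioner
```

A provisioner would convert `next_load` into a VM count (plus headroom) and release resources when the forecast falls, which is the proactive dynamic provisioning the abstract describes.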

Book ChapterDOI
10 Aug 2015
TL;DR: Fog computing is a promising computing paradigm that extends cloud computing to the edge of networks; similar to cloud computing but with distinct characteristics, it faces new security and privacy challenges besides those inherited from cloud computing.
Abstract: Fog computing is a promising computing paradigm that extends cloud computing to the edge of the network. Similar to cloud computing but with distinct characteristics, fog computing faces new security and privacy challenges beyond those inherited from cloud computing. In this paper, we briefly survey these challenges and their corresponding solutions.

Journal ArticleDOI
TL;DR: This article comprehensively surveys recent advances in fronthaul-constrained C-RANs, covering system architectures and key techniques such as compression and quantization, large-scale coordinated processing and clustering, and resource allocation optimization.
Abstract: As a promising paradigm for fifth generation wireless communication systems, cloud radio access networks (C-RANs) have been shown to reduce both capital and operating expenditures, as well as to provide high spectral efficiency (SE) and energy efficiency (EE). The fronthaul in such networks, defined as the transmission link between the baseband unit and the remote radio head, requires high capacity but is often capacity-constrained. This article comprehensively surveys recent advances in fronthaul-constrained C-RANs, including system architectures and key techniques. In particular, major issues relating to the impact of the constrained fronthaul on SE/EE and quality of service for users, including compression and quantization, large-scale coordinated processing and clustering, and resource allocation optimization, are discussed together with corresponding potential solutions. Open issues in terms of software-defined networking, network function virtualization, and partial centralization are also identified.
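The compression-and-quantization issue raised above can be illustrated with a uniform scalar quantizer applied to baseband samples before they cross the fronthaul: fewer bits per sample means lower fronthaul load but higher reconstruction error. The bit width and clipping range below are illustrative assumptions, not values from the article.

```python
import numpy as np

def quantize(samples, bits, max_amp=1.0):
    """Map samples in [-max_amp, max_amp) to integer codewords of 'bits' bits."""
    levels = 2 ** bits
    step = 2 * max_amp / levels
    clipped = np.clip(samples, -max_amp, max_amp - step)  # keep top index in range
    return np.floor((clipped + max_amp) / step).astype(int)

def dequantize(idx, bits, max_amp=1.0):
    """Reconstruct at bin centers; error is bounded by step/2 inside the range."""
    step = 2 * max_amp / 2 ** bits
    return -max_amp + (idx + 0.5) * step
```

Halving `bits` halves the fronthaul bit rate per sample while doubling the quantization step, which is exactly the SE/EE-versus-fronthaul-capacity trade-off the survey discusses.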

Journal ArticleDOI
TL;DR: A novel logical structure of C-RAN is presented, consisting of a physical plane, a control plane, and a service plane, which facilitates the adoption of new communication and computing techniques.
Abstract: In the era of the mobile Internet, mobile operators face ever-increasing capital expenditures and operating expenses against much slower income growth. Cloud Radio Access Network (C-RAN) is expected to be a candidate next-generation access network technique that can resolve this dilemma. In this article, building on a general survey of C-RAN, we present a novel logical structure of C-RAN that consists of a physical plane, a control plane, and a service plane. Compared to the traditional architecture, the proposed C-RAN architecture emphasizes the notion of a service cloud and service-oriented resource scheduling and management, thus facilitating the adoption of new communication and computing techniques. Exploiting the extensive computation resources offered by the cloud platform, a coordinated user scheduling algorithm and a parallel optimum precoding scheme are proposed, which achieve better performance. The proposed scheme opens the door to designing new algorithms that match the C-RAN architecture well, rather than merely migrating existing algorithms from the traditional architecture to C-RAN.

Journal ArticleDOI
TL;DR: Through an analysis of the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling, then maps out the scheduling problem and its solutions, and finally presents a systematic, comprehensive survey of state-of-the-art approaches.
Abstract: As a disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users on-demand access to resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of applying Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then maps out the landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are identified, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is still in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
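The EC-based scheduling trend the survey describes can be illustrated with a minimal genetic algorithm that maps tasks to virtual machines to minimize makespan. This is a generic sketch of the technique class, not any specific algorithm from the surveyed literature; the population size, mutation rate, and fitness function are illustrative assumptions.

```python
import random

def makespan(assign, task_len, vm_speed):
    """Completion time of the most loaded VM under a task-to-VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, v in enumerate(assign):
        load[v] += task_len[t] / vm_speed[v]
    return max(load)

def ga_schedule(task_len, vm_speed, pop=30, gens=100, seed=0):
    """Evolve task-to-VM assignments: elitist selection, one-point crossover, mutation."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    population = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: makespan(a, task_len, vm_speed))
        elite = population[:pop // 2]           # keep the best half
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:              # mutation: reassign one task
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        population = elite + children
    return min(population, key=lambda a: makespan(a, task_len, vm_speed))
```

Swapping in a different fitness function (e.g. energy cost or a weighted multi-objective score) changes the scheduling objective without touching the evolutionary loop, which is why EC methods are attractive for the varied objectives the survey lists.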

Proceedings ArticleDOI
28 Dec 2015
TL;DR: Electrocardiogram feature extraction is chosen as the case study because it plays an important role in diagnosing many cardiac diseases; fog computing helps achieve more than 90% bandwidth efficiency and offers low-latency, real-time response at the edge of the network.
Abstract: Internet of Things (IoT) technology provides a competent and structured approach to improving human health and wellbeing. One feasible way to offer IoT-based healthcare services is to monitor human health in real time using ubiquitous health monitoring systems, which acquire bio-signals from sensor nodes and send the data to a gateway via a particular wireless communication protocol. The real-time data is then transmitted to a remote cloud server for processing, visualization, and diagnosis. In this paper, we enhance such a health monitoring system by exploiting the concept of fog computing at smart gateways, providing advanced techniques and services such as embedded data mining, distributed storage, and notification services at the edge of the network. We choose electrocardiogram (ECG) feature extraction as the case study because it plays an important role in diagnosing many cardiac diseases. ECG signals are analyzed in the smart gateways, with features including heart rate, the P wave, and the T wave extracted via a flexible template based on a lightweight wavelet transform mechanism. Our experimental results reveal that fog computing helps achieve more than 90% bandwidth efficiency and offers low-latency, real-time response at the edge of the network.
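The gateway-side processing described above (extracting heart rate locally instead of shipping raw samples to the cloud) can be sketched as follows. The paper uses a wavelet-based template for R-peak detection; this sketch substitutes a simple amplitude-threshold detector with a refractory period, and the sampling rate, threshold, and synthetic signal are illustrative assumptions only.

```python
def detect_r_peaks(signal, fs, threshold):
    """Indices of local maxima above threshold, at least 0.25 s apart."""
    min_gap = int(0.25 * fs)  # refractory period: no two beats closer than this
    peaks, last = [], -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1] and i - last >= min_gap):
            peaks.append(i)
            last = i
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean heart rate from R-R intervals, given peak indices and sampling rate."""
    if len(peaks) < 2:
        return 0.0
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))
```

Sending only the heart-rate value (and detected features) upstream instead of the raw waveform is what yields the bandwidth savings the paper reports.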