
Showing papers in "ACM Computing Surveys in 2014"


Journal ArticleDOI
TL;DR: The survey covers the different facets of concept drift in an integrated way to reflect on the existing scattered state of the art and aims at providing a comprehensive introduction to concept drift adaptation for researchers, industry analysts, and practitioners.
Abstract: Concept drift primarily refers to an online supervised learning scenario when the relation between the input data and the target variable changes over time. Assuming a general knowledge of supervised learning in this article, we characterize adaptive learning processes; categorize existing strategies for handling concept drift; overview the most representative, distinct, and popular techniques and algorithms; discuss evaluation methodology of adaptive algorithms; and present a set of illustrative applications. The survey covers the different facets of concept drift in an integrated way to reflect on the existing scattered state of the art. Thus, it aims at providing a comprehensive introduction to concept drift adaptation for researchers, industry analysts, and practitioners.
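One family of strategies the survey categorizes is drift detection via error monitoring: flag drift when a learner's recent error rate diverges from its long-run error rate. The sketch below is a deliberately minimal illustration of that idea, not any specific detector from the survey; the class name, window size, and margin are our own choices.

```python
from collections import deque

class WindowDriftMonitor:
    """Toy drift detector: suspect drift when the error rate over a
    recent window exceeds the long-run error rate by a fixed margin.
    (Illustrative sketch only; thresholds are arbitrary.)"""

    def __init__(self, window=20, margin=0.2):
        self.recent = deque(maxlen=window)  # last `window` outcomes
        self.errors = 0                     # total errors seen
        self.total = 0                      # total predictions seen
        self.margin = margin

    def update(self, correct):
        """Record one prediction outcome; return True if drift is suspected."""
        err = 0 if correct else 1
        self.recent.append(err)
        self.errors += err
        self.total += 1
        long_run = self.errors / self.total
        recent_rate = sum(self.recent) / len(self.recent)
        return recent_rate > long_run + self.margin
```

A stream of correct predictions keeps the monitor quiet; a sudden run of mistakes (the relation between inputs and target has changed) pushes the windowed rate past the long-run rate and raises the flag.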

2,374 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive hands-on introduction for newcomers to the field of human activity recognition using on-body inertial sensors and describe the concept of an Activity Recognition Chain (ARC) as a general-purpose framework for designing and evaluating activity recognition systems.
Abstract: The last 20 years have seen ever-increasing research activity in the field of human activity recognition. With activity recognition having considerably matured, so has the number of challenges in designing, implementing, and evaluating activity recognition systems. This tutorial aims to provide a comprehensive hands-on introduction for newcomers to the field of human activity recognition. It specifically focuses on activity recognition using on-body inertial sensors. We first discuss the key research challenges that human activity recognition shares with general pattern recognition and identify those challenges that are specific to human activity recognition. We then describe the concept of an Activity Recognition Chain (ARC) as a general-purpose framework for designing and evaluating activity recognition systems. We detail each component of the framework, provide references to related research, and introduce the best practice methods developed by the activity recognition research community. We conclude with the educational example problem of recognizing different hand gestures from inertial sensors attached to the upper and lower arm. We illustrate how each component of this framework can be implemented for this specific activity recognition problem and demonstrate how different implementations compare and how they impact overall recognition performance.

1,214 citations


Journal ArticleDOI
TL;DR: A comprehensive introduction to a large body of research, more than 200 key references, is provided, with the aim of supporting the further development of recommender systems exploiting information beyond the U-I matrix.
Abstract: Over the past two decades, a large amount of research effort has been devoted to developing algorithms that generate recommendations. The resulting research progress has established the importance of the user-item (U-I) matrix, which encodes the individual preferences of users for items in a collection, for recommender systems. The U-I matrix provides the basis for collaborative filtering (CF) techniques, the dominant framework for recommender systems. Currently, new recommendation scenarios are emerging that offer promising new information that goes beyond the U-I matrix. This information can be divided into two categories related to its source: rich side information concerning users and items, and interaction information associated with the interplay of users and items. In this survey, we summarize and analyze recommendation scenarios involving information sources and the CF algorithms that have been recently developed to address them. We provide a comprehensive introduction to a large body of research, more than 200 key references, with the aim of supporting the further development of recommender systems exploiting information beyond the U-I matrix. On the basis of this material, we identify and discuss what we see as the central challenges lying ahead for recommender system technology, both in terms of extensions of existing techniques as well as of the integration of techniques and technologies drawn from other research areas.
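To ground the U-I matrix idea: classic user-based collaborative filtering predicts a user's rating of an item from the ratings of similar users, with similarity computed between rows of the matrix. A minimal sketch (our own function names; 0 marks "unrated"):

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(ui, user, item):
    """User-based CF: average other users' ratings of `item`,
    weighted by their cosine similarity to `user`.
    `ui` is the user-item matrix; 0 means unrated."""
    sims = [(cosine(ui[user], ui[other]), ui[other][item])
            for other in range(len(ui))
            if other != user and ui[other][item] > 0]
    num = sum(s * r for s, r in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den if den else 0.0
```

The "beyond the U-I matrix" research the survey covers extends exactly this setup, e.g. by blending side information about users and items into the similarity or the prediction.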

777 citations


Journal ArticleDOI
TL;DR: This article presents an overview of existing map processing techniques, bringing together the past and current research efforts in this interdisciplinary field, to characterize the advances that have been made, and to identify future research directions and opportunities.
Abstract: Maps depict natural and human-induced changes on earth at a fine resolution for large areas and over long periods of time. In addition, maps—especially historical maps—are often the only information source about the earth as surveyed using geodetic techniques. In order to preserve these unique documents, increasing numbers of digital map archives have been established, driven by advances in software and hardware technologies. Since the early 1980s, researchers from a variety of disciplines, including computer science and geography, have been working on computational methods for the extraction and recognition of geographic features from archived images of maps (digital map processing). The typical result from map processing is geographic information that can be used in spatial and spatiotemporal analyses in a Geographic Information System environment, which benefits numerous research fields in the spatial, social, environmental, and health sciences. However, map processing literature is spread across a broad range of disciplines in which maps are included as a special type of image. This article presents an overview of existing map processing techniques, with the goal of bringing together the past and current research efforts in this interdisciplinary field, to characterize the advances that have been made, and to identify future research directions and opportunities.

674 citations


Journal ArticleDOI
TL;DR: The approach classifies modern CPS Intrusion Detection System techniques based on two design dimensions, detection technique and audit material, and provides insight on the effectiveness of IDS techniques as they apply to CPSs.
Abstract: Pervasive healthcare systems, smart grids, and unmanned aircraft systems are examples of Cyber-Physical Systems (CPSs) that have become highly integrated in the modern world. As this integration deepens, the importance of securing these systems increases. In order to identify gaps and propose research directions in CPS intrusion detection research, we survey the literature of this area. Our approach is to classify modern CPS Intrusion Detection System (IDS) techniques based on two design dimensions: detection technique and audit material. We summarize advantages and drawbacks of each dimension’s options. We also summarize the most and least studied CPS IDS techniques in the literature and provide insight on the effectiveness of IDS techniques as they apply to CPSs. Finally, we identify gaps in CPS IDS research and suggest future research areas.

593 citations


Journal ArticleDOI
TL;DR: A classification of product-line analyses is proposed to enable systematic research and application in software-product-line engineering, and a research agenda is developed to guide future research on product-line analyses.
Abstract: Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses.

444 citations


Journal ArticleDOI
TL;DR: This survey initially discusses all the relevant aspects motivating cloud interoperability, then categorizes and identifies possible cloud interoperability scenarios and architectures, and discusses future directions and trends toward a holistic approach.
Abstract: A brief review of the Internet history reveals the fact that the Internet evolved after the formation of primarily independent networks. Similarly, interconnected clouds, also called Inter-cloud, can be viewed as a natural evolution of cloud computing. Recent studies show the benefits in utilizing multiple clouds and present attempts for the realization of an Inter-cloud or federated cloud environment. However, cloud vendors have not taken into account cloud interoperability issues, and each cloud comes with its own solution and interfaces for services. This survey initially discusses all the relevant aspects motivating cloud interoperability. Furthermore, it categorizes and identifies possible cloud interoperability scenarios and architectures. The spectrum of challenges and obstacles that the Inter-cloud realization faces is covered, a taxonomy of them is provided, and fitting enablers that tackle each challenge are identified. All these aspects require a comprehensive review of the state of the art, including ongoing projects and studies in the area. We conclude by discussing future directions and trends toward the holistic approach in this regard.

405 citations


Journal ArticleDOI
TL;DR: Techniques and solutions that aim to improve the energy efficiency of computing and network resources, and techniques that operate at a distributed system level are surveyed to foster research on schemes to make network and computing resources more efficient.
Abstract: The great amounts of energy consumed by large-scale computing and network systems, such as data centers and supercomputers, have been a major source of concern in a society increasingly reliant on information technology. Trying to tackle this issue, the research community and industry have proposed myriad techniques to curb the energy consumed by IT systems. This article surveys techniques and solutions that aim to improve the energy efficiency of computing and network resources. It discusses methods to evaluate and model the energy consumed by these resources, and describes techniques that operate at a distributed system level, trying to improve aspects such as resource allocation, scheduling, and network traffic management. This work aims to review the state of the art on energy efficiency and to foster research on schemes to make network and computing resources more efficient.

316 citations


Journal ArticleDOI
TL;DR: A survey of optimizations for stream processing, in a style similar to catalogs of design patterns or refactorings, to help future streaming system builders to stand on the shoulders of giants from not just their own community.
Abstract: Various research communities have independently arrived at stream processing as a programming model for efficient and parallel computing. These communities include digital signal processing, databases, operating systems, and complex event processing. Since each community faces applications with challenging performance requirements, each of them has developed some of the same optimizations, but often with conflicting terminology and unstated assumptions. This article presents a survey of optimizations for stream processing. It is aimed both at users who need to understand and guide the system’s optimizer and at implementers who need to make engineering tradeoffs. To consolidate terminology, this article is organized as a catalog, in a style similar to catalogs of design patterns or refactorings. To make assumptions explicit and help understand tradeoffs, each optimization is presented with its safety constraints (when does it preserve correctness?) and a profitability experiment (when does it improve performance?). We hope that this survey will help future streaming system builders to stand on the shoulders of giants from not just their own community.
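As a flavor of the catalog's safety/profitability framing, here is one classic streaming optimization, operator fusion, sketched in a few lines (our own illustrative code, not from the article). Safety: fusing is valid here only because both operators are stateless per-tuple functions. Profitability: the fused pipeline touches each tuple once instead of materializing an intermediate stream.

```python
def fuse(f, g):
    """Fuse two stateless per-tuple operators into a single operator,
    eliminating the intermediate stream between them."""
    return lambda item: g(f(item))

def run(stream, operators):
    """Apply a pipeline of per-tuple operators to a stream, lazily."""
    for item in stream:
        for op in operators:
            item = op(item)
        yield item
```

The fused pipeline must produce exactly the same output as the unfused one; checking that equivalence is precisely the "safety constraint" each catalog entry makes explicit.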

314 citations


Journal ArticleDOI
TL;DR: The notion of provably secure searchable encryption (SE) is surveyed by giving a complete and comprehensive overview of the two main SE techniques: searchable symmetric encryption (SSE) and public key encryption with keyword search (PEKS).
Abstract: We survey the notion of provably secure searchable encryption (SE) by giving a complete and comprehensive overview of the two main SE techniques: searchable symmetric encryption (SSE) and public key encryption with keyword search (PEKS). Since the pioneering work of Song, Wagner, and Perrig (IEEE S&P '00), the field of provably secure SE has expanded to the point where we felt that taking stock would provide benefit to the community. The survey has been written primarily for the nonspecialist who has a basic information security background. Thus, we sacrifice full details and proofs of individual constructions in favor of an overview of the underlying key techniques. We categorize and compare the different SE schemes in terms of their security, efficiency, and functionality. For the experienced researcher, we point out connections between the many approaches to SE and identify open research problems. Two major conclusions can be drawn from our work. While the so-called IND-CKA2 security notion has become prevalent in the literature and efficient (sublinear) SE schemes meeting this notion exist in the symmetric setting, achieving this strong form of security efficiently in the asymmetric setting remains an open problem. We observe that in multirecipient SE schemes, regardless of their efficiency drawbacks, there is a noticeable lack of query expressiveness that hinders deployment in practice.
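The basic shape of SSE is an encrypted index queried via keyword "trapdoors" derived from a secret key. The toy below illustrates only that shape, with deterministic HMAC-based tokens; it is emphatically not a secure construction (deterministic tokens leak search and access patterns, which real schemes with notions such as IND-CKA2 are designed to control), and all names are our own.

```python
import hmac
import hashlib

def trapdoor(key, keyword):
    """Derive a deterministic search token from a secret key.
    (Toy only: determinism alone leaks which queries repeat.)"""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def build_encrypted_index(key, docs):
    """Map each keyword's trapdoor to the documents containing it.
    `docs` maps document id -> whitespace-separated words."""
    index = {}
    for doc_id, words in docs.items():
        for word in set(words.split()):
            index.setdefault(trapdoor(key, word), []).append(doc_id)
    return index

def search(index, key, keyword):
    """The server matches the trapdoor without learning the keyword."""
    return sorted(index.get(trapdoor(key, keyword), []))
```

A holder of `key` can generate trapdoors and search; the index alone reveals keyword tokens but not the keywords themselves, which is the core SSE trade the survey examines in depth.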

313 citations


Journal ArticleDOI
TL;DR: This survey provides an overview of the vast literature on graph evolution analysis and the numerous applications that arise in different contexts.
Abstract: Evolutionary network analysis has found an increasing interest in the literature because of the importance of different kinds of dynamic social networks, email networks, biological networks, and social streams. When a network evolves, the results of data mining algorithms such as community detection need to be correspondingly updated. Furthermore, the specific kinds of changes to the structure of the network, such as the impact on community structure or the impact on network structural parameters, such as node degrees, also needs to be analyzed. Some dynamic networks have a much faster rate of edge arrival and are referred to as network streams or graph streams. The analysis of such networks is especially challenging, because it needs to be performed with an online approach, under the one-pass constraint of data streams. The incorporation of content can add further complexity to the evolution analysis process. This survey provides an overview of the vast literature on graph evolution analysis and the numerous applications that arise in different contexts.

Journal ArticleDOI
TL;DR: The literature is reviewed and the state of the art in fingerprint antispoofing is presented, including several countermeasures that discriminate between live fingerprints and spoof artifacts.
Abstract: Several issues related to the vulnerability of fingerprint recognition systems to attacks have been highlighted in the biometrics literature. One such vulnerability involves the use of artificial fingers, where materials such as Play-Doh, silicone, and gelatin are inscribed with fingerprint ridges. Researchers have demonstrated that some commercial fingerprint recognition systems can be deceived when these artificial fingers are placed on the sensor; that is, the system successfully processes the ensuing fingerprint images, thereby allowing an adversary to spoof the fingerprints of another individual. However, at the same time, several countermeasures that discriminate between live fingerprints and spoof artifacts have been proposed. While some of these antispoofing schemes are hardware based, several software-based approaches have been proposed as well. In this article, we review the literature and present the state of the art in fingerprint antispoofing.

Journal ArticleDOI
TL;DR: The data that can be collected from telecommunication networks, as well as their strengths and weaknesses with a particular focus on urban sensing, are outlined, and recommendations are provided on which datasets and techniques to use for specific urban sensing applications.
Abstract: The recent development of telecommunication networks is producing an unprecedented wealth of information and, as a consequence, an increasing interest in analyzing such data both from telecoms and from other stakeholders' points of view. In particular, mobile phone datasets offer access to insights into urban dynamics and human activities at an unprecedented scale and level of detail, representing a huge opportunity for research and real-world applications. This article surveys the new ideas and techniques related to the use of telecommunication data for urban sensing. We outline the data that can be collected from telecommunication networks as well as their strengths and weaknesses with a particular focus on urban sensing. We survey existing filtering and processing techniques to extract insights from this data and summarize them to provide recommendations on which datasets and techniques to use for specific urban sensing applications. Finally, we discuss a number of challenges and open research areas currently being faced in this field. We strongly believe the material and recommendations presented here will become increasingly important as mobile phone network datasets are becoming more accessible to the research community.

Journal ArticleDOI
TL;DR: This article defines a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users.
Abstract: Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regard to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users. Second, we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.

Journal ArticleDOI
TL;DR: This work surveys the state of the art in the field of Vehicle Routing Problem research, summarizing problem combinations, constraints defined, and approaches found, and concludes that the Rich VRP arises from combining multiple constraints to tackle realistic problems.
Abstract: The Vehicle Routing Problem (VRP) is a well-known research line in the optimization research community. Its different basic variants have been widely explored in the literature. Even though it has been studied for years, the research around it is still very active. The new tendency is mainly focused on applying this study case to real-life problems. Due to this trend, the Rich VRP arises: combining multiple constraints for tackling realistic problems. Nowadays, some studies have considered specific combinations of real-life constraints to define the emerging Rich VRP scopes. This work surveys the state of the art in the field, summarizing problem combinations, constraints defined, and approaches found.

Journal ArticleDOI
TL;DR: This survey reviews selected approaches, algorithms, and results on shortest-path queries from these fields, with the main focus lying on the tradeoff between the index size and the query time.
Abstract: We consider the point-to-point (approximate) shortest-path query problem, which is the following generalization of the classical single-source (SSSP) and all-pairs shortest-path (APSP) problems: we are first presented with a network (graph). A so-called preprocessing algorithm may compute certain information (a data structure or index) to prepare for the next phase. After this preprocessing step, applications may ask shortest-path or distance queries, which should be answered as fast as possible. Due to its many applications in areas such as transportation, networking, and social science, this problem has been considered by researchers from various communities (sometimes under different names): algorithm engineers construct fast route planning methods; database and information systems researchers investigate materialization tradeoffs, query processing on spatial networks, and reachability queries; and theoretical computer scientists analyze distance oracles and sparse spanners. Related problems are considered for compact routing and distance labeling schemes in networking and distributed computing and for metric embeddings in geometry as well. In this survey, we review selected approaches, algorithms, and results on shortest-path queries from these fields, with the main focus lying on the tradeoff between the index size and the query time. We survey methods for general graphs as well as specialized methods for restricted graph classes, in particular for those classes with arguable practical significance such as planar graphs and complex networks.
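The index-size/query-time tradeoff the survey centers on is easiest to see at its extreme point: precompute the full distance table, so the index is quadratic in the number of nodes but every query is a constant-time lookup. A minimal unweighted sketch (our own function names):

```python
from collections import deque

def build_distance_index(adj):
    """Preprocessing phase: BFS from every node of an unweighted graph.
    Largest possible index (O(n^2) distances), fastest possible queries."""
    index = {}
    for src in adj:
        dist = {src: 0}
        frontier = deque([src])
        while frontier:
            u = frontier.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    frontier.append(v)
        index[src] = dist
    return index

def distance_query(index, s, t):
    """Query phase: O(1) lookup; infinity if t is unreachable from s."""
    return index[s].get(t, float("inf"))
```

Every technique the survey covers, from route planning hierarchies to distance oracles, can be read as a way of shrinking this table while keeping queries close to lookup speed.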

Journal ArticleDOI
TL;DR: A comprehensive survey and a structural understanding of online portfolio selection techniques published in the literature is provided and the relationship of these algorithms with the capital growth theory is discussed so as to better understand the similarities and differences of their underlying trading ideas.
Abstract: Online portfolio selection is a fundamental problem in computational finance, which has been extensively studied across several research communities, including finance, statistics, artificial intelligence, machine learning, and data mining. This article aims to provide a comprehensive survey and a structural understanding of online portfolio selection techniques published in the literature. From an online machine learning perspective, we first formulate online portfolio selection as a sequential decision problem, and then we survey a variety of state-of-the-art approaches, which are grouped into several major categories, including benchmarks, Follow-the-Winner approaches, Follow-the-Loser approaches, Pattern-Matching--based approaches, and Meta-Learning Algorithms. In addition to the problem formulation and related algorithms, we also discuss the relationship of these algorithms with the capital growth theory so as to better understand the similarities and differences of their underlying trading ideas. This article aims to provide a timely and comprehensive survey for both machine learning and data mining researchers in academia and quantitative portfolio managers in the financial industry to help them understand the state of the art and facilitate their research and practical applications. We also discuss some open issues and evaluate some emerging new trends for future research.
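The sequential-decision formulation is concrete enough to sketch. Below is a deliberately naive Follow-the-Winner strategy, shifting all wealth each period to the asset with the best cumulative return so far (ties broken by index); the survey's actual algorithms in this category are far more refined, and this toy only shows the problem's shape.

```python
def follow_the_winner(price_relatives):
    """Toy Follow-the-Winner portfolio.
    price_relatives[t][i] = price of asset i at period t divided by
    its price at period t-1. Returns final wealth from initial wealth 1.0."""
    n = len(price_relatives[0])
    cumulative = [1.0] * n   # cumulative return of each asset so far
    wealth = 1.0
    for x in price_relatives:
        # Bet everything on the historically best asset (index ties -> first).
        best = max(range(n), key=lambda i: cumulative[i])
        wealth *= x[best]
        cumulative = [c * xi for c, xi in zip(cumulative, x)]
    return wealth
```

Each period the algorithm commits a portfolio before seeing the next price relatives, which is exactly the online learning structure under which the surveyed approaches are analyzed.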

Journal ArticleDOI
TL;DR: A survey of the existing literature on temporal information retrieval is presented; it categorizes the relevant research, describes the main contributions, and compares different approaches to provide a coherent view of the field.
Abstract: Temporal information retrieval has been a topic of great interest in recent years. Its purpose is to improve the effectiveness of information retrieval methods by exploiting temporal information in documents and queries. In this article, we present a survey of the existing literature on temporal information retrieval. In addition to giving an overview of the field, we categorize the relevant research, describe the main contributions, and compare different approaches. We organize existing research to provide a coherent view, discuss several open issues, and point out some possible future research directions in this area. Despite significant advances, the area lacks a systematic arrangement of prior efforts and an overview of state-of-the-art approaches. Moreover, an effective end-to-end temporal retrieval system that exploits temporal information to improve the quality of the presented results remains undeveloped.

Journal ArticleDOI
TL;DR: The results show that track and artist popularity can play a dominant role and that additional measures are required to better characterize and compare the quality of automatically generated playlists.
Abstract: Most of the time when we listen to music on the radio or on our portable devices, the order in which the tracks are played is governed by so-called playlists. These playlists are basically sequences of tracks that traditionally are designed manually and whose organization is based on some underlying logic or theme. With the digitalization of music and the availability of various types of additional track-related information on the Web, new opportunities have emerged on how to automate the playlist creation process. Correspondingly, a number of proposals for automated playlist generation have been made in the literature during the past decade. These approaches vary both with respect to which kind of data they rely on and which types of algorithms they use. In this article, we review the literature on automated playlist generation and categorize the existing approaches. Furthermore, we discuss the evaluation designs that are used today in research to assess the quality of the generated playlists. Finally, we report the results of a comparative evaluation of typical playlist generation schemes based on historical data. Our results show that track and artist popularity can play a dominant role and that additional measures are required to better characterize and compare the quality of automatically generated playlists.

Journal ArticleDOI
TL;DR: This article surveys the whole set of discrete Bayesian network classifiers devised to date, organized in increasing order of structure complexity: naive Bayes, selective naive Bayes, seminaive Bayes, one-dependence Bayesian classifiers, k-dependence Bayesian classifiers, Bayesian network-augmented naive Bayes, Markov blanket-based Bayesian classifiers, unrestricted Bayesian classifiers, and Bayesian multinets.
Abstract: We have had to wait over 30 years since the naive Bayes model was first introduced in 1960 for the so-called Bayesian network classifiers to resurge. Based on Bayesian networks, these classifiers have many strengths, like model interpretability, accommodation to complex data and classification problem settings, existence of efficient algorithms for learning and classification tasks, and successful applicability in real-world problems. In this article, we survey the whole set of discrete Bayesian network classifiers devised to date, organized in increasing order of structure complexity: naive Bayes, selective naive Bayes, seminaive Bayes, one-dependence Bayesian classifiers, k-dependence Bayesian classifiers, Bayesian network-augmented naive Bayes, Markov blanket-based Bayesian classifier, unrestricted Bayesian classifiers, and Bayesian multinets. Issues of feature subset selection and generative and discriminative structure and parameter learning are also covered.
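The first rung of the survey's complexity ladder, naive Bayes, is the Bayesian network in which the class node is the sole parent of every feature, so features are conditionally independent given the class. A minimal discrete version with Laplace smoothing (our own illustrative implementation):

```python
from collections import Counter, defaultdict

class NaiveBayes:
    """Discrete naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = Counter(y)
        # (feature index, class) -> Counter of observed feature values
        self.counts = defaultdict(Counter)
        self.values = [set() for _ in X[0]]  # distinct values per feature
        for xs, c in zip(X, y):
            for j, v in enumerate(xs):
                self.counts[(j, c)][v] += 1
                self.values[j].add(v)
        return self

    def predict(self, xs):
        def score(c):
            # P(c) * prod_j P(x_j | c), each factor add-one smoothed
            p = self.priors[c] / sum(self.priors.values())
            for j, v in enumerate(xs):
                cnt = self.counts[(j, c)]
                p *= (cnt[v] + 1) / (sum(cnt.values()) + len(self.values[j]))
            return p
        return max(self.classes, key=score)
```

Every richer model in the survey's ordering relaxes the conditional-independence assumption baked into the product inside `score`, at the cost of more structure to learn.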

Journal ArticleDOI
TL;DR: This article categorizes and evaluates algorithms for visual search reranking, which reorders visual documents based on multimodal cues to improve initial text-only searches, and discusses relevant issues such as data collection, evaluation metrics, and benchmarking.
Abstract: The explosive growth and widespread accessibility of community-contributed media content on the Internet have led to a surge of research activity in multimedia search. Approaches that apply text search techniques for multimedia search have achieved limited success as they entirely ignore visual content as a ranking signal. Multimedia search reranking, which reorders visual documents based on multimodal cues to improve initial text-only searches, has received increasing attention in recent years. Such a problem is challenging because the initial search results often have a great deal of noise. Discovering knowledge or visual patterns from such a noisy ranked list to guide the reranking process is difficult. Numerous techniques have been developed for visual search reranking. The purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics, and benchmarking. We conclude with several promising directions for future research.

Journal ArticleDOI
TL;DR: This article aims to provide a comprehensive review of a wide range of proposals and systems that focus fundamentally on the support of distributed data management and processing using the MapReduce framework.
Abstract: MapReduce is a framework for processing and managing large-scale datasets in a distributed cluster, which has been used for applications such as generating search indexes, document clustering, access log analysis, and various other forms of data analytics. MapReduce adopts a flexible computation model with a simple interface consisting of map and reduce functions whose implementations can be customized by application developers. Since its introduction, a substantial amount of research effort has been directed toward making it more usable and efficient for supporting database-centric operations. In this article, we aim to provide a comprehensive review of a wide range of proposals and systems that focus fundamentally on the support of distributed data management and processing using the MapReduce framework.
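The map/reduce interface the abstract describes fits in a few lines once the distribution machinery is stripped away. Below is a single-process sketch of the computation model, with word count as the canonical example; a real framework would partition the input, run mappers and reducers on many machines, and shuffle intermediate pairs over the network.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """In-memory MapReduce: map each record to (key, value) pairs,
    group ("shuffle") by key, then reduce each key's value list."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: mapper emits (word, 1); reducer sums the counts.
def wc_mapper(line):
    return [(word, 1) for word in line.split()]

def wc_reducer(word, counts):
    return sum(counts)
```

Much of the research the article reviews works by enriching exactly these two customization points, or the shuffle between them, to support joins, indexes, and other database-centric operations.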

Journal ArticleDOI
TL;DR: This work reviews different methods for human activity recognition, classified as data-driven and knowledge-based techniques, and establishes an evaluation criterion to assess the suitability of the different candidate ontologies for this purpose.
Abstract: Describing user activity plays an essential role in ambient intelligence. In this work, we review different methods for human activity recognition, classified as data-driven and knowledge-based techniques. We focus on context ontologies whose ultimate goal is the tracking of human behavior. After studying upper and domain ontologies, both useful for human activity representation and inference, we establish an evaluation criterion to assess the suitability of the different candidate ontologies for this purpose. As a result, any missing features, which are relevant for modeling daily human behaviors, are identified as future challenges.

Journal ArticleDOI
TL;DR: An overview of the characteristics of VSN applications, the involved security threats and attack scenarios, and the major security challenges is presented, and a central contribution of this survey is the classification of VSN security aspects into data-centric, node-centric, network-centric, and user-centric security.
Abstract: Visual sensor networks (VSNs) are receiving a lot of attention in research, and at the same time, commercial applications are starting to emerge. VSN devices come with image sensors, adequate processing power, and memory. They use wireless communication interfaces to collaborate and jointly solve tasks such as tracking persons within the network. VSNs are expected not only to replace many traditional, closed-circuit surveillance systems but also to enable emerging applications in scenarios such as elderly care, home monitoring, or entertainment. In all of these applications, VSNs monitor a potentially large group of people and record sensitive image data that might contain identities of persons, their behavior, interaction patterns, or personal preferences. These intimate details can be easily abused, for example, to derive personal profiles. The highly sensitive nature of images makes security and privacy in VSNs even more important than in most other sensor and data networks. However, the direct use of security techniques developed for related domains might be misleading due to the different requirements and design challenges. This is especially true for aspects such as data confidentiality and privacy protection against insiders, generating awareness among monitored people, and giving trustworthy feedback about recorded personal data—all of these aspects go beyond what is typically required in other applications. In this survey, we present an overview of the characteristics of VSN applications, the involved security threats and attack scenarios, and the major security challenges. A central contribution of this survey is our classification of VSN security aspects into data-centric, node-centric, network-centric, and user-centric security. We identify and discuss the individual security requirements and present a profound overview of related work for each class. We then discuss privacy protection techniques and identify recent trends in VSN security and privacy. A discussion of open research issues concludes this survey.

Journal ArticleDOI
TL;DR: The aim of this survey is to provide researchers with knowledge of the state of the art in GPU power management and motivate them to architect highly energy-efficient GPUs of tomorrow.
Abstract: Recent years have witnessed phenomenal growth in the computational capabilities and applications of GPUs. However, this trend has also led to a dramatic increase in their power consumption. This article surveys research works on analyzing and improving energy efficiency of GPUs. It also provides a classification of these techniques on the basis of their main research idea. Further, it attempts to synthesize research works that compare the energy efficiency of GPUs with other computing systems (e.g., FPGAs and CPUs). The aim of this survey is to provide researchers with knowledge of the state of the art in GPU power management and motivate them to architect highly energy-efficient GPUs of tomorrow.

Journal ArticleDOI
TL;DR: A survey of migration mechanisms developed to provide high availability inside virtualized environments is reported and these approaches are classified in three main classes: process migration, memory migration, and suspend/resume migration.
Abstract: In the virtualization area, replication has been considered a mechanism to provide high availability. A high-availability system should be active most of the time, so its design should target near-zero downtime and minimal human intervention when a recovery process is demanded. Several migration and replication mechanisms have been developed to provide high availability inside virtualized environments. In this article, a survey of migration mechanisms is reported. These approaches are classified into three main classes: process migration, memory migration, and suspend/resume migration.

Journal ArticleDOI
TL;DR: An overview of the major design trends of software-defined networking in the context of wireless networks is presented, and key architectural differences between them are highlighted.
Abstract: Software-defined networking (SDN) has generated tremendous interest from both academia and industry. SDN aims at simplifying network management while enabling researchers to experiment with network protocols on deployed networks. This article is a distillation of the state of the art of SDN in the context of wireless networks. We present an overview of the major design trends and highlight key differences between them.

Journal ArticleDOI
TL;DR: A classification of deduplication systems according to six criteria that correspond to key design decisions (granularity, locality, timing, indexing, technique, and scope) is proposed; the different approaches used for each criterion are identified and described.
Abstract: The automatic elimination of duplicate data in a storage system, commonly known as deduplication, is increasingly accepted as an effective technique to reduce storage costs. Thus, it has been applied to different storage types, including archives and backups, primary storage, within solid-state drives, and even to random access memory. Although the general approach to deduplication is shared by all storage types, each poses specific challenges and leads to different trade-offs and solutions. This diversity is often misunderstood, thus underestimating the relevance of new research and development. The first contribution of this article is a classification of deduplication systems according to six criteria that correspond to key design decisions: granularity, locality, timing, indexing, technique, and scope. This classification identifies and describes the different approaches used for each of them. As a second contribution, we describe which combinations of these design decisions have been proposed and found more useful for challenges in each storage type. Finally, outstanding research challenges and unexplored design points are identified and discussed.
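Two of the design decisions the classification names, granularity and indexing, can be made concrete with a toy chunk store. This is an illustrative sketch, not any surveyed system: fixed-size chunking is just one possible granularity choice, and the chunk size, class, and method names are assumptions.

```python
import hashlib

# Illustrative chunk-level deduplication sketch: data is split into
# fixed-size chunks (one possible "granularity" choice), each chunk is
# fingerprinted, and an index maps fingerprints to stored chunks so that
# duplicate chunks are written only once.

class DedupStore:
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}                     # fingerprint -> chunk bytes

    def write(self, data):
        """Store `data`, returning a recipe (list of fingerprints)."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)  # store only if new
            recipe.append(fp)
        return recipe

    def read(self, recipe):
        """Reassemble the original data from its recipe."""
        return b"".join(self.chunks[fp] for fp in recipe)

store = DedupStore()
recipe = store.write(b"abcdabcdabcd")
assert store.read(recipe) == b"abcdabcdabcd"
assert len(store.chunks) == 1      # three identical chunks stored once
```

The other criteria map onto variations of this skeleton: content-defined chunking changes how boundaries are chosen, inline versus offline changes when `write` runs the fingerprint lookup, and scope determines whether `self.chunks` is shared across files, volumes, or nodes.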

Journal ArticleDOI
TL;DR: This article reviews the recent advances of low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis, and summarizes the models and algorithms for low-Rank matrix recovery and illustrates their advantages and limitations with numerical experiments.
Abstract: Low-rank modeling generally refers to a class of methods that solves problems by representing variables of interest as low-rank matrices. It has achieved great success in various fields including computer vision, data mining, signal processing, and bioinformatics. Recently, much progress has been made in theories, algorithms, and applications of low-rank modeling, such as exact low-rank matrix recovery via convex programming and matrix completion applied to collaborative filtering. These advances have brought more and more attention to this topic. In this article, we review the recent advances of low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. Finally, we conclude this article with some discussions.
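The matrix-completion application mentioned above has a simple didactic special case: if a matrix is exactly rank 1 (M = u vᵀ), every 2×2 minor vanishes, so a missing entry is determined in closed form by three observed entries. Real recovery methods (e.g., nuclear-norm minimization) handle noise and higher ranks; this toy, with made-up data, only illustrates why low rank makes completion possible at all.

```python
# Toy rank-1 matrix completion: for M = u v^T, every 2x2 minor is zero,
# i.e., M[i][j] * M[k][l] == M[i][l] * M[k][j], so a missing entry is
# recoverable from three observed entries. Purely didactic; real methods
# use convex programming to handle noise and higher ranks.

def complete_rank1(M, i, j, k, l):
    """Recover missing M[i][j] of a rank-1 matrix from observed entries
    M[i][l], M[k][j], and M[k][l] (requires M[k][l] != 0)."""
    return M[i][l] * M[k][j] / M[k][l]

# Rank-1 "ratings" matrix u v^T with u = [1, 2] and v = [3, 4, 5]:
M = [[3, 4, 5],
     [6, 8, 10]]
missing = complete_rank1(M, 0, 2, 1, 0)   # pretend M[0][2] is unobserved
assert missing == M[0][2] == 5
```

The point of the exercise: the rank constraint removes degrees of freedom, which is the same structural fact that nuclear-norm-based recovery exploits at scale.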

Journal ArticleDOI
TL;DR: The fundamental idea of Opportunistic Routing (OR) and its important issues are explained, and different protocols from each category are illustrated and compared in terms of how they improve transmission reliability and network throughput.
Abstract: Opportunistic Routing (OR) is a promising new paradigm that has been proposed for wireless networks. OR has gained a lot of attention from the research community for its ability to increase the performance of wireless networks: it benefits from the broadcast nature of the wireless medium to improve network performance. The basic functions of OR are overhearing transmitted packets and coordinating among relaying nodes. In OR, a candidate set is a group of potential nodes selected as the next-hop forwarders; hence, each node can use different potential paths to send packets toward the destination. Any candidate of a node that has received the transmitted packet may forward it, and the decision of choosing the next forwarder is made by coordination among the candidates that have successfully received the packet. By using a dynamic relay node to forward the packet, OR can increase transmission reliability and network throughput. In this article, we explain the fundamental idea of OR and its important issues by providing some examples. We then categorize each of the important issues and explain them in detail. Furthermore, we illustrate different protocols from each category and compare their benefits and drawbacks. Finally, some potential directions for future research in OR are discussed.
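The single forwarding step at the heart of OR, broadcast to a prioritized candidate set, then let the best-priority node that actually received the packet forward it, can be sketched as below. This is a toy model under stated assumptions: link reception is an independent coin flip per candidate, and the candidate ordering (e.g., closeness to the destination) is given rather than computed.

```python
import random

# Minimal sketch of one OR forwarding step: a sender broadcasts to a
# prioritized candidate set; every candidate that happens to receive the
# packet is eligible, and coordination selects the highest-priority
# receiver as the actual forwarder. Link probabilities and the priority
# ordering are assumptions of this toy model.

def forward_once(candidates, rng):
    """candidates: list of (node, recv_prob) tuples, ordered best-first.
    Returns the node chosen to forward, or None if nobody received."""
    receivers = [node for node, p in candidates if rng.random() < p]
    return receivers[0] if receivers else None   # best receiver wins

rng = random.Random(42)
candidates = [("n1", 0.3), ("n2", 0.6), ("n3", 0.9)]  # n1 closest to dst
forwarder = forward_once(candidates, rng)
```

This is why OR beats a fixed next hop on lossy links: a single broadcast succeeds if *any* candidate receives, and the coordination step ensures the most useful of those receivers carries the packet forward.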