
Showing papers in "ACM Computing Surveys in 2019"


Journal ArticleDOI
TL;DR: A comprehensive review of recent research efforts on deep learning-based recommender systems is provided in this paper, along with a summary of the state of the art.
Abstract: With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender systems is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.

1,070 citations


Journal ArticleDOI
TL;DR: A comprehensive review of deep learning-based image captioning techniques can be found in this article, where the authors discuss the foundation of the techniques to analyze their performances, strengths, and limitations.
Abstract: Generating a description of an image is called image captioning. Image captioning requires recognizing the important objects, their attributes, and their relationships in an image. It also needs to generate syntactically and semantically correct sentences. Deep-learning-based techniques are capable of handling the complexities and challenges of image captioning. In this survey article, we aim to present a comprehensive review of existing deep-learning-based image captioning techniques. We discuss the foundation of the techniques to analyze their performances, strengths, and limitations. We also discuss the datasets and the evaluation metrics popularly used in deep-learning-based automatic image captioning.

564 citations


Journal ArticleDOI
Tal Ben-Nun1, Torsten Hoefler1
TL;DR: The problem of accelerating DNN training is described from a theoretical perspective, followed by approaches for its parallelization, and potential directions for parallelism in deep learning are extrapolated.
Abstract: Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
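
As a minimal illustration of one concurrency type reviewed here, synchronous data parallelism, the following sketch simulates four workers in plain NumPy (no real distributed framework; all names and values are illustrative): each worker computes a gradient on its own data shard, and the averaged gradient drives a single shared update.

```python
import numpy as np

def grad_mse(w, X, y):
    # Gradient of mean-squared error for a linear model y ~ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sync_data_parallel_step(w, shards, lr=0.1):
    # Each "worker" computes a gradient on its own shard; an all-reduce
    # (here just a mean over the list) combines them before one update.
    grads = [grad_mse(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
shards = list(zip(np.split(X, 4), np.split(y, 4)))  # 4 simulated workers
w = np.zeros(3)
for _ in range(200):
    w = sync_data_parallel_step(w, shards)
print(w)  # converges toward true_w
```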

433 citations


Journal ArticleDOI
TL;DR: This article presents a comprehensive overview of the security and privacy of blockchain with respect to concept, attributes, techniques, and systems, and describes the basic security properties that are supported as the essential requirements and building blocks for Bitcoin-like cryptocurrency systems.
Abstract: Blockchain offers an innovative approach to storing information, executing transactions, performing functions, and establishing trust in an open environment. Many consider blockchain as a technology breakthrough for cryptography and cybersecurity, with use cases ranging from globally deployed cryptocurrency systems like Bitcoin, to smart contracts, smart grids over the Internet of Things, and so forth. Although blockchain has received growing interest in both academia and industry in recent years, the security and privacy of blockchains continue to be at the center of the debate when deploying blockchain in different applications. This article presents a comprehensive overview of the security and privacy of blockchain. To facilitate the discussion, we first introduce the notion of blockchain and its utility in the context of Bitcoin-like online transactions. Then, we describe the basic security properties that are supported as the essential requirements and building blocks for Bitcoin-like cryptocurrency systems, followed by presenting the additional security and privacy properties that are desired in many blockchain applications. Finally, we review the security and privacy techniques for achieving these security properties in blockchain-based systems, including representative consensus algorithms, hash chained storage, mixing protocols, anonymous signatures, non-interactive zero-knowledge proof, and so forth. We conjecture that this survey can help readers to gain an in-depth understanding of the security and privacy of blockchain with respect to concept, attributes, techniques, and systems.
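
To make the hash-chained storage idea concrete, here is a generic sketch (not any particular blockchain's block format): each block commits to its predecessor's hash, so tampering with any block invalidates verification of everything that follows.

```python
import hashlib, json, time

def make_block(data, prev_hash):
    # Each block commits to its predecessor's hash, so modifying any
    # earlier block breaks the chain of hashes that follows it.
    block = {"time": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    for i, blk in enumerate(chain):
        body = {k: v for k, v in blk.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != blk["hash"]:
            return False                      # block contents were altered
        if i > 0 and blk["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # linkage to predecessor broken
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice->bob: 5", chain[-1]["hash"]))
print(verify(chain))            # True
chain[0]["data"] = "tampered"
print(verify(chain))            # False
```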

430 citations


Journal ArticleDOI
TL;DR: A comprehensive survey of emerging computing paradigms from the perspective of end-edge-cloud orchestration is presented to discuss state-of-the-art research in terms of computation offloading, caching, security, and privacy.
Abstract: Sending data to the cloud for analysis was a prominent trend during the past decades, driving cloud computing as a dominant computing paradigm. However, the dramatically increasing number of devices and data traffic in the Internet-of-Things (IoT) era are imposing significant burdens on the capacity-limited Internet and leading to uncontrollable service delays. It becomes difficult to meet the delay-sensitive and context-aware service requirements of IoT applications by using cloud computing alone. Facing these challenges, computing paradigms are shifting from centralized cloud computing to distributed edge computing. Several new computing paradigms, including Transparent Computing, Mobile Edge Computing, Fog Computing, and Cloudlet, have emerged to leverage the distributed resources at the network edge to provide timely and context-aware services. By integrating end devices, edge servers, and cloud, they form a hierarchical IoT architecture, i.e., an End-Edge-Cloud orchestrated architecture, to improve the performance of IoT systems. This article presents a comprehensive survey of these emerging computing paradigms from the perspective of end-edge-cloud orchestration. Specifically, we first introduce and compare the architectures and characteristics of different computing paradigms. Then, a comprehensive survey is presented to discuss state-of-the-art research in terms of computation offloading, caching, security, and privacy. Finally, some potential research directions are envisioned for fostering continuous research efforts.
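
A common starting point in the computation-offloading literature this survey covers is a simple latency comparison between local and edge execution. The sketch below captures that trade-off; the parameter values are illustrative, not drawn from any specific system.

```python
def should_offload(cycles, local_freq_hz, edge_freq_hz, input_bits, uplink_bps):
    # Classic latency comparison used in offloading studies: execute on
    # the device, or pay the uplink cost to use a faster edge server.
    t_local = cycles / local_freq_hz
    t_edge = input_bits / uplink_bps + cycles / edge_freq_hz
    return t_edge < t_local, t_local, t_edge

# A 2-gigacycle task: 1 GHz device CPU vs. a 10x faster edge server,
# with 1 MB of input sent over a 50 Mbit/s uplink.
offload, t_local, t_edge = should_offload(2e9, 1e9, 10e9, 8e6, 50e6)
print(offload, round(t_local, 3), round(t_edge, 3))  # True 2.0 0.36
```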

402 citations


Journal ArticleDOI
TL;DR: This survey gives a comprehensive review of the signal processing techniques, algorithms, applications, and performance results of WiFi sensing with CSI, and presents three future WiFi sensing trends, i.e., integrating cross-layer network information, multi-device cooperation, and fusion of different sensors for enhancing existing WiFi sensing capabilities and enabling new WiFi sensing opportunities.
Abstract: With the high demand for wireless data traffic, WiFi networks have experienced very rapid growth, because they provide high throughput and are easy to deploy. Recently, Channel State Information (CSI) measured by WiFi networks is widely used for different sensing purposes. To get a better understanding of existing WiFi sensing technologies and future WiFi sensing trends, this survey gives a comprehensive review of the signal processing techniques, algorithms, applications, and performance results of WiFi sensing with CSI. Different WiFi sensing algorithms and signal processing techniques have their own advantages and limitations and are suitable for different WiFi sensing applications. The survey groups CSI-based WiFi sensing applications into three categories, detection, recognition, and estimation, depending on whether the outputs are binary/multi-class classifications or numerical values. With the development and deployment of new WiFi technologies, there will be more WiFi sensing opportunities wherein the targets may extend beyond humans to environments, animals, and objects. The survey highlights three challenges for WiFi sensing: robustness and generalization, privacy and security, and coexistence of WiFi sensing and networking. Finally, the survey presents three future WiFi sensing trends, i.e., integrating cross-layer network information, multi-device cooperation, and fusion of different sensors, for enhancing existing WiFi sensing capabilities and enabling new WiFi sensing opportunities.
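
As a toy example of the detection category (a simplified sketch on synthetic data, not a method from any particular paper): human motion perturbs the multipath channel, which shows up as elevated temporal variance of the CSI amplitude, so a simple threshold on that variance can flag presence.

```python
import numpy as np

def detect_motion(csi_amplitude, threshold=0.1):
    # csi_amplitude: (time, subcarriers) array of CSI magnitudes.
    # Motion raises the per-subcarrier amplitude variance over time.
    variance = csi_amplitude.var(axis=0).mean()
    return variance > threshold, variance

rng = np.random.default_rng(1)
static = 1.0 + 0.05 * rng.normal(size=(200, 30))            # empty room
moving = static + np.sin(np.linspace(0, 20, 200))[:, None]  # motion-induced fading
print(detect_motion(static))  # (False, ~0.0025)
print(detect_motion(moving))  # (True, ~0.5)
```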

383 citations


Journal ArticleDOI
TL;DR: Survival analysis is a subfield of statistics whose goal is to analyze and model data where the outcome is the time until an event of interest occurs; the main challenge in this context is the presence of instances whose event outcomes become unobservable after a certain time point or that do not experience any event during the monitoring period.
Abstract: Survival analysis is a subfield of statistics where the goal is to analyze and model data where the outcome is the time until an event of interest occurs. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. This so-called censoring can be handled most effectively using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome the issue of censoring. In addition, many machine learning algorithms have been adapted to deal with such censored data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the statistical methods typically used and the machine learning techniques developed for survival analysis, along with a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and describe several successful applications in a variety of real-world application domains. We hope that this article will give readers a more comprehensive understanding of recent advances in survival analysis and offer some guidelines for applying these approaches to solve new problems arising in applications involving censored data.
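
One of the classical statistical methods such surveys cover is the Kaplan-Meier estimator, which handles right-censoring directly: the survival curve S(t) is the product over event times of (1 - d/n), where d events occur among n subjects still at risk. A minimal sketch:

```python
import numpy as np

def kaplan_meier(times, observed):
    # times: time until event or censoring; observed: 1 = event, 0 = censored.
    # Censored subjects leave the risk set without contributing an event.
    order = np.argsort(times)
    times, observed = np.asarray(times)[order], np.asarray(observed)[order]
    surv, curve = 1.0, []
    for i, (t, d) in enumerate(zip(times, observed)):
        at_risk = len(times) - i          # subjects still under observation
        if d:
            surv *= 1.0 - 1.0 / at_risk   # step down only at event times
        curve.append((t, surv))
    return curve

# 6 subjects, '+' marking censoring in the usual notation: 2, 3+, 4, 5+, 6, 7
print(kaplan_meier([2, 3, 4, 5, 6, 7], [1, 0, 1, 0, 1, 1]))
```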

305 citations


Journal ArticleDOI
TL;DR: A comparative analysis of contemporary imbalanced-data analysis techniques is presented from the perspectives of data pre-processing, algorithmic, and hybrid paradigms, together with a comparative study across different data distributions and application areas.
Abstract: In machine learning, data imbalance poses challenges for data analytics in almost all areas of real-world research. Raw primary data often suffer from a skewed distribution of one class over the other, as in computer vision, information security, marketing, and medical science. The goal of this article is to present a comparative analysis of contemporary imbalanced-data analysis techniques from the perspectives of data pre-processing, algorithmic, and hybrid paradigms, together with a comparative study of these approaches across different data distributions and their application areas.
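
As a concrete instance of the data pre-processing paradigm, random oversampling replicates minority-class samples until the classes are balanced. A minimal sketch (illustrative only; it ignores refinements such as synthetic sample generation in SMOTE):

```python
import numpy as np

def random_oversample(X, y, seed=0):
    # Data-level remedy for class imbalance: duplicate minority-class
    # samples at random until every class matches the majority count.
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [X], [y]
    for cls, count in zip(classes, counts):
        if count < n_max:
            idx = rng.choice(np.flatnonzero(y == cls), n_max - count, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)   # 8:2 imbalance
Xb, yb = random_oversample(X, y)
print(np.bincount(yb))            # [8 8]
```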

267 citations


Journal ArticleDOI
TL;DR: In this article, a survey of application-layer communication protocols to fulfill the IoT communication requirements, and their potential for implementation in fog- and cloud-based IoT systems, is presented, including request-reply and publish-subscribe protocols.
Abstract: The rapid increase in the number of IoT (Internet of Things) devices is accelerating the research on new solutions to make cloud services scalable. In this context, the novel concept of fog computing as well as the combined fog-to-cloud computing paradigm is becoming essential to decentralize the cloud, while bringing the services closer to the end-system. This article surveys the application-layer communication protocols that fulfill the IoT communication requirements, and their potential for implementation in fog- and cloud-based IoT systems. To this end, the article first briefly presents potential protocol candidates, including request-reply and publish-subscribe protocols. After that, the article surveys these protocols based on their main characteristics, as well as the main performance issues, including latency, energy consumption, and network throughput. These findings are thereafter used to place the protocols in each segment of the system (IoT, fog, cloud), thus opening up the discussion on their choice, interoperability, and wider system integration. The survey is expected to be useful to system architects and protocol designers when choosing the communication protocols in an integrated IoT-to-fog-to-cloud system architecture.
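
To make the request-reply versus publish-subscribe distinction concrete, the sketch below implements a minimal in-process topic broker in the style of MQTT (a toy illustration of the pattern only, not a wire-protocol implementation): publishers and subscribers never address each other directly, only named topics.

```python
from collections import defaultdict

class Broker:
    # Minimal in-process publish-subscribe broker. Unlike request-reply,
    # producers and consumers are decoupled in space and identity: they
    # interact only through topic names held by the broker.
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
broker.subscribe("sensors/temperature", lambda t, p: print(f"{t}: {p}"))
broker.publish("sensors/temperature", 21.5)   # sensors/temperature: 21.5
```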

256 citations


Journal ArticleDOI
TL;DR: The main goal of this work is to provide an intuitive understanding of the major techniques that have made a significant contribution to the image segmentation domain.
Abstract: The machine learning community has been overwhelmed by a plethora of deep learning-based approaches. Many challenging computer vision tasks, such as detection, localization, recognition, and segmentation of objects in an unconstrained environment, are being efficiently addressed by various types of deep neural networks, such as convolutional neural networks, recurrent networks, adversarial networks, and autoencoders. Although there have been plenty of analytical studies regarding the object detection or recognition domain, many new deep learning techniques have surfaced with respect to image segmentation techniques. This article approaches these various deep learning techniques of image segmentation from an analytical perspective. The main goal of this work is to provide an intuitive understanding of the major techniques that have made a significant contribution to the image segmentation domain. Starting from some of the traditional image segmentation approaches, the article progresses by describing the effect that deep learning has had on the image segmentation domain. Thereafter, most of the major segmentation algorithms have been logically categorized with paragraphs dedicated to their unique contribution. With an ample amount of intuitive explanations, the reader is expected to have an improved ability to visualize the internal dynamics of these processes.

231 citations


Journal ArticleDOI
TL;DR: This article summarises and categorises 100 state-of-the-art quality indicators and discusses issues regarding attributes that indicators possess and properties that it is desirable for indicators to have, in the hope of motivating researchers and practitioners to look into these important issues when designing quality indicators.
Abstract: Complexity and variety of modern multiobjective optimisation problems result in the emergence of numerous search techniques, from traditional mathematical programming to various randomised heuristics. A key issue raised consequently is how to evaluate and compare solution sets generated by these multiobjective search techniques. In this article, we provide a comprehensive review of solution set quality evaluation. Starting with an introduction of basic principles and concepts of set quality evaluation, this article summarises and categorises 100 state-of-the-art quality indicators, with the focus on what quality aspects these indicators reflect. This is accompanied in each category by detailed descriptions of several representative indicators and in-depth analyses of their strengths and weaknesses. Furthermore, issues regarding attributes that indicators possess and properties that it is desirable for indicators to have are discussed, in the hope of motivating researchers to look into these important issues when designing quality indicators and of encouraging practitioners to bear these issues in mind when selecting/using quality indicators. Finally, future trends and potential research directions in the area are suggested, together with some guidelines on these directions.
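
As an example of one widely used indicator in this space, the sketch below computes the hypervolume of a biobjective minimisation front: the area dominated by the nondominated set, bounded by a reference point. The front and reference point are illustrative values.

```python
def hypervolume_2d(points, ref):
    # Hypervolume indicator for two minimised objectives: sweep the
    # points in ascending f1 and accumulate the dominated rectangles.
    pts = sorted(points)              # ascending f1 => descending f2 on a front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:              # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1, 4), (2, 2), (4, 1)]
print(hypervolume_2d(front, ref=(5, 5)))  # 4*1 + 3*2 + 1*1 = 11.0
```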

Journal ArticleDOI
TL;DR: An in-depth study of TrustZone technology is presented, providing a comprehensive survey of relevant work from academia and industry and organizing existing systems into two main areas, namely, Trusted Execution Environments and hardware-assisted virtualization.
Abstract: The world is undergoing an unprecedented technological transformation, evolving into a state where ubiquitous Internet-enabled “things” will be able to generate and share large amounts of security- and privacy-sensitive data. To cope with the security threats that are thus foreseeable, system designers can find in Arm TrustZone hardware technology a most valuable resource. TrustZone is a System-on-Chip and CPU system-wide security solution, available on today’s Arm application processors and present in the new generation Arm microcontrollers, which are expected to dominate the market of smart “things.” Although this technology has remained relatively underground since its inception in 2004, over the past years, numerous initiatives have significantly advanced the state of the art involving Arm TrustZone. Motivated by this revival of interest, this paper presents an in-depth study of TrustZone technology. We provide a comprehensive survey of relevant work from academia and industry, organizing existing systems into two main areas, namely, Trusted Execution Environments and hardware-assisted virtualization. Furthermore, we analyze the most relevant weaknesses of existing systems and propose new research directions within the realm of the tiniest devices and the Internet of Things, which we believe to have potential to yield high-impact contributions in the future.

Journal ArticleDOI
TL;DR: In this paper, the authors reviewed publications dating back to 1991, with 85% published between 2013 and 2018, to identify and classify the architectures, infrastructure, and underlying algorithms for managing resources in fog/edge computing.
Abstract: Contrary to using distant and centralized cloud data center resources, employing decentralized resources at the edge of a network for processing data closer to user devices, such as smartphones and tablets, is an upcoming computing paradigm, referred to as fog/edge computing. Fog/edge resources are typically resource-constrained, heterogeneous, and dynamic compared to the cloud, thereby making resource management an important challenge that needs to be addressed. This article reviews publications dating back to 1991, with 85% published between 2013 and 2018, to identify and classify the architectures, infrastructure, and underlying algorithms for managing resources in fog/edge computing.

Journal ArticleDOI
TL;DR: A systematic review of the last 10 years of the literature on handwritten signatures with respect to the new scenario is reported, focusing on the most promising domains of research and trying to elicit possible future research directions in this subject.
Abstract: Handwritten signatures are biometric traits at the center of debate in the scientific community. Over the last 40 years, the interest in signature studies has grown steadily, having as its main reference the application of automatic signature verification, as previously published reviews in 1989, 2000, and 2008 bear witness. Ever since, and over the last 10 years, the application of handwritten signature technology has strongly evolved and much research has focused on the possibility of applying systems based on handwritten signature analysis and processing to a multitude of new fields. After several years of haphazard growth of this research area, it is time to assess its current developments for their applicability in order to draw a structured way forward. With this perspective, this article reports a systematic review of the last 10 years of the literature on handwritten signatures with respect to the new scenario, focusing on the most promising domains of research and trying to elicit possible future research directions in this subject.

Journal ArticleDOI
TL;DR: The present survey investigates and discusses DevOps challenges from the perspective of engineers, managers, and researchers, and develops a DevOps conceptual map, correlating the DevOps automation tools with these concepts.
Abstract: DevOps is a collaborative and multidisciplinary organizational effort to automate continuous delivery of new software updates while guaranteeing their correctness and reliability. The present survey investigates and discusses DevOps challenges from the perspective of engineers, managers, and researchers. We review the literature and develop a DevOps conceptual map, correlating the DevOps automation tools with these concepts. We then discuss their practical implications for engineers, managers, and researchers. Finally, we critically explore some of the most relevant DevOps challenges reported by the literature.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the details of GANs for those readers who are familiar with, but do not comprehend GANs deeply, or who wish to view GANs from various perspectives.
Abstract: Generative Adversarial Networks (GANs) have received wide attention in the machine learning field for their potential to learn high-dimensional, complex real data distribution. Specifically, they do not rely on any assumptions about the distribution and can generate real-like samples from latent space in a simple manner. This powerful property allows GANs to be applied to various applications such as image synthesis, image attribute editing, image translation, domain adaptation, and other academic fields. In this article, we discuss the details of GANs for those readers who are familiar with, but do not comprehend GANs deeply or who wish to view GANs from various perspectives. In addition, we explain how GANs operate and the fundamental meaning of various objective functions that have been suggested recently. We then focus on how GANs can be combined with an autoencoder framework. Finally, we enumerate the GAN variants that are applied to various tasks and other fields for those who are interested in exploiting GANs for their research.
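
For reference, the original minimax objective of Goodfellow et al. (2014), which the objective-function variants discussed in such surveys modify, where G is the generator, D the discriminator, and p_z the latent prior:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```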

Journal ArticleDOI
TL;DR: This is the first comprehensive survey that targets the study of computing technology for the food area and also offers a collection of research studies and technologies to benefit researchers and practitioners working in different food-related fields.
Abstract: Food is essential for human life and it is fundamental to the human experience. Food-related study may support multifarious applications and services, such as guiding human behavior, improving human health, and understanding the culinary culture. With the rapid development of social networks, mobile networks, and Internet of Things (IoT), people commonly upload, share, and record food images, recipes, cooking videos, and food diaries, leading to large-scale food data. Large-scale food data offers rich knowledge about food and can help tackle many central issues of human society. Therefore, it is time to group several disparate issues related to food computing. Food computing acquires and analyzes heterogeneous food data from different sources for perception, recognition, retrieval, recommendation, and monitoring of food. In food computing, computational approaches are applied to address food-related issues in medicine, biology, gastronomy, and agronomy. Both large-scale food data and recent breakthroughs in computer science are transforming the way we analyze food data. Therefore, a series of works has been conducted in the food area, targeting different food-oriented tasks and applications. However, there are very few systematic reviews that shape this area well and provide a comprehensive and in-depth summary of current efforts or detail open problems in this area. In this article, we formalize food computing and present such a comprehensive overview of various emerging concepts, methods, and tasks. We summarize key challenges and future directions ahead for food computing. This is the first comprehensive survey that targets the study of computing technology for the food area and also offers a collection of research studies and technologies to benefit researchers and practitioners working in different food-related fields.

Journal ArticleDOI
TL;DR: More autonomous end-to-end solutions need to be experimentally tested and developed while incorporating natural language ontology and dictionaries to automate complex task decomposition and leveraging big data advancements to improve perception algorithms for robotics.
Abstract: The emergence of the Internet of things and the widespread deployment of diverse computing systems have led to the formation of heterogeneous multi-agent systems (MAS) to complete a variety of tasks. Motivated to highlight the state of the art on existing MAS while identifying their limitations, remaining challenges, and possible future directions, we survey recent contributions to the field. We focus on robot agents and emphasize the challenges of MAS sub-fields including task decomposition, coalition formation, task allocation, perception, and multi-agent planning and control. While some components have seen more advancements than others, more research is required before effective autonomous MAS can be deployed in real smart city settings that are less restrictive than the assumed validation environments of MAS. Specifically, more autonomous end-to-end solutions need to be experimentally tested and developed while incorporating natural language ontology and dictionaries to automate complex task decomposition and leveraging big data advancements to improve perception algorithms for robotics.

Journal ArticleDOI
TL;DR: A comprehensive and up-to-date overview of existing methods used to dynamically analyze malware is provided, which includes a description of each method, its strengths and weaknesses, and its resilience against malware evasion techniques.
Abstract: Although malicious software (malware) has been around since the early days of computers, the sophistication and innovation of malware has increased over the years. In particular, the latest crop of ransomware has drawn attention to the dangers of malicious software, which can cause harm to private users as well as corporations, public services (hospitals and transportation systems), governments, and security institutions. To protect these institutions and the public from malware attacks, malicious activity must be detected as early as possible, preferably before it conducts its harmful acts. However, it is not always easy to know what to look for—especially when dealing with new and unknown malware that has never been seen. Analyzing a suspicious file by static or dynamic analysis methods can provide relevant and valuable information regarding a file's impact on the hosting system and help determine whether the file is malicious or not, based on the method's predefined rules. While various techniques (e.g., code obfuscation, dynamic code loading, encryption, and packing) can be used by malware writers to evade static analysis (including signature-based anti-virus tools), dynamic analysis is robust to these techniques and can provide greater understanding regarding the analyzed file and consequently can lead to better detection capabilities. Although dynamic analysis is more robust than static analysis, existing dynamic analysis tools and techniques are imperfect, and there is no single tool that can cover all aspects of malware behavior. The most recent comprehensive survey performed in this area was published in 2012. Since that time, the computing environment has changed dramatically with new types of malware (ransomware, cryptominers), new analysis methods (volatile memory forensics, side-channel analysis), new computing environments (cloud computing, IoT devices), new machine-learning algorithms, and more. The goal of this survey is to provide a comprehensive and up-to-date overview of existing methods used to dynamically analyze malware, which includes a description of each method, its strengths and weaknesses, and its resilience against malware evasion techniques. In addition, we include an overview of prominent studies presenting the usage of machine-learning methods to enhance dynamic malware analysis capabilities aimed at detection, classification, and categorization.

Journal ArticleDOI
TL;DR: This article surveys measures that characterize the complexity of classification problems and describes an R package named Extended Complexity Library (ECoL), made publicly available, that implements a set of these complexity measures.
Abstract: Characteristics extracted from the training datasets of classification problems have proven to be effective predictors in a number of meta-analyses. Among them, measures of classification complexity can be used to estimate the difficulty in separating the data points into their expected classes. Descriptors of the spatial distribution of the data and estimates of the shape and size of the decision boundary are among the known measures for this characterization. This information can support the formulation of new data-driven pre-processing and pattern recognition techniques, which can in turn be focused on challenges highlighted by such characteristics of the problems. This article surveys and analyzes measures that can be extracted from the training datasets to characterize the complexity of the respective classification problems. Their use in recent literature is also reviewed and discussed, allowing us to identify opportunities for future work in the area. Finally, we describe an R package named Extended Complexity Library (ECoL) that implements a set of complexity measures and is made publicly available.
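
To give a flavour of what such measures compute, below is a simplified two-class version of the maximum Fisher's discriminant ratio (the measure usually labelled F1 in this literature); this standalone Python sketch is illustrative and not the ECoL package's exact formulation.

```python
import numpy as np

def fisher_f1(X, y):
    # Maximum Fisher's discriminant ratio, two-class form: per-feature
    # between-class separation over within-class spread. A high value
    # means some feature separates the classes well (a *less* complex
    # problem); a low value indicates heavy class overlap.
    a, b = X[y == 0], X[y == 1]
    ratios = (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)
    return ratios.max()

rng = np.random.default_rng(0)
easy = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
hard = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(0.5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(fisher_f1(easy, y))  # large: well-separated classes
print(fisher_f1(hard, y))  # small: overlapping classes
```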

Journal ArticleDOI
TL;DR: The state of the art in formal specification and verification for autonomous robotics is surveyed, and the challenges posed by, the formalisms aimed at, and the formal approaches for the specification and verification of autonomous robotics are identified.
Abstract: Autonomous robotic systems are complex, hybrid, and often safety critical; this makes their formal specification and verification uniquely challenging. Though commonly used, testing and simulation alone are insufficient to ensure the correctness of, or provide sufficient evidence for the certification of, autonomous robotics. Formal methods for autonomous robotics have received some attention in the literature, but no resource provides a current overview. This article systematically surveys the state of the art in formal specification and verification for autonomous robotics. Specifically, it identifies and categorizes the challenges posed by, the formalisms aimed at, and the formal approaches for the specification and verification of autonomous robotics.

Journal ArticleDOI
TL;DR: A state-of-the-art review of issues and challenges associated with existing load-balancing techniques for researchers to develop more effective algorithms is presented.
Abstract: With the growth in computing technologies, cloud computing has added a new paradigm to user services that allows accessing Information Technology services on the basis of pay-per-use at any time and any location. Owing to flexibility in cloud services, numerous organizations are shifting their business to the cloud and service providers are establishing more data centers to provide services to users. However, it is essential to provide cost-effective execution of tasks and proper utilization of resources. Several techniques have been reported in the literature to improve performance and resource use based on load balancing, task scheduling, resource management, quality of service, and workload management. Load balancing in the cloud allows data centers to avoid overloading/underloading in virtual machines, which itself is a challenge in the field of cloud computing. Therefore, it becomes a necessity for developers and researchers to design and implement a suitable load balancer for parallel and distributed cloud environments. This survey presents a state-of-the-art review of issues and challenges associated with existing load-balancing techniques for researchers to develop more effective algorithms.
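
As a minimal sketch of the load-balancing idea (illustrative data structures only, not any production scheduler), a least-loaded policy dispatches each incoming task to the virtual machine with the lowest load-to-capacity ratio, which counteracts both overloading and underloading:

```python
def least_loaded(vms):
    # Pick the VM with the lowest load ratio, so the emptiest machine
    # (relative to its capacity) always receives the next task.
    return min(vms, key=lambda vm: vm["load"] / vm["capacity"])

vms = [
    {"name": "vm-1", "capacity": 8, "load": 6},
    {"name": "vm-2", "capacity": 4, "load": 1},
    {"name": "vm-3", "capacity": 8, "load": 3},
]
for task_cost in [1, 1, 2]:       # dispatch three incoming tasks
    vm = least_loaded(vms)
    vm["load"] += task_cost
    print("task ->", vm["name"])
```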

Journal ArticleDOI
TL;DR: This survey gives a comprehensive review of state-of-the-art indoor localization methods and localization improvement methods using maps, spatial models, and landmarks.
Abstract: Indoor localization is essential for healthcare, security, augmented reality gaming, and many other location-based services. There is currently a wealth of relevant literature on indoor localization. This article focuses on recent advances in indoor localization methods that use spatial context to improve the location estimation. Spatial context in the form of maps and spatial models have been used to improve the localization by constraining location estimates in the navigable parts of indoor environments. Landmarks such as doors and corners, which are also one form of spatial context, have proved useful in assisting indoor localization by correcting the localization error. This survey gives a comprehensive review of state-of-the-art indoor localization methods and localization improvement methods using maps, spatial models, and landmarks.
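
A toy example of using spatial context as a constraint (synthetic grid, illustrative only): snap a raw position estimate to the nearest navigable cell of an occupancy grid, discarding physically impossible fixes such as positions inside walls.

```python
import numpy as np

def constrain_to_map(estimate, walkable):
    # Spatial context as a constraint: project a raw (row, col) estimate
    # onto the nearest navigable cell of an occupancy grid (True = walkable).
    cells = np.argwhere(walkable)
    nearest = cells[np.argmin(((cells - estimate) ** 2).sum(axis=1))]
    return nearest

# 5x5 floor plan: a wall (False) splits two corridors.
walkable = np.ones((5, 5), dtype=bool)
walkable[:, 2] = False
print(constrain_to_map(np.array([2.0, 2.2]), walkable))  # [2 3]
```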

Journal ArticleDOI
TL;DR: The architecture and design of CGRAs are reviewed thoroughly, a novel multidimensional taxonomy is proposed, and major challenges and the corresponding state-of-the-art techniques are surveyed and analyzed.
Abstract: As general-purpose processors have hit the power wall and chip fabrication cost escalates alarmingly, coarse-grained reconfigurable architectures (CGRAs) are attracting increasing interest from both academia and industry, because they offer the performance and energy efficiency of hardware with the flexibility of software. However, CGRAs are not yet mature in terms of programmability, productivity, and adaptability. This article reviews the architecture and design of CGRAs thoroughly for the purpose of exploiting their full potential. First, a novel multidimensional taxonomy is proposed. Second, major challenges and the corresponding state-of-the-art techniques are surveyed and analyzed. Finally, the future development is discussed.

Journal ArticleDOI
TL;DR: Video description is the automatic generation of natural language sentences that describe the contents of a given video as discussed by the authors, which has applications in human-robot interaction, helping the visually impaired and video subtitling.
Abstract: Video description is the automatic generation of natural language sentences that describe the contents of a given video. It has applications in human-robot interaction, helping the visually impaired and video subtitling. The past few years have seen a surge of research in this area due to the unprecedented success of deep learning in computer vision and natural language processing. Numerous methods, datasets, and evaluation metrics have been proposed in the literature, creating the need for a comprehensive survey to focus research efforts in this flourishing new direction. This article fills the gap by surveying the state-of-the-art approaches with a focus on deep learning models; comparing benchmark datasets in terms of their domains, number of classes, and repository size; and identifying the pros and cons of various evaluation metrics, such as SPICE, CIDEr, ROUGE, BLEU, METEOR, and WMD. Classical video description approaches combined subject, object, and verb detection with template-based language models to generate sentences. However, the release of large datasets revealed that these methods cannot cope with the diversity in unconstrained open domain videos. Classical approaches were followed by a very short era of statistical methods that were soon replaced with deep learning, the current state-of-the-art in video description. Our survey shows that despite the fast-paced developments, video description research is still in its infancy due to the following reasons: Analysis of video description models is challenging, because it is difficult to ascertain the contributions towards accuracy or errors of the visual features and the adopted language model in the final description. Existing datasets neither contain adequate visual diversity nor complexity of linguistic structures. Finally, current evaluation metrics fall short of measuring the agreement of machine-generated descriptions with those of humans. We conclude our survey by listing promising future research directions.

Journal ArticleDOI
TL;DR: This article studies privacy and security issues in IoT that require program-analysis techniques with an emphasis on identified attacks against these systems and defenses implemented so far and relates the efficacy of program- analysis techniques to security and privacy issues.
Abstract: Recent advances in Internet of Things (IoT) have enabled myriad domains such as smart homes, personal monitoring devices, and enhanced manufacturing. IoT is now pervasive—new applications are being used in nearly every conceivable environment, which leads to the adoption of device-based interaction and automation. However, IoT has also raised issues about the security and privacy of these digitally augmented spaces. Program analysis is crucial in identifying those issues, yet the application and scope of program analysis in IoT remains largely unexplored by the technical community. In this article, we study privacy and security issues in IoT that require program-analysis techniques with an emphasis on identified attacks against these systems and defenses implemented so far. Based on a study of five IoT programming platforms, we identify the key insights that result from research efforts in both the program analysis and security communities and relate the efficacy of program-analysis techniques to security and privacy issues. We conclude by studying recent IoT analysis systems and exploring their implementations. Through these explorations, we highlight key challenges and opportunities in calibrating for the environments in which IoT systems will be used.

Journal ArticleDOI
TL;DR: Fundamental concepts of urban computing leveraging Location-Based Social Networks data are discussed and a survey of recent urban computing studies that make use of LBSN data is presented.
Abstract: Urban computing is an emerging area of investigation in which researchers study cities using digital data. Location-Based Social Networks (LBSNs) generate one specific type of digital data that offers unprecedented geographic and temporal resolutions. We discuss fundamental concepts of urban computing leveraging LBSN data and present a survey of recent urban computing studies that make use of LBSN data. We also point out the opportunities and challenges that those studies open.

Journal ArticleDOI
TL;DR: This article analyses the pre-existing works to determine their role in Decentralised Cloud and future computing development and indicates which models represent a significant breakthrough from a Cloud perspective.
Abstract: Cloud computing emerged as a centralised paradigm that made “infinite” computing resources available on demand. Nevertheless, the ever-increasing computing capacities present on smart connected things and devices call for the decentralisation of Cloud computing to avoid unnecessary latencies and fully exploit accessible computing capacities at the edges of the network. Whilst these decentralised Cloud models represent a significant breakthrough from a Cloud perspective, they are rooted in existing research areas such as Mobile Cloud Computing, Mobile Ad hoc Computing, and Edge computing. This article analyses the pre-existing works to determine their role in Decentralised Cloud and future computing development.

Journal ArticleDOI
TL;DR: This article investigates the problem of reliable resource provisioning in joint edge-cloud environments, and surveys technologies, mechanisms, and methods that can be used to improve the reliability of distributed applications in diverse and heterogeneous network environments.
Abstract: Large-scale software systems are currently designed as distributed entities and deployed in cloud data centers. To overcome the limitations inherent to this type of deployment, applications are increasingly being supplemented with components instantiated closer to the edges of networks—a paradigm known as edge computing. The problem of how to efficiently orchestrate combined edge-cloud applications is, however, incompletely understood, and a wide range of techniques for resource and application management are currently in use. This article investigates the problem of reliable resource provisioning in joint edge-cloud environments, and surveys technologies, mechanisms, and methods that can be used to improve the reliability of distributed applications in diverse and heterogeneous network environments. Due to the complexity of the problem, special emphasis is placed on solutions to the characterization, management, and control of complex distributed applications using machine learning approaches. The survey is structured around a decomposition of the reliable resource provisioning problem into three categories of techniques: workload characterization and prediction, component placement and system consolidation, and application elasticity and remediation. Survey results are presented along with a problem-oriented discussion of the state-of-the-art. A summary of identified challenges and an outline of future research directions are presented to conclude the article.

Journal ArticleDOI
TL;DR: The aim of this survey is to describe the main research directions, the most prominent known results, and some of the most challenging open problems inGraph Drawing Beyond Planarity.
Abstract: Graph Drawing Beyond Planarity is a rapidly growing research area that classifies and studies geometric representations of nonplanar graphs in terms of forbidden crossing configurations. The aim of this survey is to describe the main research directions in this area, the most prominent known results, and some of the most challenging open problems.