Author

Joan Serrat

Bio: Joan Serrat is an academic researcher at the Polytechnic University of Catalonia. He has contributed to research in topics including network management and the Internet. He has an h-index of 33 and has co-authored 189 publications receiving 5,250 citations. His previous affiliations include the Waterford Institute of Technology and the Autonomous University of Barcelona.


Papers
Journal Article (DOI)
TL;DR: In this article, the authors survey the state of the art in NFV, identify promising research directions in the area, and overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.
Abstract: Network function virtualization (NFV) has drawn significant attention from both industry and academia as an important shift in telecommunication service provisioning. By decoupling network functions (NFs) from the physical devices on which they run, NFV has the potential to lead to significant reductions in operating expenses (OPEX) and capital expenses (CAPEX) and facilitate the deployment of new services with increased agility and faster time-to-value. The NFV paradigm is still in its infancy and there is a large spectrum of opportunities for the research community to develop new architectures, systems and applications, and to evaluate alternatives and trade-offs in developing technologies for its successful deployment. In this paper, after discussing NFV and its relationship with complementary fields of software defined networking (SDN) and cloud computing, we survey the state-of-the-art in NFV, and identify promising research directions in this area. We also overview key NFV projects, standardization efforts, early implementations, use cases, and commercial products.

1,634 citations

Journal Article (DOI)
TL;DR: This paper provides a detailed definition of the wireless network slicing problem, analyzes how new trends such as software-defined networking and network function virtualization can assist in slicing, and describes open research challenges on the topic.
Abstract: New architectural and design approaches for radio access networks have appeared with the introduction of network virtualization in the wireless domain. One of these approaches splits the wireless network infrastructure into isolated virtual slices, each with its own management, requirements, and characteristics. Despite the advances in wireless virtualization, there are still many open issues regarding the resource allocation and isolation of wireless slices. Because of the dynamic and shared nature of the wireless medium, guaranteeing that the traffic on one slice will not affect the traffic on the others has proven difficult. In this paper, we focus on a detailed definition of the problem and discuss its challenges. We also review existing works that deal with the problem, analyzing how new trends such as software-defined networking and network function virtualization can assist in the slicing. Finally, we describe some research challenges on this topic.

344 citations

Proceedings Article (DOI)
13 Apr 2015
TL;DR: This paper formulates the online virtual function mapping and scheduling problem and proposes a set of algorithms for solving it: three greedy algorithms and a tabu search-based heuristic.
Abstract: Network function virtualization has received attention from both academia and industry as an important shift in the deployment of telecommunication networks and services. It is being proposed as a path towards cost efficiency, reduced time-to-market, and enhanced innovativeness in telecommunication service provisioning. However, efficiently running virtualized services is not trivial as, among other initialization steps, it requires first mapping virtual networks onto physical networks, and thereafter mapping and scheduling virtual functions onto the virtual networks. This paper formulates the online virtual function mapping and scheduling problem and proposes a set of algorithms for solving it. Our main objective is to propose simple algorithms that may be used as a basis for future work in this area. To this end, we propose three greedy algorithms and a tabu search-based heuristic. We evaluate these algorithms with respect to metrics such as successful service mappings, total service processing times, revenue, and cost, under varying network conditions. Simulations show that the tabu search-based algorithm performs only slightly better than the best greedy algorithm.

287 citations
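To make the flavor of the greedy placement idea above concrete, here is a Python sketch of one simple greedy rule: place each virtual function of a service chain on the node with the most spare capacity, rejecting the request if no node fits. It is only an illustration under simplified assumptions, not a reproduction of the paper's algorithms; the function greedy_map, the node names, and the CPU demands are invented for the example.

```python
def greedy_map(chain, nodes):
    """chain: list of (function_name, cpu_demand); nodes: dict node -> free CPU units."""
    placement = {}
    for fn, demand in chain:
        # candidate nodes that can host this function right now
        feasible = [(free, node) for node, free in nodes.items() if free >= demand]
        if not feasible:
            return None                    # reject the request (online setting)
        free, node = max(feasible)         # greedy rule: most spare capacity first
        nodes[node] -= demand              # reserve the capacity
        placement[fn] = node
    return placement

# Example: a small firewall -> NAT -> DPI chain mapped onto three nodes.
nodes = {"n1": 8, "n2": 4, "n3": 6}
chain = [("firewall", 2), ("nat", 1), ("dpi", 4)]
print(greedy_map(chain, nodes))            # a feasible placement, or None if rejected
```

A tabu search variant would start from such a placement and iteratively explore neighboring placements while keeping a short-term memory of recently visited (forbidden) moves.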

Journal Article (DOI)
TL;DR: This article introduces NFV, gives an overview of the MANO framework proposed by ETSI, presents representative projects and vendor products that focus on MANO, and discusses their features and relationship with the framework.
Abstract: NFV continues to draw immense attention from researchers in both industry and academia. By decoupling NFs from the physical equipment on which they run, NFV promises to reduce CAPEX and OPEX, make networks more scalable and flexible, and lead to increased service agility. However, despite the unprecedented interest it has gained, there are still obstacles that must be overcome before NFV can become a reality in industrial deployments, let alone deliver the anticipated gains. In particular, important challenges associated with network and function management and orchestration (MANO) need to be addressed. In this article, we introduce NFV and give an overview of the MANO framework that has been proposed by ETSI. We then present representative projects and vendor products that focus on MANO, and discuss their features and relationship with the framework. Finally, we identify open MANO challenges as well as opportunities for future research.

277 citations
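As a rough illustration of how the ETSI MANO framework divides responsibility among its three functional blocks (the NFV Orchestrator, the VNF Manager, and the Virtualised Infrastructure Manager), the Python sketch below models them as toy classes. The class and method names, the descriptor fields, and the example service are invented for the illustration and are far simpler than real descriptors or production stacks.

```python
from dataclasses import dataclass

@dataclass
class VIM:
    """Virtualised Infrastructure Manager: controls compute/storage/network resources."""
    capacity: int            # available vCPUs in this toy model
    used: int = 0

    def allocate(self, vcpus: int) -> bool:
        if self.used + vcpus > self.capacity:
            return False
        self.used += vcpus
        return True

@dataclass
class VNFM:
    """VNF Manager: handles the lifecycle of individual VNF instances."""
    vim: VIM

    def instantiate(self, vnfd: dict) -> bool:
        return self.vim.allocate(vnfd["vcpus"])

@dataclass
class NFVO:
    """NFV Orchestrator: composes VNFs into end-to-end network services."""
    vnfm: VNFM

    def deploy_service(self, nsd: dict) -> bool:
        return all(self.vnfm.instantiate(v) for v in nsd["vnfds"])

# A toy network service descriptor with two VNFs (a firewall and a load balancer).
nsd = {"name": "web-chain", "vnfds": [{"name": "fw", "vcpus": 2}, {"name": "lb", "vcpus": 1}]}
print(NFVO(VNFM(VIM(capacity=8))).deploy_service(nsd))   # True: both VNFs fit
```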

Journal Article (DOI)
TL;DR: This paper reviews the principal characterizations of ridges/valleys, proposes a new one, and evaluates them with respect to a list of desirable properties and their purpose in representative image analysis tasks.
Abstract: Ridges and valleys are useful geometric features for image analysis. Different characterizations have been proposed to formalize the intuitive notion of a ridge/valley. In this paper, we review the principal characterizations and propose a new one. Subsequently, we evaluate these characterizations with respect to a list of desirable properties and their purpose in the context of representative image analysis tasks.

226 citations
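One classical characterization of ridges looks at the principal curvatures of the Gaussian-smoothed image surface, i.e. the eigenvalues of its Hessian. The sketch below implements that particular measure only as an illustration; it is neither the only characterization such a review covers nor the new one the paper proposes, and the function name and default scale are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_strength(img, sigma=2.0):
    """Hessian-eigenvalue ridge measure for bright ridges on a grayscale image."""
    img = np.asarray(img, dtype=float)
    # second-order Gaussian derivatives; axis 0 = rows (y), axis 1 = columns (x)
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixx = gaussian_filter(img, sigma, order=(0, 2))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    # eigenvalues of the per-pixel 2x2 Hessian [[Ixx, Ixy], [Ixy, Iyy]]
    half_trace = (Ixx + Iyy) / 2.0
    delta = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    lam_min = half_trace - delta
    # a bright ridge has strongly negative curvature across it
    return np.maximum(-lam_min, 0.0)
```

For dark valleys the roles are reversed: the largest eigenvalue becomes the relevant measure.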


Cited by
Journal Article (DOI)
TL;DR: This work addresses the task of semantic image segmentation with Deep Learning, proposes atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales, and improves the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models.
Abstract: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.

11,856 citations
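The ASPP idea itself is compact enough to sketch. The PyTorch module below is a minimal illustration under stated assumptions: parallel 3x3 atrous convolutions at rates 6, 12, 18, and 24, whose outputs are fused by summation before a shared 1x1 classifier. It is not the authors' released implementation; the class name ASPPSketch and the channel sizes are invented.

```python
import torch
import torch.nn as nn

class ASPPSketch(nn.Module):
    """Minimal ASPP-style head: parallel atrous (dilated) 3x3 convolutions."""

    def __init__(self, in_ch, out_ch, num_classes, rates=(6, 12, 18, 24)):
        super().__init__()
        # one branch per sampling rate; padding = dilation keeps the spatial size
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates
        )
        self.classifier = nn.Conv2d(out_ch, num_classes, kernel_size=1)

    def forward(self, x):
        # fuse the multi-rate responses by summation (concatenation is another option)
        fused = sum(torch.relu(branch(x)) for branch in self.branches)
        return self.classifier(fused)

# Example: 512-channel backbone features at 1/16 resolution, 21 PASCAL VOC classes.
features = torch.randn(1, 512, 33, 33)
head = ASPPSketch(in_ch=512, out_ch=256, num_classes=21)
print(head(features).shape)   # torch.Size([1, 21, 33, 33])
```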

Posted Content
TL;DR: DeepLab proposes atrous spatial pyramid pooling (ASPP) to segment objects at multiple scales by probing an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view.
Abstract: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.

10,120 citations

Journal Article (DOI)
TL;DR: A new superpixel algorithm, simple linear iterative clustering (SLIC), adapts a k-means clustering approach to efficiently generate superpixels; it is faster and more memory efficient than existing methods, improves segmentation performance, and is straightforward to extend to supervoxel generation.
Abstract: Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.

7,849 citations
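At its core, SLIC is a localized k-means in the five-dimensional (L, a, b, y, x) space, with each cluster centre competing only for pixels in a 2S x 2S window around it. The sketch below is a heavily simplified reading of that idea (fixed iteration count, no post-processing to enforce connectivity), assuming scikit-image only for the RGB-to-CIELAB conversion; the function and parameter names are illustrative.

```python
import numpy as np
from skimage import color   # assumed available, used only for RGB -> CIELAB

def slic_sketch(rgb, k=200, compactness=10.0, iters=10):
    """Simplified SLIC: localized k-means in (L, a, b, y, x) space."""
    lab = color.rgb2lab(rgb)
    h, w = lab.shape[:2]
    S = max(int(np.sqrt(h * w / k)), 1)                  # grid interval between seeds
    ys, xs = np.arange(S // 2, h, S), np.arange(S // 2, w, S)
    centres = np.array([[*lab[y, x], y, x] for y in ys for x in xs], dtype=float)
    labels = -np.ones((h, w), dtype=int)
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(iters):
        dists = np.full((h, w), np.inf)
        # assignment: each centre competes only inside its 2S x 2S window
        for ci, (l, a, b, cy, cx) in enumerate(centres):
            y0, y1 = int(max(cy - S, 0)), int(min(cy + S, h))
            x0, x1 = int(max(cx - S, 0)), int(min(cx + S, w))
            dc = np.linalg.norm(lab[y0:y1, x0:x1] - np.array([l, a, b]), axis=2)
            ds = np.hypot(yy[y0:y1, x0:x1] - cy, xx[y0:y1, x0:x1] - cx)
            d = np.hypot(dc, ds / S * compactness)       # SLIC distance measure
            better = d < dists[y0:y1, x0:x1]
            dists[y0:y1, x0:x1][better] = d[better]
            labels[y0:y1, x0:x1][better] = ci
        # update: move each centre to the mean of its assigned pixels
        for ci in range(len(centres)):
            m = labels == ci
            if m.any():
                centres[ci] = [*lab[m].mean(axis=0), yy[m].mean(), xx[m].mean()]
    return labels
```

A production-quality implementation, including the connectivity-enforcement step, ships with scikit-image as skimage.segmentation.slic.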

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either have practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Readers learn techniques that have proven useful through first-hand experience, along with a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, the book covers essential topics that either have practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Posted Content
TL;DR: This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF).
Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.

3,389 citations
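The CRF refinement step described above can be illustrated with the third-party pydensecrf bindings to the fully connected CRF of Krähenbühl and Koltun, the model DeepLab adopts for post-processing. The snippet below is a sketch that assumes that package is installed; the softmax output and the image are random stand-ins, and the kernel parameters are commonly quoted defaults rather than tuned values.

```python
import numpy as np
import pydensecrf.densecrf as dcrf   # third-party package, assumed installed

H, W, num_classes = 120, 160, 21
# stand-ins for the DCNN softmax output (num_classes, H, W) and the RGB image
probs = np.random.dirichlet(np.ones(num_classes), size=H * W).T.reshape(num_classes, H, W)
img = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)

crf = dcrf.DenseCRF2D(W, H, num_classes)
unary = -np.log(probs.reshape(num_classes, -1) + 1e-8)        # negative log-probabilities
crf.setUnaryEnergy(np.ascontiguousarray(unary, dtype=np.float32))
crf.addPairwiseGaussian(sxy=(3, 3), compat=3)                 # location-only smoothness kernel
crf.addPairwiseBilateral(sxy=(80, 80), srgb=(13, 13, 13),     # appearance (bilateral) kernel
                         rgbim=np.ascontiguousarray(img), compat=10)
Q = crf.inference(5)                                          # a few mean-field iterations
labels = np.argmax(Q, axis=0).reshape(H, W)                   # refined per-pixel labels
print(labels.shape)
```

The unary term carries the DCNN scores, while the Gaussian and bilateral kernels encourage nearby and similar-looking pixels to take the same label.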