Conference
ACM SIGMM Conference on Multimedia Systems
About: The ACM SIGMM Conference on Multimedia Systems is an academic conference that publishes mainly in the areas of video quality and quality of experience. Over its lifetime, the conference has published 378 publications, which have received 9,431 citations.
Papers
[...]
23 Feb 2011
TL;DR: Provides insight and background into the Dynamic Adaptive Streaming over HTTP (DASH) specifications available from 3GPP and, in draft form, also from MPEG.
Abstract: In this paper, we provide some insight and background into the Dynamic Adaptive Streaming over HTTP (DASH) specifications as available from 3GPP and in draft version also from MPEG. Specifically, the 3GPP version provides a normative description of a Media Presentation, the formats of a Segment, and the delivery protocol. In addition, it adds an informative description on how a DASH Client may use the provided information to establish a streaming service for the user. The solution supports different service types (e.g., On-Demand, Live, Time-Shift Viewing), different features (e.g., adaptive bitrate switching, multiple language support, ad insertion, trick modes, DRM) and different deployment options. Design principles and examples are provided.
1,138 citations
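To make the adaptive-switching idea behind DASH concrete, here is a minimal sketch of the client-side segment-fetch loop the specification enables. The MPD is reduced to a plain dict rather than the normative XML Media Presentation Description, and the bitrate ladder and URL template are hypothetical; a real client would also handle buffering, periods, adaptation sets, and segment timelines.

```python
import time
import urllib.request

# Toy stand-in for an MPD: a ladder of representations, lowest first.
# All bandwidths (bits per second) and URLs are hypothetical.
mpd = {
    "representations": [
        {"bandwidth": 500_000,   "template": "http://example.com/video_500k_$Number$.m4s"},
        {"bandwidth": 1_500_000, "template": "http://example.com/video_1500k_$Number$.m4s"},
        {"bandwidth": 3_000_000, "template": "http://example.com/video_3000k_$Number$.m4s"},
    ],
}

def pick_representation(throughput_bps):
    """Highest representation whose bitrate fits under the measured throughput."""
    feasible = [r for r in mpd["representations"] if r["bandwidth"] <= throughput_bps]
    return feasible[-1] if feasible else mpd["representations"][0]

throughput_bps = 1_000_000  # bootstrap estimate before the first measurement
for number in range(1, 11):
    rep = pick_representation(throughput_bps)
    url = rep["template"].replace("$Number$", str(number))
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()  # fetch one media segment
    elapsed = time.monotonic() - start
    # update the throughput estimate from the last segment download
    throughput_bps = 8 * len(data) / max(elapsed, 1e-6)
```

Note the division of labor the abstract describes: DASH normatively specifies the Media Presentation, segment formats, and delivery protocol, while the adaptation logic above is exactly the part left to the client.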
[...]
TL;DR: This paper focuses on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluates two major commercial players (Smooth Streaming, Netflix) and one open source player (OSMF).
Abstract: Adaptive (video) streaming over HTTP is gradually being adopted, as it offers significant advantages in terms of both user-perceived quality and resource utilization for content and network service providers. In this paper, we focus on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluate two major commercial players (Smooth Streaming, Netflix) and one open source player (OSMF). Our experiments cover three important operating conditions. First, how does an adaptive video player react to either persistent or short-term changes in the underlying network's available bandwidth? Can the player quickly converge to the maximum sustainable bitrate? Second, what happens when two adaptive video players compete for available bandwidth in the bottleneck link? Can they share the resources in a stable and fair manner? And third, how does adaptive streaming perform with live content? Is the player able to sustain a short playback delay? We identify major differences between the three players, and significant inefficiencies in each of them.
713 citations
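The operating conditions studied above (persistent vs. short-term bandwidth changes) imply a test harness that reshapes the bottleneck link over time. The sketch below is one hypothetical way to drive such an experiment on Linux using tc's token bucket filter; the interface name, rates, and schedule are illustrative, not the paper's actual setup, and root privileges are required.

```python
import subprocess
import time

IFACE = "eth0"  # assumed bottleneck interface

def set_rate(kbit):
    """Cap the link with a token-bucket filter at the given rate (requires root)."""
    subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root",
                    "tbf", "rate", f"{kbit}kbit",
                    "burst", "32kbit", "latency", "400ms"], check=True)

# A persistent drop, then a short dip, mimicking the two kinds of bandwidth
# change the paper studies (values and durations are illustrative).
for kbit, hold_s in [(5000, 120), (1000, 120), (5000, 60), (300, 10), (5000, 120)]:
    set_rate(kbit)
    time.sleep(hold_s)
```

Running a player through such a schedule makes it easy to observe whether it converges to the sustainable bitrate after the persistent drop and whether it overreacts to the 10-second dip.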
[...]
TL;DR: Presents a receiver-driven rate adaptation method for HTTP/TCP streaming that deploys a step-wise increase / aggressive decrease scheme to switch up/down between the different representations of the content, which are encoded at different bitrates.
Abstract: Recently, HTTP has been widely used for the delivery of real-time multimedia content over the Internet, such as in video streaming applications. To combat the varying network resources of the Internet, rate adaptation is used to adapt the transmission rate to the varying network capacity. A key research problem of rate adaptation is to identify network congestion early enough and to probe the spare network capacity. In adaptive HTTP streaming, this problem becomes challenging because of the difficulty of differentiating between short-term throughput variations, incurred by TCP congestion control, and throughput changes due to more persistent bandwidth changes. In this paper, we propose a novel rate adaptation algorithm for adaptive HTTP streaming that detects bandwidth changes using a smoothed HTTP throughput measured from the segment fetch time (SFT). The smoothed HTTP throughput, instead of the instantaneous TCP transmission rate, is used to determine whether the bitrate of the current media matches the end-to-end network bandwidth capacity. Based on the smoothed throughput measurement, this paper presents a receiver-driven rate adaptation method for HTTP/TCP streaming that deploys a step-wise increase / aggressive decrease method to switch up/down between the different representations of the content, which are encoded at different bitrates. Our rate adaptation method does not require any transport-layer information, such as round-trip time (RTT) and packet loss rates, which is available only at the TCP layer. Simulation results show that the proposed rate adaptation algorithm quickly adapts to match the end-to-end network capacity and also effectively controls buffer underflow and overflow.
428 citations
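A minimal sketch of the step-wise increase / aggressive decrease idea described above: compare the media segment duration (MSD) with a smoothed segment fetch time (SFT), step up one representation when segments arrive comfortably faster than they play out, and drop straight to a sustainable level otherwise. The bitrate ladder, smoothing weight, and thresholds are assumptions, not the paper's exact constants.

```python
BITRATES = [300_000, 700_000, 1_500_000, 3_000_000]  # bps, assumed ladder
MSD = 2.0      # media segment duration in seconds
ALPHA = 0.6    # EWMA weight for SFT smoothing (assumed)
EPS_UP = 0.15  # margin required before a switch-up (assumed)

class SftAdapter:
    def __init__(self):
        self.level = 0           # start at the lowest representation
        self.smoothed_sft = MSD  # neutral initial estimate

    def on_segment_fetched(self, sft_seconds, segment_bytes):
        """Return the bitrate to request next, given the last segment's
        fetch time and size."""
        self.smoothed_sft = ALPHA * self.smoothed_sft + (1 - ALPHA) * sft_seconds
        mu = MSD / self.smoothed_sft  # >1 means fetching faster than playback
        if mu > 1 + EPS_UP and self.level + 1 < len(BITRATES):
            self.level += 1  # step-wise increase: one level at a time
        elif mu < 1:
            # aggressive decrease: jump straight to a level the measured
            # throughput can sustain
            tput_bps = 8 * segment_bytes / sft_seconds
            self.level = max(i for i, b in enumerate(BITRATES)
                             if b <= tput_bps or i == 0)
        return BITRATES[self.level]
```

The asymmetry is the point: cautious single-step upgrades avoid oscillating on short-term TCP throughput variations, while a single large downgrade protects the playout buffer when bandwidth persistently drops.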
[...]
TL;DR: Describes how RAISE has been collected and organized, discusses how digital image forensics and many other multimedia research areas may benefit from this new publicly available benchmark dataset, and tests a very recent forensic technique for JPEG compression detection.
Abstract: Digital forensics is a relatively new research area which aims at authenticating digital media by detecting possible digital forgeries. Indeed, the ever-increasing availability of multimedia data on the web, coupled with the great advances reached by computer graphics tools, makes the modification of an image and the creation of visually compelling forgeries an easy task for any user. This in turn creates the need for reliable tools to validate the trustworthiness of the represented information. In such a context, we present here RAISE, a large dataset of 8156 high-resolution raw images, depicting various subjects and scenarios, properly annotated and available together with accompanying metadata. Such a wide collection of untouched and diverse data is intended to become a powerful resource for, but not limited to, forensic researchers by providing a common benchmark for fair comparison, testing, and evaluation of existing and next-generation forensic algorithms. In this paper we describe how RAISE has been collected and organized, discuss how digital image forensics and many other multimedia research areas may benefit from this new publicly available benchmark dataset, and test a very recent forensic technique for JPEG compression detection.
311 citations
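As a rough illustration of how a raw-image benchmark like RAISE might be consumed in a forensic pipeline, the sketch below iterates a metadata file and decodes each raw image with rawpy. The CSV filename and column names are hypothetical, not RAISE's actual schema.

```python
import csv
import rawpy  # raw decoding library (pip install rawpy)

with open("raise_metadata.csv", newline="") as f:    # hypothetical filename
    for row in csv.DictReader(f):
        with rawpy.imread(row["file_path"]) as raw:  # hypothetical column name
            rgb = raw.postprocess()  # demosaiced HxWx3 uint8 array
        # hand `rgb` (plus any annotations in `row`) to a forensic detector
        print(row["file_path"], rgb.shape)
```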
[...]
TL;DR: Presents a Quality Adaptation Controller for live adaptive video streaming, designed by employing feedback control theory, which is found to throttle the video quality to match the available bandwidth with a transient of less than 30 s while ensuring continuous video reproduction.
Abstract: Multimedia content feeds an ever-increasing fraction of the Internet traffic, and video streaming is one of the most important applications driving this trend. Adaptive video streaming is a relevant advancement over classic progressive-download streaming such as the one employed by YouTube. It consists of dynamically adapting the content bitrate in order to provide the maximum Quality of Experience, given the currently available bandwidth, while ensuring continuous reproduction. In this paper we propose a Quality Adaptation Controller (QAC) for live adaptive video streaming designed by employing feedback control theory. An experimental comparison with Akamai adaptive video streaming has been carried out, with the following main results: 1) QAC is able to throttle the video quality to match the available bandwidth with a transient of less than 30 s while ensuring continuous video reproduction; 2) QAC fairly shares the available bandwidth, both in the case of a concurrent greedy TCP connection and in that of a concurrent video streaming flow; 3) Akamai underutilizes the available bandwidth due to the conservativeness of its heuristic algorithm; moreover, when abrupt reductions of the available bandwidth occur, video reproduction is affected by interruptions.
230 citations
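The feedback-control idea behind such a quality adaptation controller can be sketched as a simple proportional controller that drives the playout buffer toward a set-point: request less than the estimated bandwidth when the buffer is short, and more when it has headroom. The bitrate ladder, gain, and set-point below are illustrative assumptions, not QAC's actual design.

```python
LEVELS = [400_000, 800_000, 1_600_000, 3_200_000]  # bps, assumed ladder
TARGET_BUFFER_S = 10.0  # buffer set-point in seconds (assumed)
KP = 0.08               # proportional gain (assumed)

def choose_level(buffer_s, bandwidth_bps):
    """Proportional control: scale the requested rate around the bandwidth
    estimate according to how far the buffer is from its set-point, then
    snap down to the nearest feasible level in the ladder."""
    error = buffer_s - TARGET_BUFFER_S
    desired = bandwidth_bps * (1.0 + KP * error)
    feasible = [b for b in LEVELS if b <= desired]
    return feasible[-1] if feasible else LEVELS[0]

# Example: with a 4 s buffer and a 2 Mb/s estimate this requests 800 kb/s,
# letting the buffer rebuild; at a 12 s buffer it would request 1.6 Mb/s.
```

Framing the problem as a control loop is what gives the closed-form transient behavior the abstract reports, in contrast to the heuristic adaptation it compares against.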