
Showing papers by "Christian Timmerer published in 2014"


Journal ArticleDOI
TL;DR: A historic perspective on mulsemedia work is presented, current developments in the area are reviewed, and standardization efforts, via the MPEG-V standard, are described.
Abstract: Mulsemedia—multiple sensorial media—captures a wide variety of research efforts and applications. This article presents a historic perspective on mulsemedia work and reviews current developments in the area. These take place across the traditional multimedia spectrum—from virtual reality applications to computer games—as well as efforts in the arts, gastronomy, and therapy, to mention a few. We also describe standardization efforts, via the MPEG-V standard, and identify future developments and exciting challenges the community needs to overcome.

153 citations


Journal ArticleDOI
TL;DR: This article investigates the implementation of adaptive multimedia streaming within networks adopting the ICN approach and presents the approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking.
Abstract: Information-centric networking (ICN) has received a lot of attention in recent years and is a promising approach for the design of the Future Internet. As multimedia is the dominant traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for use within ICN, as the client has full control over the streaming session and can adapt the multimedia stream to its context (e.g., network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.
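The client-driven adaptation the abstract refers to can be illustrated with a minimal sketch (an assumption for illustration, not the paper's implementation): the client picks the highest-bitrate representation that fits its recently measured throughput, regardless of whether segments are fetched over plain HTTP or an ICN/CCN transport.

```python
# Minimal sketch of throughput-based representation selection, the core of a
# DASH-style client; the bitrates and safety margin are illustrative values.

def select_representation(bitrates_bps, measured_throughput_bps, safety_margin=0.8):
    """Return the highest bitrate not exceeding a fraction of the throughput."""
    budget = measured_throughput_bps * safety_margin
    candidates = [b for b in sorted(bitrates_bps) if b <= budget]
    return candidates[-1] if candidates else min(bitrates_bps)

# Example: four representations, ~3.5 Mbit/s measured over the last segments.
print(select_representation([500_000, 1_000_000, 2_500_000, 5_000_000], 3_500_000))
# -> 2500000
```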

63 citations


Proceedings ArticleDOI
20 Nov 2014
TL;DR: This paper provides a detailed overview of the crowdsourcing frameworks and evaluates them to aid researchers in the field of QoE assessment in the selection of frameworks and crowdsourcing platforms that are adequate for their experiments.
Abstract: The popularity of crowdsourcing for performing various tasks online has increased significantly in the past few years. The low cost and flexibility of crowdsourcing, in particular, have attracted researchers in the field of subjective multimedia evaluations and Quality of Experience (QoE). Since online assessment of multimedia content is challenging, several dedicated frameworks were created to aid in the design of the tests, including support for testing methodologies such as ACR, DCR, and PC, setting up the tasks, training sessions, screening of the subjects, and storage of the resulting data. In this paper, we focus on web-based frameworks for multimedia quality assessments that support commonly used crowdsourcing platforms such as Amazon Mechanical Turk and Microworkers. We provide a detailed overview of the crowdsourcing frameworks and evaluate them to aid researchers in the field of QoE assessment in selecting frameworks and crowdsourcing platforms that are adequate for their experiments.
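As a small illustration of the analysis such frameworks typically support, the sketch below (a generic example, not tied to any framework evaluated in the paper) aggregates crowdsourced ACR ratings into a mean opinion score with a confidence interval after a simple worker-screening step; the screening heuristic and its threshold are assumptions.

```python
# Aggregate ACR ratings (1..5) into MOS with a 95% confidence interval after
# dropping workers whose ratings barely vary (a common "clicker" heuristic).
from statistics import mean, stdev

def mos_with_ci(ratings, z=1.96):
    m = mean(ratings)
    ci = z * stdev(ratings) / len(ratings) ** 0.5 if len(ratings) > 1 else 0.0
    return m, ci

def screen_workers(ratings_per_worker, min_stdev=0.3):
    return {w: r for w, r in ratings_per_worker.items()
            if len(r) > 1 and stdev(r) >= min_stdev}

workers = {"w1": [4, 5, 4, 3], "w2": [3, 3, 3, 3], "w3": [5, 4, 2, 4]}
kept = screen_workers(workers)                      # drops w2 (constant ratings)
all_ratings = [x for r in kept.values() for x in r]
print(mos_with_ci(all_ratings))                     # -> (3.875, ~0.69)
```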

43 citations


Book ChapterDOI
01 Jan 2014
TL;DR: This chapter introduces the concept of Sensory Experience which aims to define the Quality of Experience going beyond audio-visual content, and utilizes a standardized representation format for sensory effects that are attached to traditional multimedia resources such as audio, video, and image contents.
Abstract: This chapter introduces the concept of Sensory Experience, which aims to define Quality of Experience (QoE) going beyond audio-visual content. In particular, we show how to utilize sensory effects such as ambient light, scent, wind, or vibration as additional dimensions contributing to the quality of the user experience. To this end, we utilize a standardized representation format for sensory effects that are attached to traditional multimedia resources such as audio, video, and image content. Sensory effects are rendered on special devices (e.g., fans, lights, motion chairs, scent emitters) in synchronization with the traditional multimedia resources and are intended to stimulate senses other than hearing and seeing, with the goal of increasing the QoE, in this context referred to as Sensory Experience.
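To make the idea of effects attached to a media timeline concrete, here is a conceptual sketch; the data structure is a simplified stand-in for an MPEG-V Sensory Effect Metadata description, not the actual XML schema, and the device names are hypothetical.

```python
# Dispatch sensory effects in sync with media playback time (simplified model).
from dataclasses import dataclass

@dataclass
class SensoryEffect:
    start_s: float      # media time at which the effect begins
    duration_s: float
    device: str         # e.g. "fan", "lamp", "vibration_chair"
    intensity: float    # normalized 0..1

effects = [
    SensoryEffect(2.0, 3.0, "lamp", 0.8),   # ambient light
    SensoryEffect(4.5, 1.5, "fan", 0.6),    # wind
]

def active_effects(media_time_s, effects):
    """Effects that should be rendered at the given media timestamp."""
    return [e for e in effects
            if e.start_s <= media_time_s < e.start_s + e.duration_s]

print([e.device for e in active_effects(5.0, effects)])  # -> ['fan']
```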

33 citations


Proceedings ArticleDOI
03 Nov 2014
TL;DR: This work introduces self-organized Inter-Destination Multimedia Synchronization (IDMS) for adaptive media streaming to synchronize multimedia playback among geographically distributed peers and proposes a Distributed Control Scheme (DCS) to negotiate a reference playback timestamp among the peers participating in an IDMS session.
Abstract: As social networks have become more pervasive, they have changed how we interact socially. The traditional TV experience has drifted from an event at a fixed location with family or friends to a location-independent and distributed social experience. In addition, more and more Video On-Demand services have adopted pull-based streaming. In order to provide a synchronized and immersive distributed TV experience we introduce self-organized Inter-Destination Multimedia Synchronization (IDMS) for adaptive media streaming. In particular, we adapt the principles of IDMS to MPEG-DASH to synchronize multimedia playback among geographically distributed peers. We introduce session management to MPEG-DASH and propose a Distributed Control Scheme (DCS) to negotiate a reference playback timestamp among the peers participating in an IDMS session. We evaluate our DCS with respect to scalability and the time required to negotiate the reference playback timestamp. Furthermore, we investigate how to compensate for asynchronism using Adaptive Media Playout (AMP) and define a temporal distortion metric for audio and video which allows the impact of playback rate variations to be modeled with respect to QoE. This metric is evaluated based on a subjective quality assessment using crowdsourcing.
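The kind of logic such an IDMS session needs can be sketched as follows; the averaging rule, catch-up window, and rate bounds are assumptions for illustration, not the paper's Distributed Control Scheme or distortion metric.

```python
# Negotiate a reference playback timestamp and derive an AMP playout rate
# that lets a lagging or leading peer converge towards it.

def reference_timestamp(reported_positions_s):
    """A simple candidate reference: the mean of the peers' playback positions."""
    return sum(reported_positions_s) / len(reported_positions_s)

def playout_rate(own_position_s, reference_s, catchup_window_s=10.0,
                 min_rate=0.75, max_rate=1.25):
    """Speed up or slow down so the offset is removed within the window."""
    offset = reference_s - own_position_s            # > 0: this peer is behind
    rate = 1.0 + offset / catchup_window_s
    return max(min_rate, min(max_rate, rate))

peers = [42.0, 43.5, 40.8]                           # playback positions in seconds
ref = reference_timestamp(peers)                     # 42.1
print(round(playout_rate(40.8, ref), 2))             # -> 1.13 (play slightly faster)
```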

25 citations


Proceedings ArticleDOI
02 Dec 2014
TL;DR: To compare the QoE performance of existing DASH-based Web clients within real-world environments using crowdsourcing, Google's YouTube player and two open source implementations of the MPEG-DASH standard are selected and a crowdsourcing campaign is run to determine the QoE of each implementation.
Abstract: Multimedia streaming over HTTP has gained momentum with the approval of the MPEG-DASH standard, and many research papers have evaluated various aspects thereof, but mainly within controlled environments. However, the actual behaviour of a DASH client within real-world environments has not yet been evaluated. The aim of this paper is to compare the QoE performance of existing DASH-based Web clients within real-world environments using crowdsourcing. To this end, we select Google's YouTube player and two open source implementations of the MPEG-DASH standard, namely DASH-JS from Alpen-Adria-Universitaet Klagenfurt and dash.js, the official reference client of the DASH Industry Forum. Based on a predefined content configuration, which is comparable among the clients, we run a crowdsourcing campaign to determine the QoE of each implementation in order to establish the current state of the art for MPEG-DASH systems within real-world environments. The gathered data and its analysis are presented in the paper, providing insights into the QoE performance of current Web-based adaptive HTTP streaming systems.

21 citations


Journal ArticleDOI
TL;DR: Two subjective quality assessments of sensory effects, such as light, in the context of the WWW are presented together with their results, and a Web browser plug-in for Mozilla Firefox that is able to render such sensory effects is described.
Abstract: More and more content in various formats is becoming available via the World Wide Web (WWW). Currently available Web browsers are able to access and interpret these contents (i.e., Web videos, text, images, and audio). These contents stimulate only senses such as audition or vision. Recently, it has been proposed to stimulate also other senses while consuming multimedia content, through so-called sensory effects. These sensory effects aim to enhance the ambient experience by providing effects such as light, wind, vibration, etc. The effects are represented as a Sensory Effect Metadata (SEM) description which is associated with multimedia content and is rendered on devices like fans, vibration chairs, or lamps. In this paper we present two subjective quality assessments comprising sensory effects, such as light, in the context of the WWW, together with the results achieved. The first assessment evaluates the influence of light effects on the Quality of Experience (QoE). The second assessment measures the impact of different settings for the color calculation on the viewing experience. Furthermore, we describe a Web browser plug-in for Mozilla Firefox which is able to render such sensory effects provided via the WWW.
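One plausible (hypothetical) example of such a color-calculation setting is averaging the pixels along the frame border to drive an ambient light effect; this is only an illustration of the concept, not the plug-in's actual algorithm.

```python
# Average the border pixels of a frame to obtain an ambient light color.

def border_average_color(frame, border=2):
    """frame: 2-D list of (r, g, b) tuples; returns the mean border color."""
    h, w = len(frame), len(frame[0])
    pixels = [frame[y][x] for y in range(h) for x in range(w)
              if y < border or y >= h - border or x < border or x >= w - border]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

# Tiny 4x4 test frame: a red picture yields a red ambient light color.
frame = [[(200, 30, 30)] * 4 for _ in range(4)]
print(border_average_color(frame))  # -> (200, 30, 30)
```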

18 citations


Journal ArticleDOI
TL;DR: Crowdsourced quality-of-experience assessments are more cost-effective and flexible than traditional in-lab evaluations but require careful test design, innovative incentive mechanisms, and technical expertise to address various implementation challenges.
Abstract: Crowdsourced quality-of-experience (QoE) assessments are more cost-effective and flexible than traditional in-lab evaluations but require careful test design, innovative incentive mechanisms, and technical expertise to address various implementation challenges.

18 citations


Proceedings ArticleDOI
02 Dec 2014
TL;DR: The MyMedia system is described, which allows users to search, share, and experience videos and live recordings via P2P at the best quality possible with respect to the available network capacity; it is available as open-source software for the Android operating system.
Abstract: Mobile peer-to-peer (P2P) computing, with applications such as video on demand, file sharing, and video conferencing, is gaining momentum based on new standards and technologies such as IETF PPSP, WiFi-Direct, and BitTorrent live streaming. In this paper, we describe the mobile system MyMedia, which allows users to search, share, and experience videos and live recordings via P2P at the best quality possible with respect to the available network capacity. In particular, the MyMedia system features a high-precision semantic P2P search and dynamic network-adaptive P2P live streaming of MPEG videos over HTTP, based on the ISO/IEC standard MPEG-DASH, from mobile to mobile devices in unstructured wireless P2P networks. An experimental evaluation of these features in terms of performance and energy consumption, and a first, limited evaluation of user acceptance at a film festival, showed that the MyMedia system is suitable for its purpose. The MyMedia system is available as open-source software for the Android operating system.
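As background for the search component, the sketch below shows TTL-limited flooding, the baseline query mechanism that semantic search schemes in unstructured P2P overlays build on; the topology, item names, and matching rule are hypothetical, and MyMedia's actual search is described in the paper.

```python
# TTL-limited flooding query in an unstructured P2P overlay.

def flood_query(overlay, start, query, ttl=3, visited=None):
    """overlay: dict peer -> (neighbors, local_items); returns peers with matches."""
    visited = visited if visited is not None else set()
    if start in visited or ttl < 0:
        return set()
    visited.add(start)
    neighbors, items = overlay[start]
    hits = {start} if any(query in item for item in items) else set()
    for n in neighbors:
        hits |= flood_query(overlay, n, query, ttl - 1, visited)
    return hits

overlay = {
    "A": (["B", "C"], ["holiday.mp4"]),
    "B": (["A", "C"], ["festival_live"]),
    "C": (["A", "B"], ["festival_recap", "cats.mp4"]),
}
print(sorted(flood_query(overlay, "A", "festival")))  # -> ['B', 'C']
```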

15 citations


Proceedings ArticleDOI
15 Dec 2014
TL;DR: A utility model is derived that allows estimating the QoE of AMP with respect to non-periodically and randomly selected content sections of a video sequence using crowdsourcing, and metrics are introduced that quantify the audio and video distortion caused by increasing or decreasing the playback rate.
Abstract: In the past decade, Adaptive Media Playout (AMP) has been studied intensively with respect to detecting when to increase or decrease the playback rate in order to maintain a certain buffer fill state. In this paper we subjectively assess the QoE of AMP with respect to non-periodically and randomly selected content sections of a video sequence by using crowdsourcing. Furthermore, we introduce metrics that quantify the audio and video distortion caused by increasing or decreasing the playback rate. With these preliminaries we study the correlation between the introduced metrics and the subjectively assessed QoE and derive a utility model that allows estimating the QoE from the introduced metrics. We instantiate and validate the model using the data gathered from the conducted study.
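In the spirit of such metrics, a hypothetical formalization could weight each adapted section by how far its playout rate deviates from 1.0; the exact metrics and utility model are those defined in the paper.

```python
# Toy temporal distortion value computed from playback-rate changes.

def temporal_distortion(sections):
    """sections: list of (duration_s, playback_rate); returns a scalar distortion."""
    total = sum(d for d, _ in sections)
    if total == 0:
        return 0.0
    return sum(d * abs(rate - 1.0) for d, rate in sections) / total

# 60 s of content: 40 s at normal speed, 15 s sped up by 10%, 5 s slowed by 20%.
print(round(temporal_distortion([(40, 1.0), (15, 1.1), (5, 0.8)]), 4))  # -> 0.0417
```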

14 citations


Patent
18 Aug 2014
TL;DR: In this paper, an apparatus (100) comprises an interface for receiving media information, wherein the media information indicates a segment data rate for each of a plurality of media data segments and further indicates a quality value for each of the plurality of media data segments.
Abstract: An apparatus (100) is provided. The apparatus (100) comprises an interface for receiving media information, wherein the media information indicates a segment data rate for each of a plurality of media data segments and further indicates a quality value for each of the plurality of media data segments. Moreover, the apparatus (100) comprises a processor for selecting one or more selected segments from the plurality of the media data segments depending on the segment data rates of the plurality of media data segments, depending on the quality values of the plurality of media data segments and depending on an available data rate of a communication resource. The interface is configured to transmit a request requesting the one or more selected segments. Moreover, the interface is configured to receive the one or more selected segments being transmitted on the communication resource.
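The selection idea described in the abstract can be illustrated with a simplified sketch: pick the highest-quality segment variant whose data rate fits the currently available data rate. This is only a conceptual illustration, not the claimed apparatus, and the field names and numbers are assumptions.

```python
# Pick a segment variant given per-variant data rates, quality values, and the
# available data rate of the communication resource.

def select_segment(variants, available_rate_bps):
    """variants: list of dicts with 'rate_bps' and 'quality'; returns one variant."""
    feasible = [v for v in variants if v["rate_bps"] <= available_rate_bps]
    if feasible:
        return max(feasible, key=lambda v: v["quality"])
    return min(variants, key=lambda v: v["rate_bps"])  # fall back to the cheapest

variants = [
    {"rate_bps": 800_000,   "quality": 62},
    {"rate_bps": 1_600_000, "quality": 78},
    {"rate_bps": 3_200_000, "quality": 90},
]
print(select_segment(variants, 2_000_000)["quality"])  # -> 78
```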

Journal ArticleDOI
TL;DR: The results of such a subjective quality assessment of audio-visual sequences, which are annotated with additional sensory effects such as ambient light, wind, and vibration using the MPEG-V standard, allow deriving a utility model representing the Quality of Sensory Experience (QuaSE), complementary to existing QoE models described in terms of Quality of Service (QoS) parameters.
Abstract: Current QoE research mainly focuses on single modalities (audio, visual) or combinations thereof. In our research, we propose annotating traditional multimedia content with additional sensory effects, such as ambient light, vibration, wind, and olfaction, which could potentially stimulate all human senses. Investigating the influence of individual sensory effects and combinations thereof is important in order to understand how these individual sensory effects influence the Quality of Experience (QoE) as a whole. In this article, we describe the results of such a subjective quality assessment of audio-visual sequences which are annotated with additional sensory effects such as ambient light, wind, and vibration using the MPEG-V standard. The results of this assessment allow us to derive a utility model representing the Quality of Sensory Experience (QuaSE), complementary to existing QoE models described in terms of Quality of Service (QoS) parameters. To validate our proposed utility model, we provide an example instantiation and compare it against the results of subjective quality assessments.
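As a generic illustration of deriving a utility model from subjective scores (not the QuaSE model or data from the article), the sketch below fits a simple least-squares line of MOS against a single hypothetical predictor; all numbers are made up for the example.

```python
# Closed-form simple linear regression of MOS against an explanatory variable.

def fit_linear(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

intensity = [0.0, 0.25, 0.5, 0.75, 1.0]   # hypothetical normalized effect intensity
mos =       [2.9, 3.3, 3.8, 4.0, 4.2]     # made-up example ratings
a, b = fit_linear(intensity, mos)
print(round(a, 2), round(b, 2))           # -> 1.32 2.98
```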

Patent
16 Jul 2014
TL;DR: In this paper, an apparatus (100) for transmitting user data to a server system (200) comprising one or more servers (210, 220) is provided, where the content encoder (110) is configured to encode each of the plurality of portions with a bandwidth-dependent quality which depends on a bandwidth that is available for transmitting the first data stream from the apparatus to the server system.
Abstract: An apparatus (100) for transmitting user data to a server system (200) comprising one or more servers (210, 220) is provided. The apparatus (100) comprises a content encoder (110) for encoding a plurality of portions of the user data to obtain a first data stream, wherein the content encoder (110) is configured to encode each of the plurality of portions with a bandwidth-dependent quality which depends on a bandwidth that is available for transmitting the first data stream from the apparatus (100) to the server system (200). Moreover, the apparatus (100) comprises a transmitter (120) for transmitting the first data stream from the apparatus (100) to the server system (200). The content encoder (110) is configured to encode two or more of said plurality of portions of the user data to obtain a second data stream, wherein the content encoder (110) is configured to encode each of said two or more of said plurality of portions with a predefined quality, wherein the bandwidth-dependent quality of one or more of the portions being encoded within the first data stream is lower than the predefined quality. The transmitter (120) is configured to transmit the second data stream from the apparatus (100) to the server system (200).

Proceedings ArticleDOI
05 May 2014
TL;DR: The impact on QoE of randomly selecting content sections for adapting the playout rate is evaluated and compared with an approach that exploits audio-visual features of the content in order to minimize this impact.
Abstract: Inter-Destination Multimedia Synchronization (IDMS) pushes social interactions to a new level. IDMS allows users to experience multimedia together with friends, colleagues, or family while communicating in real time. Synchronizing the playout of each participant to a reference playout time, however, is a challenging task in terms of Quality of Experience (QoE). A possible solution for carrying out the synchronization is Adaptive Media Playout (AMP), where the playout speed of the multimedia is increased or decreased. In this paper we evaluate the impact of these playout variations on the QoE by adopting a crowdsourcing approach. In particular, we investigate the impact of randomly selecting content sections for adapting the playout rate compared to our approach that exploits audio-visual features of the content in order to minimize the impact on the QoE.
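A plausible sketch of the feature-based selection idea (assumed features and weights, not the paper's exact method) is to rank candidate sections by a combined motion-activity and audio-energy score and adapt the least salient ones first.

```python
# Rank content sections so that playout-rate changes land where they are least
# likely to be noticed (low motion, low audio energy).

def rank_sections(sections, w_motion=0.5, w_audio=0.5):
    """sections: list of (section_id, motion_activity, audio_energy), all in 0..1."""
    return sorted(sections, key=lambda s: w_motion * s[1] + w_audio * s[2])

sections = [("intro", 0.2, 0.3), ("chase", 0.9, 0.8), ("dialogue", 0.3, 0.6)]
print([s[0] for s in rank_sections(sections)])  # -> ['intro', 'dialogue', 'chase']
```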

Journal ArticleDOI
TL;DR: Inter-destination multimedia synchronization and quality of experience are critical to the success of social TV, which integrates television viewing with social networking.
Abstract: Inter-destination multimedia synchronization and quality of experience are critical to the success of social TV, which integrates television viewing with social networking.

Proceedings ArticleDOI
03 Nov 2014
TL;DR: The goal of this tutorial is to provide an overview of adaptive media delivery, specifically in the context of HTTP adaptive streaming (HAS) including the recently ratified MPEG-DASH standard.
Abstract: In this tutorial we present the state of the art and the challenges ahead in over-the-top content delivery. In particular, the goal of this tutorial is to provide an overview of adaptive media delivery, specifically in the context of HTTP adaptive streaming (HAS), including the recently ratified MPEG-DASH standard. The main focus of the tutorial will be on the common problems in HAS deployments such as client design, QoE optimization, multi-screen and hybrid delivery scenarios, and synchronization issues. For each problem, we will examine proposed solutions along with their pros and cons. In the last part of the tutorial, we will look into the open issues and review the work in progress and future research directions.

Proceedings ArticleDOI
03 Nov 2014
TL;DR: This work offers the possibility to share events live with friends and colleagues through semantic search in unstructured peer-to-peer (P2P) networks for querying content in mobile ad hoc networks and dynamic adaptive streaming over HTTP for the actual delivery of the real-time media impressions.
Abstract: With the introduction of social networks like Facebook, Google+, and Twitter, the ways of sharing impressions of events have changed. We try to go a step further than social networks do by offering the possibility to share events live with friends and colleagues. Our approach is based on semantic search in unstructured peer-to-peer (P2P) networks for querying content in mobile ad hoc networks, and on dynamic adaptive streaming over HTTP for the actual delivery of the real-time media impressions.

Journal ArticleDOI
TL;DR: This special issue is concerned with the latest developments in state-of-the-art adaptive media streaming technologies and applications.
Abstract: This special issue is concerned with the latest developments in state-of-the-art adaptive media streaming technologies and applications.

Journal ArticleDOI
01 Jan 2014
TL;DR: Eight articles at the forefront of mulsemedia research are accepted, including one that explores the impact that perceiving scents with different degrees of pleasantness has on users, as measured by their electroencephalogram (EEG) activity.
Abstract: Multimedia applications have primarily engaged two of the human senses – sight and hearing. With recent advances in computational technology, however, it is possible to develop applications that also consider, integrate, and synchronize inputs across all senses, including the tactile, olfaction, and gustatory senses. This integration of multiple senses leads to a paradigm shift towards a new mulsemedia (multiple sensorial media) experience, aligning rich data from multiple human senses. Mulsemedia brings with it new and exciting challenges and opportunities in research, industry, commerce, and academia. There is little doubt that mulsemedia research and applications represent an incipient, but growing, interest to the stakeholder community. We are excited to have accepted eight articles at the forefront of mulsemedia research for this special issue.

In “Multimodal Hand and Foot Gesture Interaction for Handheld Devices” Lv et al. present a multimodal interaction smartphone game based on hand and foot interaction, providing coordinated output to both modalities in addition to audio, video, and vibro-tactile feedback. A user study evaluating the proof-of-concept application showed encouraging results, highlighting its feasibility and potential.

The use of vibro-tactile feedback to complement possibly saturated audio and visual channels is also the focus of “Designing Vibrotactile Codes to Communicate Verb Phrases” by Prasad et al. Here, the results of two user studies explore the ability of users to understand information presented through tactile means. Although employing the haptic channel to this end (i.e., using haptic codes) was inconclusive, this work has nonetheless laid a foundation for some undoubtedly interesting future studies.

Olfaction is another modality whose use in multimedia presentations is increasing and, in “Multiple-Scent Enhanced Multimedia Synchronization”, Murray et al. explore the issue of synchronisation when multimedia video is enhanced with two sequential olfactory streams. Specifically, they investigate the impact that the delay and jitter between two olfactory streams have on user quality of experience. The authors identify, among other noteworthy results, a minimum gap of 20 s between two consecutive olfactory inputs when used in mulsemedia presentations.

Olfaction-enhanced mulsemedia is also the theme of the invited article of the special issue, “EEG Correlates of Pleasant and Unpleasant Odor Perception” by Kroupi et al. The authors explore the impact that perceiving scents with different degrees of pleasantness has on users, as measured by their electroencephalogram (EEG) activity. Given that there are no primary odors (akin to the primary colours of red, green, and blue), it is unsurprising that designing a general odor classifier based on user EEG responses is a challenge. In their work, however, they show that the design of subject-specific odor classifiers, based on EEG responses, is entirely possible.

All of the Special Issue articles involve, in one way or another, user studies. Indeed, user acceptance is, one could argue, the key metric for the success and proliferation of mulsemedia applications. Evaluating the mulsemedia experience is the topic of “A Generic Utility Model Representing the Quality of Sensory Experience” by Rainer and Timmerer. In it, the authors report on the results of a user study which looked at how sensorial effects, such as light, wind, and vibration, can combine in mulsemedia presentations.

01 Jan 2014
TL;DR: A detailed introduction to emerging protocols (HTTP/2.0 and beyond) to be used in the context of adaptive media streaming, specifically DASH, is provided.
Abstract: The emerging MPEG standard Dynamic Adaptive Streaming over HTTP (MPEG-DASH) is designed for media delivery over the top of existing infrastructures and enables smooth multimedia streaming towards heterogeneous devices in both wired and wireless environments. The MPEG-DASH standard was designed to work with HTTP URLs but mandates neither the actual HTTP version nor the underlying protocols to be used. This paper provides a detailed introduction to emerging protocols (HTTP/2.0 and beyond) to be used in the context of adaptive media streaming, specifically DASH.
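To make the protocol aspect tangible, here is a minimal sketch of fetching DASH segments over HTTP/2 with the httpx library (its HTTP/2 support requires installing httpx[http2]); the server URL and segment names are placeholders, and a real client would parse the MPD and run an adaptation logic between requests.

```python
# Fetch DASH segments over a single multiplexed HTTP/2 connection.
import httpx

BASE = "https://example.com/dash/"                 # placeholder server
SEGMENTS = [f"video_720p_{i}.m4s" for i in range(1, 4)]

with httpx.Client(http2=True) as client:
    init = client.get(BASE + "video_720p_init.mp4")
    for name in SEGMENTS:
        r = client.get(BASE + name)
        # All requests reuse one connection; r.http_version reports "HTTP/2"
        # when the server negotiates it.
        print(name, r.status_code, r.http_version, len(r.content))
```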