Author

Alexander Raake

Bio: Alexander Raake is an academic researcher from Technische Universität Ilmenau. The author has contributed to research in topics: Video quality & Quality of experience. The author has an h-index of 31 and has co-authored 244 publications receiving 4,159 citations. Previous affiliations of Alexander Raake include MediaTech Institute & École Polytechnique Fédérale de Lausanne.


Papers
12 Mar 2013
TL;DR: The concepts and ideas presented mainly refer to the Quality of Experience of multimedia communication systems but may also be helpful for other areas where QoE is an issue; given the large number of contributors, the document does not reflect the opinion of each individual person at all points.
Abstract: This White Paper is a contribution of the European Network on Quality of Experience in Multimedia Systems and Services, Qualinet (COST Action IC 1003, see www.qualinet.eu), to the scientific discussion about the term "Quality of Experience" (QoE) and its underlying concepts. It resulted from the need to agree on a working definition for this term that facilitates the communication of ideas within a multidisciplinary group with a joint interest in multimedia communication systems, approached from different perspectives. Thus, the concepts and ideas cited in this paper mainly refer to the Quality of Experience of multimedia communication systems, but may also be helpful for other areas where QoE is an issue.

The Network of Excellence (NoE) Qualinet aims at extending the notion of network-centric Quality of Service (QoS) in multimedia systems by relying on the concept of Quality of Experience (QoE). The main scientific objective is the development of methodologies for subjective and objective quality metrics, taking into account current and new trends in multimedia communication systems as witnessed by the appearance of new types of content and interactions. A substantial scientific impact on fragmented efforts carried out in this field will be achieved by coordinating the research of European experts under the catalytic COST umbrella.

The White Paper has been compiled on the basis of a first open call for ideas, launched for the February 2012 Qualinet Meeting held in Prague, Czech Republic. The ideas were presented as short statements during that meeting, reflecting the views of the persons listed under the headline "Contributors" in the previous section. During the Prague meeting, the ideas were further discussed and consolidated into the general structure of the present document. An open call for authors was issued at that meeting, to which the persons listed as "Authors" in the previous section announced their willingness to contribute by preparing individual sections. For each section, a coordinating author (underlined in the author list preceding each section) was assigned to coordinate the writing of that section. The individual sections were then integrated and aligned by an editing group (listed as "Editors" in the previous section), and the entire document was iterated with the entire group of authors. Furthermore, the draft text was discussed with the participants of the Dagstuhl Seminar 12181 "Quality of Experience: From User Perception to Instrumental Metrics", held in Schloss Dagstuhl, Germany, May 1-4, 2012, and a number of changes were proposed, resulting in the present document.

As a result of the writing process and the large number of contributors, authors and editors, the document will not reflect the opinion of each individual person at all points. Still, we hope that it is found to be useful for everybody working in the field of Quality of Experience of multimedia communication systems, and most probably also beyond that field.

686 citations

Book
01 Jan 2006
TL;DR: This book analyzes the quality elements and quality features of VoIP speech transmission, models the effects of time-varying distortion and wideband speech, and derives extensions of the E-model for predicting speech quality in telephony.
Abstract: Preface. List of Abbreviations. Introduction.
1 Speech Quality in Telephony: 1.1 Speech. 1.2 Speech Quality.
2 Speech Quality Measurement Methods: 2.1 Auditory Methods. 2.2 Instrumental Methods. 2.3 Speech Quality Measurement Methods: Summary.
3 Quality Elements and Quality Features of VoIP: 3.1 Speech Transmission Using Internet Protocol. 3.2 Overview of Quality Elements. 3.3 Quality Elements and Related Features. 3.4 Quality Dimensions. 3.5 Combined Elements and Combined Features. 3.6 Listening and Conversational Features. 3.7 Desired Nature. 3.8 Open Questions. 3.9 From Elements to Features: Modeling VoIP Speech Quality. 3.10 Quality Elements and Quality Features of VoIP: Summary.
4 Time-Varying Distortion: Quality Features and Modeling: 4.1 Microscopic Loss Behavior. 4.2 Macroscopic Loss Behavior. 4.3 Interactivity. 4.4 Packet Loss and Combined Impairments. 4.5 Time-Varying Distortion: Summary.
5 Wideband Speech, Linear and Non-Linear Distortion: Quality Features and Modeling: 5.1 Wideband Speech: Improvement Over Narrowband. 5.2 Bandpass-Filtered Speech. 5.3 Wideband Codecs. 5.4 Desired Nature.
6 From Elements to Features: Extensions of the E-model: 6.1 E-model: Packet Loss. 6.2 E-model: Additivity. 6.3 E-model: Wideband, Linear and Non-Linear Distortion.
7 Summary and Conclusions.
8 Outlook.
Appendices: A Aspects of a Parametric Description of Time-Varying Distortion. B Simulation of Quality Elements. C Frequency Responses. D Test Data Normalization and Transformation. E E-model Algorithm. F Interactive Short Conversation Test Scenarios (iSCTs). G Auditory Test Settings and Results. H Modeling Details. I Glossary.
Bibliography. Index.

211 citations
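The E-model extensions developed in Chapter 6 build on ITU-T Recommendation G.107, where a transmission rating factor R is computed from additive impairment factors and then mapped to an estimated MOS. Below is a minimal Python sketch of that parametric computation, using the published G.107 relations for the packet-loss-dependent effective equipment impairment factor and the R-to-MOS mapping; the base rating of 93.2 and the G.711 example values are simplifying assumptions, not the book's full algorithm.

```python
# Minimal sketch of the parametric E-model rating computation along the
# lines of ITU-T G.107. Constants and the simplified structure here are
# illustrative, not the complete algorithm.

def effective_equipment_impairment(ie: float, bpl: float,
                                   ppl: float, burst_r: float = 1.0) -> float:
    """Packet-loss-dependent effective equipment impairment factor Ie,eff.

    ie      -- codec-specific equipment impairment factor Ie
    bpl     -- packet-loss robustness factor Bpl of the codec
    ppl     -- packet-loss probability in percent
    burst_r -- burst ratio (1.0 for random, independent loss)
    """
    return ie + (95.0 - ie) * ppl / (ppl / burst_r + bpl)


def r_to_mos(r: float) -> float:
    """Map the transmission rating factor R to an estimated MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6


# Example: G.711 (Ie = 0, Bpl = 4.3) under 2% random packet loss, starting
# from the default narrowband base rating Ro - Is - Id of roughly 93.2,
# with no delay impairment and no advantage factor A.
r = 93.2 - effective_equipment_impairment(ie=0.0, bpl=4.3, ppl=2.0)
print(f"R = {r:.1f}, estimated MOS = {r_to_mos(r):.2f}")
```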

Journal ArticleDOI
TL;DR: A direct comparison between four rating scales that are either included in existing international standards or proposed for future standardization activities related to video quality; the subjective data are examined in terms of participants' response behavior and the similarity and variability of subjective scores.
Abstract: With the constant evolution of video technology and the deployment of new video services, content providers and broadcasters always face the challenge of delivering an adequate video quality that meets end-users' expectations. The development of reliable quality testing and quality monitoring tools that can be used by broadcasters ultimately requires reliable objective video quality metrics. In turn, the validation of these objective models requires reliable subjective assessment, the most accurate representation of the quality perceived by end-users. Many different subjective assessment methodologies exist, and each has its advantages and drawbacks. One important element in a subjective testing methodology is the choice of the rating scale. In this paper, we make a direct comparison between four scales, which are either included in existing international standards or proposed to be used in future standardization activities related to video quality. We examine the subjective data from the points of view of response behavior from participants, and similarity and variability of subjective scores. We discuss these results within the context of the subjective quality assessment of high-definition video compressed and transmitted over error-prone networks. Our experimental data show no overall statistical differences between the different scales. Results also show that the single-stimulus presentation provides highly repeatable results even if different scales or groups of participants are used.

154 citations
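As a rough illustration of the per-condition analysis such subjective tests feed into, the sketch below computes a mean opinion score (MOS) and a 95% confidence interval from raw ratings. The ratings and the t-value are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: MOS and 95% confidence interval for one test condition.
import math
from statistics import mean, stdev

def mos_with_ci(ratings: list[float], t_crit: float = 2.262) -> tuple[float, float]:
    """Return (MOS, 95% CI half-width) for one condition.

    t_crit is Student's t for the sample size (2.262 for n = 10);
    look it up for the actual number of participants.
    """
    m = mean(ratings)
    ci = t_crit * stdev(ratings) / math.sqrt(len(ratings))
    return m, ci

# Hypothetical 5-point ACR ratings from ten participants for one condition.
ratings = [4, 5, 4, 3, 4, 5, 4, 4, 3, 4]
m, ci = mos_with_ci(ratings)
print(f"MOS = {m:.2f} +/- {ci:.2f}")
```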

Book
20 Mar 2014
TL;DR: This pioneering book develops definitions and concepts related to Quality of Experience in the context of multimedia- and telecommunications-related applications, systems and services and applies these to various fields of communication and media technologies.
Abstract: This pioneering book develops definitions and concepts related to Quality of Experience in the context of multimedia- and telecommunications-related applications, systems and services and applies these to various fields of communication and media technologies. The editors bring together numerous key protagonists of the new discipline of Quality of Experience and combine the state-of-the-art knowledge in a single volume.

151 citations

Journal ArticleDOI
TL;DR: This article presents a tutorial overview of models for estimating the quality experienced by users of speech transmission and communication services, serving as a guide to an appropriate usage of the multitude of current and emerging speech quality models.
Abstract: This article presents a tutorial overview of models for estimating the quality experienced by users of speech transmission and communication services. Such models can be classified as either parametric or signal based. Signal-based models use input speech signals measured at the electrical or acoustic interfaces of the transmission channel. Parametric models, on the other hand, depend on signal and system parameters estimated during network planning or at run time. This tutorial describes the underlying principles as well as advantages and limitations of existing models. It also presents new developments, thus serving as a guide to an appropriate usage of the multitude of current and emerging speech quality models.

135 citations
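The parametric versus signal-based distinction the tutorial draws can be summarized in code. The sketch below merely illustrates the two interfaces; the class and method names are invented for this example and do not come from the article or any standard API.

```python
# Minimal sketch of the two model families the tutorial distinguishes.
from abc import ABC, abstractmethod

class SpeechQualityModel(ABC):
    @abstractmethod
    def estimate_mos(self, **inputs) -> float: ...

class SignalBasedModel(SpeechQualityModel):
    """Uses speech signals measured at the electrical or acoustic
    interfaces of the channel, e.g. reference and degraded signals
    in the full-reference case."""
    def estimate_mos(self, reference=None, degraded=None) -> float:
        # ... compare perceptually transformed signals ...
        raise NotImplementedError

class ParametricModel(SpeechQualityModel):
    """Works from signal and system parameters available during network
    planning or at run time (codec, delay, packet loss, ...); needs no
    audio at all."""
    def estimate_mos(self, codec="G.711", delay_ms=0.0,
                     packet_loss_pct=0.0) -> float:
        # ... combine parameter-based impairment factors ...
        raise NotImplementedError
```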


Cited by

Journal ArticleDOI
01 Dec 2006
TL;DR: Models and Methods in Social Network Analysis presents the most important developments in quantitative models and methods for analyzing social network data that have appeared during the 1990s.
Abstract: Models and Methods in Social Network Analysis presents the most important developments in quantitative models and methods for analyzing social network data that have appeared during the 1990s. Intended as a complement to Wasserman and Faust’s Social Network Analysis: Methods and Applications, it is a collection of original articles by leading methodologists reviewing recent advances in their particular areas of network methods. Reviewed are advances in network measurement, network sampling, the analysis of centrality, positional analysis or blockmodeling, the analysis of diffusion through networks, the analysis of affiliation or “two-mode” networks, the theory of random graphs, dependence graphs, exponential families of random graphs, the analysis of longitudinal network data, graphic techniques for exploring network data, and software for the analysis of social networks.

855 citations
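As a small, concrete taste of one reviewed method area, centrality analysis, the sketch below computes two standard centrality indices with the networkx library on a toy graph; the graph and the choice of indices are illustrative assumptions, not examples from the book.

```python
# Minimal sketch: degree and betweenness centrality on a hypothetical graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")])

print(nx.degree_centrality(G))       # normalized degree per node
print(nx.betweenness_centrality(G))  # fraction of shortest paths through each node
```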

Posted Content
TL;DR: The authors investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available, provide qualitative and quantitative results highlighting specific weaknesses in existing metrics, and make recommendations for the future development of better automatic evaluation metrics.
Abstract: We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model's generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems.

838 citations

Proceedings ArticleDOI
25 Mar 2016
TL;DR: The authors investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available, provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and make recommendations for the future development of better automatic evaluation metrics for dialogue systems.
Abstract: We investigate evaluation metrics for dialogue response generation systems where supervised labels, such as task completion, are not available. Recent works in response generation have adopted metrics from machine translation to compare a model’s generated response to a single target response. We show that these metrics correlate very weakly with human judgements in the non-technical Twitter domain, and not at all in the technical Ubuntu domain. We provide quantitative and qualitative results highlighting specific weaknesses in existing metrics, and provide recommendations for future development of better automatic evaluation metrics for dialogue systems.

814 citations
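The paper's core procedure, scoring generated responses against a single reference with a machine-translation metric and correlating the metric with human judgements, can be sketched as follows. The responses and ratings below are hypothetical placeholders; BLEU via NLTK and a Spearman correlation stand in for the broader set of metrics and correlation analyses the paper uses.

```python
# Minimal sketch: correlate a single-reference BLEU score with human ratings.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import spearmanr

pairs = [
    # (reference response, generated response, human rating on 1-5)
    ("i am doing well thanks", "i am fine thank you", 4.0),
    ("see you tomorrow at noon", "what is your favourite colour", 1.0),
    ("sounds good to me", "sounds good", 3.5),
]

smooth = SmoothingFunction().method1  # avoid zero BLEU on short sentences
bleu = [sentence_bleu([ref.split()], hyp.split(), smoothing_function=smooth)
        for ref, hyp, _ in pairs]
human = [h for _, _, h in pairs]

rho, p = spearmanr(bleu, human)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```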