Author

M.O. Bici

Bio: M.O. Bici is an academic researcher from Middle East Technical University. The author has contributed to research on the topics of wavelet transforms and forward error correction, has an h-index of 4, and has co-authored 9 publications receiving 47 citations.

Papers
Journal ArticleDOI
TL;DR: The goal of this paper is to extend the original OPQ method with advanced research methods that have become popular in related research, and with a component model that generalizes individual attributes into a shared terminology of Quality of Experience.
Abstract: The Open Profiling of Quality (OPQ) is a mixed-methods approach combining a conventional quantitative psychoperceptual evaluation with a qualitative descriptive quality evaluation based on naive participants' individual vocabulary. The method targets the evaluation of heterogeneous and multimodal stimulus material. The current OPQ data collection procedure provides a rich pool of data, but the analysis has neither taken full advantage of it to build a complete understanding of the phenomenon under study, nor has the analysis procedure been probed with alternative methods. The goal of this paper is to extend the original OPQ method with advanced research methods that have become popular in related research, and with a component model that generalizes individual attributes into a shared terminology of Quality of Experience. We conduct an extensive subjective quality evaluation study for 3D video on a mobile device with heterogeneous stimuli, varying factors at the content, media (coding, concealment, and slice modes), and transmission (channel loss rate) levels. The results show that the advanced analysis procedures not only complement each other but also yield a deeper understanding of Quality of Experience.

17 citations

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper addresses the problem of 3D model transmission over error-prone channels using multiple description coding (MDC) and achieves competitive compression performance compared with existing multiple description methods.
Abstract: In this paper, we address the problem of 3D model transmission over error-prone channels using multiple description coding (MDC). The objective of MDC is to encode a source into multiple bitstreams, called descriptions, supporting multiple quality levels of decoding. Unlike layered coding techniques, each description can be decoded independently to approximate the model. In the proposed approach, the mesh geometry is compressed using multiresolution geometry compression, and multiple descriptions are then obtained by applying multiple description scalar quantization (MDSQ) to the resulting wavelet coefficients. Experimental results show that the proposed approach achieves competitive compression performance compared with existing multiple description methods.
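The abstract gives no pseudocode, so the following is only a minimal sketch of the MDSQ idea it builds on: a central uniform quantizer followed by the simplest two-diagonal index assignment, applied to one value at a time (e.g., a single wavelet coefficient). The step size and the side reconstruction points are illustrative assumptions, not the paper's design.

```python
def mdsq_encode(x, step=1.0):
    """Central uniform quantization, then a staggered (two-diagonal)
    index assignment that splits the index across two descriptions."""
    n = round(x / step)   # central quantizer index
    i = (n + 1) // 2      # side index for description 1
    j = n // 2            # side index for description 2
    return i, j

def mdsq_decode(i=None, j=None, step=1.0):
    """Central decoding when both indices arrive, side decoding otherwise."""
    if i is not None and j is not None:
        return (i + j) * step        # invert the index assignment exactly
    if i is not None:
        return (2 * i - 0.5) * step  # only {2i-1, 2i} possible: midpoint
    if j is not None:
        return (2 * j + 0.5) * step  # only {2j, 2j+1} possible: midpoint
    raise ValueError("no description received")
```

With both descriptions the central index is recovered exactly; with one, the decoder knows only a pair of adjacent cells and reconstructs at their midpoint, which is the graceful degradation MDC trades redundancy for.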

14 citations

Proceedings ArticleDOI
07 May 2007
TL;DR: This work presents a multiple description coding (MDC) scheme for compressed three-dimensional (3D) meshes based on forward error correction (FEC) that allows flexible allocation of coding redundancy for reliable transmission over error-prone channels.
Abstract: This work presents a multiple description coding (MDC) scheme for compressed three-dimensional (3D) meshes based on forward error correction (FEC). It allows flexible allocation of coding redundancy for reliable transmission over error-prone channels. The proposed scheme builds on progressive geometry compression, performed using a wavelet transform and a modified SPIHT algorithm. The algorithm is optimized for varying packet loss rates (PLR) and channel bandwidths, and modeling the distortion-rate function considerably decreases the computational complexity of the bit allocation.
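As a hedged sketch of the kind of optimization the abstract describes, the code below evaluates the expected distortion of a PET-style unequal protection profile over an embedded bitstream: level l carries m_l source bytes plus N - m_l Reed-Solomon parity bytes, so it decodes once any m_l of the N packets arrive. The exponential distortion-rate model and all numbers are assumptions standing in for the paper's fitted D-R curve.

```python
from math import comb

def p_receive(N, r, p):
    """Probability that exactly r of N packets arrive, i.i.d. loss prob p."""
    return comb(N, r) * (1 - p) ** r * p ** (N - r)

def expected_distortion(m, N, p, dr_curve):
    """m[l] = source bytes in level l (RS parity fills the level to N bytes),
    so level l of the embedded stream decodes iff >= m[l] packets arrive.
    dr_curve maps decoded source bytes to distortion (the modeled D-R fit)."""
    m = sorted(m)  # earlier levels carry fewer source bytes, i.e. more parity
    total = 0.0
    for r in range(N + 1):
        decoded = sum(ml for ml in m if ml <= r)  # decodable stream prefix
        total += p_receive(N, r, p) * dr_curve(decoded)
    return total

# Assumed exponential D-R fit: D(b) = D0 * 2**(-b / b0)
dr = lambda b: 1000.0 * 2.0 ** (-b / 200.0)
print(expected_distortion([2, 4, 6, 8], N=8, p=0.1, dr_curve=dr))
```

A bit-allocation routine would search over profiles m to minimize this expectation subject to the channel bandwidth; a closed-form D-R model is what keeps that search cheap.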

8 citations

Proceedings ArticleDOI
12 Nov 2007
TL;DR: Experimental results show that the proposed method achieves considerably better expected quality compared to previous packet-loss resilient schemes.
Abstract: This paper presents an efficient joint source-channel coding scheme based on forward error correction (FEC) for three-dimensional (3D) models. The system employs a wavelet-based zero-tree 3D mesh coder built on progressive geometry compression (PGC). Reed-Solomon (RS) codes are applied to the embedded output bitstream to add resiliency to packet losses, and a two-state Markov channel model is employed to model those losses. The proposed method applies approximately optimal and unequal FEC across packets, so the scheme is scalable to varying network bandwidths and packet loss rates (PLR). In addition, the distortion-rate (D-R) curve is modeled to decrease the computational complexity. Experimental results show that the proposed method achieves considerably better expected quality than previous packet-loss-resilient schemes.
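The two-state Markov packet-loss model mentioned above is usually realized as a Gilbert-Elliott channel. Below is a minimal simulator of that model; the transition probabilities and per-state loss rates are assumed values for illustration, since the abstract does not give the paper's parameters.

```python
import random

def gilbert_elliott(n, p_gb, p_bg, loss_bad=1.0, loss_good=0.0, seed=0):
    """Simulate n packets over a two-state Markov channel.
    p_gb = P(good -> bad), p_bg = P(bad -> good); each state drops a
    packet with its own probability. Returns True where a packet is lost."""
    rng = random.Random(seed)
    bad = False
    losses = []
    for _ in range(n):
        bad = rng.random() < ((1 - p_bg) if bad else p_gb)
        losses.append(rng.random() < (loss_bad if bad else loss_good))
    return losses

# Stationary loss rate is p_gb / (p_gb + p_bg) when the bad state always drops:
trace = gilbert_elliott(100_000, p_gb=0.02, p_bg=0.30)
print(sum(trace) / len(trace))  # ~0.0625
```

Unlike an i.i.d. loss model, this produces bursty losses, which is closer to real networks and is the regime the cross-packet FEC is designed to withstand.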

5 citations

Proceedings ArticleDOI
14 Nov 2005
TL;DR: Experimental results show that basic block matching gives better results than ground-truth disparity, especially in occluded regions and at boundaries.
Abstract: Disparity compensation is the most widely used method for compressing stereo image pairs effectively. In this paper we examine the effects of using different disparity maps, and their properties, in an embedded JPEG2000-based disparity-compensated stereo image coder. These properties include the block size, the estimation method, and the resulting entropy of the disparity map. Experimental results show that basic block matching gives better results than ground-truth disparity, especially in occluded regions and at boundaries.
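Since the result hinges on what "basic block matching" computes, here is a minimal SAD block-matching disparity estimator for a rectified grayscale pair; the block size and search range are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def block_matching_disparity(left, right, block=8, max_disp=64):
    """For each block of the left image, search horizontally in the right
    image and keep the shift with the smallest sum of absolute differences."""
    H, W = left.shape
    dmap = np.zeros((H // block, W // block), dtype=np.int32)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.int32)
            best_sad, best_d = None, 0
            for d in range(min(max_disp, x) + 1):  # candidate leftward shifts
                cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                sad = int(np.abs(ref - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            dmap[by, bx] = best_d
    return dmap
```

A blockwise-constant map like this has lower entropy than pixel-accurate ground truth, which is one plausible reason it wins inside a rate-constrained coder, given that the paper lists the map's entropy among the properties examined.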

3 citations


Cited by
Journal ArticleDOI
TL;DR: This article presents MFA, reviews recent extensions, and illustrates it with a detailed example showing that the common factor scores can be obtained by replacing the original normalized data tables with the normalized factor scores obtained from the PCA of each of these tables.
Abstract: Multiple factor analysis (MFA), also called multiple factorial analysis, is an extension of principal component analysis (PCA) tailored to handle multiple data tables that measure sets of variables collected on the same observations.
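As a compact sketch of the MFA computation described above (assuming column-centered tables and the standard weighting by each table's first singular value; this is not code from the article):

```python
import numpy as np

def mfa_factor_scores(tables):
    """Multiple factor analysis, minimal form: center each table, divide it
    by its first singular value so no single table dominates, concatenate,
    and run one global PCA (via SVD). Returns the common factor scores."""
    weighted = []
    for X in tables:
        X = X - X.mean(axis=0)                      # center columns
        s1 = np.linalg.svd(X, compute_uv=False)[0]  # first singular value
        weighted.append(X / s1)
    Z = np.hstack(weighted)                         # observations x all variables
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    return U * S                                    # common factor scores

# Three hypothetical tables measured on the same 50 observations
rng = np.random.default_rng(0)
tables = [rng.normal(size=(50, k)) for k in (4, 6, 3)]
print(mfa_factor_scores(tables).shape)  # (50, 13)
```

The TL;DR's observation corresponds to replacing each weighted table by its own normalized PCA factor scores before the global step, which leaves the common factor scores unchanged.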

333 citations

Journal ArticleDOI
TL;DR: 3DTV coding technology is maturing; however, the research area is relatively young compared to the coding of other types of media, and there is still much room for improvement and for the development of new algorithms.
Abstract: Research efforts on 3DTV technology have recently been strengthened worldwide, covering the whole media processing chain from capture to display. Different 3DTV systems rely on different 3D scene representations that integrate various types of data, and efficient coding of these data is crucial for the success of 3DTV. Compression of pixel-type data, including stereo video, multiview video, and associated depth or disparity maps, extends available principles of classical video coding. Powerful algorithms and open international standards for multiview video coding and for coding of video-plus-depth data are available or under development, and will provide the basis for the introduction of various 3DTV systems and services in the near future. Compression of 3D mesh models has also reached a high level of maturity: for static geometry, a variety of powerful algorithms are available to efficiently compress vertices and connectivity, while compression of dynamic 3D geometry is currently a more active field of research, in which temporal prediction is an important mechanism for removing redundancy from animated 3D mesh sequences. Error resilience is important for transmission over error-prone channels, and multiple description coding (MDC) is a suitable way to protect data; MDC of still images and 2D video has already been widely studied, whereas multiview video and 3D meshes have been addressed only recently. Intellectual property protection of 3D data by watermarking is a pioneering research area as well; the 3D watermarking methods in the literature can be classified into three groups according to the dimensions of the main components of the scene representation and of the resulting components after applying the algorithm. In general, 3DTV coding technology is maturing, and systems and services may enter the market in the near future. However, the research area is relatively young compared to the coding of other types of media, so there is still much room for improvement and new algorithm development.
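To make the temporal-prediction remark concrete (a generic illustration, not any specific coder covered by the survey): predicting each frame of an animated mesh from the previous one leaves small residuals that entropy-code cheaply when motion is smooth.

```python
import numpy as np

def temporal_residuals(frames):
    """frames: (T, V, 3) array of T mesh frames with V vertices each.
    Keep the first frame plus per-frame deltas as the data to encode."""
    return frames[0], np.diff(frames, axis=0)

def reconstruct(first, residuals):
    """Invert the prediction by cumulative summation."""
    rest = first + np.cumsum(residuals, axis=0)
    return np.concatenate([first[None], rest])
```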

326 citations

Proceedings ArticleDOI
05 Jun 2013
TL;DR: The QoE-EEG-Analyser is proposed as a solution that automatically assesses and quantifies the impact of various factors contributing to a user's QoE with multimedia services, in a non-invasive way, without requiring the user to provide input about their perceived visual quality.
Abstract: Multimedia users are becoming increasingly quality-aware as technological advances make the creation and delivery of high-definition multimedia content ubiquitous. While much research has been conducted on multimedia quality assessment, most existing solutions come with their own limitations, with particular solutions being more suitable for assessing particular aspects of a user's Quality of Experience (QoE). In this context, there is an increasing need for innovative solutions to assess a user's QoE with multimedia services. This paper proposes the QoE-EEG-Analyser, a solution to automatically assess and quantify the impact of various factors contributing to a user's QoE with multimedia services. The proposed approach makes use of the participant's frustration level measured with a consumer-grade EEG system, the Emotiv EPOC. The main advantage of the QoE-EEG-Analyser is that it enables continuous assessment of various QoE factors over the entire testing duration, in a non-invasive way, without requiring the user to provide input about their perceived visual quality. Preliminary subjective results have shown that frustration can indicate a user's perceived QoE.
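The abstract does not detail the analysis pipeline, so the sketch below is only one plausible reading of "continuous assessment": averaging a continuously sampled frustration trace within each stimulus interval to get a per-stimulus score. The sampling rate, value range, and interval format are all assumptions, and no Emotiv API is used or implied.

```python
import numpy as np

def per_stimulus_scores(t, frustration, intervals):
    """t: sample timestamps (s); frustration: values assumed in [0, 1];
    intervals: (label, t_start, t_end) tuples for each stimulus shown.
    Returns the mean frustration recorded while each stimulus played."""
    scores = {}
    for label, t0, t1 in intervals:
        mask = (t >= t0) & (t < t1)
        scores[label] = float(frustration[mask].mean())
    return scores

# Hypothetical 10-minute session sampled at 8 Hz with two test clips
t = np.arange(0.0, 600.0, 1 / 8)
trace = np.clip(0.3 + 0.1 * np.random.default_rng(1).standard_normal(t.size), 0, 1)
print(per_stimulus_scores(t, trace, [("clipA", 60, 120), ("clipB", 300, 360)]))
```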

47 citations

Book ChapterDOI
01 Jan 2014
TL;DR: The literature within the User Experience domain can be of great value for the Quality of Experience community, especially if the latter intends to put the recently proposed, more holistic definition of Quality of Experience into practice.
Abstract: The current chapter discusses the concepts of Quality of Experience and User Experience. As Quality of Experience was introduced in the previous chapter, this chapter starts with an introduction to the User Experience concept at the level of theory and practice: first its origins, definitions, and key attributes are discussed, followed by an overview of methods and approaches for evaluating User Experience in practice. We then compare the two concepts. While a number of similarities are identified, they are exceeded by the differences, which lie at both the theoretical-conceptual and the methodological-practical level. It is concluded that User Experience is the more mature concept, at the level of both theory and practice, so the literature within the User Experience domain can be of great value for the Quality of Experience community, especially if the latter intends to put the recently proposed, more holistic definition of Quality of Experience into practice.

39 citations

Journal ArticleDOI
TL;DR: This paper reviews stereo/multiview picture quality from an engineering perspective, with a focus on recent or emerging approaches and technologies used in 3D systems, addressing in particular depth, multiview, display, and viewer issues, as well as possible measurements and standards for 3D quality.
Abstract: Stereoscopic 3D content brings with it a variety of complex technological and perceptual issues. For the percept of depth to be convincing, consistent, and comfortable, a large number of parameters throughout the imaging and processing pipeline need to be matched correctly. In practice, tradeoffs are inevitable, which may then affect the quality or comfort of the 3D viewing experience. This paper reviews stereo/multiview picture quality from an engineering perspective. With a focus on recent or emerging approaches and technologies used in 3D systems, it addresses in particular depth issues, multiview issues, display issues, viewer issues, as well as possible measurements and standards for 3D quality.

32 citations