Author

Mahsa T. Pourazad

Bio: Mahsa T. Pourazad is an academic researcher from the University of British Columbia. The author has contributed to research in topics: Video quality & Data compression. The author has an h-index of 21 and has co-authored 118 publications receiving 1,845 citations. Previous affiliations of Mahsa T. Pourazad include the University of Toronto & the University of Winnipeg.


Papers
01 Jan 2012
TL;DR: The limitations of current technologies prompted the International Standards Organization/International Electrotechnical Commission Moving Picture Experts Group (MPEG) and International Telecommunication Union-Telecommunication Standardization Sector Video Coding Experts Group (VCEG) to establish the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard.
Abstract: Digital video has become ubiquitous in our everyday lives; everywhere we look, there are devices that can display, capture, and transmit video. Recent advances in technology have made it possible to capture and display video material with ultrahigh definition (UHD) resolution. However, the current Internet and broadcasting networks do not have sufficient capacity to transmit large amounts of HD content, let alone UHD. The need for an improved transmission system is even more pronounced in the mobile sector because of the introduction of lightweight HD resolutions (such as 720p) for mobile applications. The limitations of current technologies prompted the International Standards Organization/International Electrotechnical Commission Moving Picture Experts Group (MPEG) and International Telecommunication Union-Telecommunication Standardization Sector Video Coding Experts Group (VCEG) to establish the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard.

281 citations
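
To put the capacity argument above in perspective, a quick back-of-the-envelope calculation of uncompressed bitrates shows the gap a new coding standard has to close. The sketch below is purely illustrative: the 8-bit depth, 4:2:0 chroma subsampling, and 30 fps figures are common assumptions, not numbers from the paper.

```python
# Rough uncompressed bitrate estimates for common video formats.
# Assumes 8 bits per sample and 4:2:0 chroma subsampling (an average
# of 1.5 samples per pixel); all numbers are illustrative only.

def raw_bitrate_mbps(width, height, fps, bits_per_sample=8, samples_per_pixel=1.5):
    """Uncompressed bitrate in megabits per second."""
    return width * height * samples_per_pixel * bits_per_sample * fps / 1e6

for name, (w, h) in {"720p": (1280, 720),
                     "1080p HD": (1920, 1080),
                     "4K UHD": (3840, 2160)}.items():
    print(f"{name}: {raw_bitrate_mbps(w, h, fps=30):,.0f} Mbps uncompressed")
```

At 30 fps this yields roughly 332 Mbps for 720p, 746 Mbps for 1080p, and nearly 3 Gbps for 4K UHD, which is why large compression gains over earlier standards were needed before UHD distribution could become practical.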


Journal ArticleDOI
TL;DR: In this paper, the relationship between 3D quality and bitrate at different frame rates was investigated; this question had previously been addressed for 2D video, but not for 3D.
Abstract: Increasing the frame rate of a 3D video generally results in improved Quality of Experience (QoE). However, higher frame rates involve a higher degree of complexity in capturing, transmission, storage, and display. The question that arises here is what frame rate guarantees high viewing quality of experience given the existing/required 3D devices and technologies (3D cameras, 3D TVs, compression, transmission bandwidth, and storage capacity). This question has already been addressed for the case of 2D video, but not for 3D. The objective of this paper is to study the relationship between 3D quality and bitrate at different frame rates. Our performance evaluations show that increasing the frame rate of 3D videos beyond 60 fps may not be visually distinguishable. In addition, our experiments show that when the available bandwidth is reduced, the highest possible 3D quality of experience can be achieved by adjusting (decreasing) the frame rate instead of increasing the compression ratio. The results of our study are of particular interest to network providers for rate adaptation in variable bitrate channels.

88 citations
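
The paper's finding suggests a simple adaptation rule for variable-bitrate channels: when bandwidth drops, lower the frame rate before raising the compression ratio. The sketch below is a hypothetical illustration of such a policy; the frame-rate ladder, the `kbps_needed_at` callback, and the toy bitrate model are assumptions, not part of the paper.

```python
# Hypothetical rate-adaptation rule based on the finding that lowering
# the frame rate can preserve 3D QoE better than compressing harder.
# The ladder and thresholds below are illustrative assumptions.

LADDER_FPS = [60, 48, 30, 24]  # candidate frame rates, highest first

def choose_encoding(available_kbps, kbps_needed_at):
    """Pick the highest frame rate whose bitrate fits the channel.

    kbps_needed_at(fps) is an assumed callback giving the bitrate
    required at frame rate fps for a fixed baseline quality; only if
    no frame rate fits do we fall back to stronger compression.
    """
    for fps in LADDER_FPS:
        if kbps_needed_at(fps) <= available_kbps:
            return {"fps": fps, "extra_compression": False}
    return {"fps": LADDER_FPS[-1], "extra_compression": True}

# Toy model: required bitrate scales roughly linearly with frame rate.
print(choose_encoding(6000, lambda fps: 150 * fps))
# -> {'fps': 30, 'extra_compression': False}
```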


Journal ArticleDOI
TL;DR: It was found that the proposed HS cancellation method successfully removes HS from lung sound signals while preserving the original fundamental components of the lung sounds.
Abstract: During lung sound recordings, heart sounds (HS) interfere with the clinical interpretation of lung sounds over the low-frequency components, an effect that is especially significant at low flow rates. Hence, it is desirable to cancel the effect of HS on lung sound records. In this paper, a novel HS cancellation method is presented. This method first localizes HS segments using multiresolution decomposition of the wavelet transform coefficients, then removes those segments from the original lung sound record and estimates the missing data via 2D interpolation in the time-frequency (TF) domain. Finally, the signal is reconstructed in the time domain. To evaluate the efficiency of the TF filtering, the average power spectral density (PSD) of the original lung sound segments with and without HS over four frequency bands from 20 to 300 Hz was calculated and compared with the average PSD of the filtered signals. Statistical tests show that there is no significant difference between the average PSD of the HS-free original lung sounds and the TF-filtered signal for all frequency bands at both low and medium flow rates. It was found that the proposed method successfully removes HS from lung sound signals while preserving the original fundamental components of the lung sounds.

64 citations
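
The pipeline described in the abstract (wavelet-based localization, TF-domain removal, 2D interpolation, reconstruction) maps naturally onto standard signal-processing tools. Below is a minimal sketch of that flow using PyWavelets and SciPy; the wavelet choice, decomposition level, envelope heuristic, and threshold are all assumptions for illustration, not the paper's actual parameters.

```python
# A minimal sketch of the described pipeline: localize heart-sound (HS)
# segments via wavelet multiresolution energy, blank those columns in
# the time-frequency (TF) plane, fill the gap by 2D interpolation, and
# reconstruct. All parameters here are illustrative assumptions.

import numpy as np
import pywt
from scipy.signal import stft, istft
from scipy.interpolate import griddata

def cancel_heart_sounds(x, fs, wavelet="db8", level=5, thresh=2.5):
    # 1) Localize HS: smooth the energy of the coarse approximation
    #    band, where low-frequency HS energy concentrates (heuristic).
    coeffs = pywt.wavedec(x, wavelet, level=level)
    approx = pywt.upcoef("a", coeffs[0], wavelet, level=level, take=len(x))
    win = int(0.05 * fs)
    env = np.convolve(approx**2, np.ones(win) / win, mode="same")
    hs_samples = env > thresh * np.median(env)

    # 2) Remove HS segments in the TF domain.
    f, t, Z = stft(x, fs=fs, nperseg=256)
    hs_cols = hs_samples[np.clip((t * fs).astype(int), 0, len(x) - 1)]
    mag = np.abs(Z)
    mag[:, hs_cols] = np.nan  # mark HS columns as missing

    # 3) 2D-interpolate the missing magnitudes from neighboring bins.
    T, F = np.meshgrid(t, f)
    known = ~np.isnan(mag)
    mag = griddata((F[known], T[known]), mag[known], (F, T),
                   method="linear", fill_value=0.0)

    # 4) Reconstruct in the time domain, reusing the original phase.
    _, x_rec = istft(mag * np.exp(1j * np.angle(Z)), fs=fs, nperseg=256)
    return x_rec[: len(x)]
```

Reusing the original phase in step 4 is a simplification; the paper does not specify that detail, so treat it as one plausible reconstruction choice.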


Cited by

Journal ArticleDOI
TL;DR: This paper addresses the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure, and proposes a deep convolutional neural network (CNN) that is specifically designed to take into account the challenges of predicting HDR values.
Abstract: Camera sensors can only capture a limited range of luminance simultaneously, so in order to create high dynamic range (HDR) images, a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed to take into account the challenges of predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution, visually convincing HDR results in a wide range of situations, and that it generalizes well to the reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare against existing methods for HDR expansion and show high-quality results for image-based lighting as well. Finally, we evaluate the results in a subjective experiment performed on an HDR display, which shows that the reconstructed HDR images are visually convincing, with large improvements compared to existing methods.

374 citations
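
The core idea, predicting plausible values only where the sensor saturated while keeping the measured pixels elsewhere, can be illustrated in a few lines of PyTorch. The tiny network, the log-domain prediction, and the fixed saturation threshold below are assumptions made for this sketch; the published architecture is substantially larger.

```python
# A minimal sketch of single-exposure HDR reconstruction: a small
# encoder-decoder CNN predicts HDR values, which are blended into the
# LDR input only where pixels are saturated. Layer sizes, log-domain
# output, and the 0.95 saturation threshold are illustrative choices.

import torch
import torch.nn as nn

class TinyHDRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, ldr):
        # Predict log-luminance for the whole frame.
        return self.decoder(self.encoder(ldr))

def reconstruct_hdr(net, ldr, sat_level=0.95):
    """Blend the network prediction into saturated pixels only."""
    mask = (ldr.max(dim=1, keepdim=True).values > sat_level).float()
    pred = torch.exp(net(ldr))  # back from the log domain
    return (1 - mask) * ldr + mask * pred

net = TinyHDRNet()
print(reconstruct_hdr(net, torch.rand(1, 3, 64, 64)).shape)
# -> torch.Size([1, 3, 64, 64])
```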


Journal ArticleDOI
TL;DR: This paper gives a comprehensive survey of the evolution of video quality assessment methods, analyzing their characteristics, advantages, and drawbacks, and identifying future research directions for QoE.
Abstract: Quality of experience (QoE) is the perceptual quality of service (QoS) from the users' perspective. For video services, the relationship between QoE and QoS (such as coding parameters and network statistics) is complicated because users' perceived video quality is subjective and varies across environments. Traditionally, QoE is obtained from subjective tests, where human viewers evaluate the quality of test videos in a laboratory environment. To avoid the high cost and offline nature of such tests, objective quality models have been developed to predict QoE based on objective QoS parameters, but this is still an indirect way to estimate QoE. With the rising popularity of video streaming over the Internet, data-driven QoE analysis models have newly emerged thanks to the availability of large-scale data. In this paper, we give a comprehensive survey of the evolution of video quality assessment methods, analyzing their characteristics, advantages, and drawbacks. We also introduce QoE-based video applications and, finally, identify future research directions for QoE.

296 citations
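
As a concrete (and deliberately oversimplified) illustration of the objective-model idea the survey describes, the toy function below maps two measurable QoS parameters, bitrate and stalling, to a 1-5 MOS-like score. The functional form and every coefficient are invented for this sketch; they come from neither the survey nor any standardized model.

```python
# A toy objective QoE model: map measurable QoS parameters to a
# mean-opinion-score-like value in [1, 5]. Purely illustrative; the
# form and coefficients are assumptions, not from the survey.

import math

def predict_mos(bitrate_kbps, stall_seconds, duration_seconds):
    quality = 1 + 4 * (1 - math.exp(-bitrate_kbps / 2000))   # saturating gain
    stall_penalty = 2.0 * stall_seconds / max(duration_seconds, 1)
    return max(1.0, min(5.0, quality - stall_penalty))

print(round(predict_mos(4000, stall_seconds=2, duration_seconds=60), 2))
# -> 4.39
```

A data-driven model, as surveyed in the paper, would instead fit such a mapping from large-scale viewing logs rather than hand-picking coefficients.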