Author

Chaminda T. E. R. Hewage

Bio: Chaminda T. E. R. Hewage is an academic researcher from Cardiff Metropolitan University. The author has contributed to research on topics including Video quality and Scalable Video Coding, has an h-index of 17, and has co-authored 67 publications receiving 1,227 citations. Previous affiliations of Chaminda T. E. R. Hewage include the University of Surrey and Kingston University.


Papers
Journal Article (DOI)
TL;DR: The results show that VQM quality measures of the individual left and right views can be used effectively to predict overall image quality, while statistical measures such as PSNR and SSIM of the left and right views correlate well with the depth perception of 3D video.
Abstract: 3D (3-dimensional) video technologies are emerging to provide more immersive media content than conventional 2D (2-dimensional) video applications. More often than not, 3D video quality is measured through rigorous and time-consuming subjective evaluation campaigns, because 3D video quality is a combination of several perceptual attributes such as overall image quality, perceived depth, presence, naturalness, and eye strain. Hence, this paper investigates the relationship between subjective quality measures and several objective quality measures, namely PSNR, SSIM, and VQM, for 3D video content. 3D video content captured using both a stereo camera pair (two cameras for the left and right views) and colour-and-depth special range cameras is considered in this study. The results show that VQM quality measures of the individual left and right views (rendered left and right views for colour-and-depth sequences) can be used effectively to predict overall image quality, while statistical measures such as PSNR and SSIM of the left and right views correlate well with the depth perception of 3D video.
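
As an illustration of the per-view measurements discussed above, the sketch below computes PSNR and SSIM for one decoded view with scikit-image. The synthetic frames are stand-ins for real decoded video, and VQM is omitted because there is no standard open-source implementation to call here; this is a minimal sketch, not the paper's evaluation pipeline.

```python
# Minimal per-view PSNR/SSIM measurement, assuming 8-bit grayscale frames
# loaded as NumPy arrays (the frames below are synthetic placeholders).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def view_quality(reference: np.ndarray, degraded: np.ndarray) -> dict:
    """Compute PSNR and SSIM for one view of a stereo pair."""
    return {
        "psnr": peak_signal_noise_ratio(reference, degraded, data_range=255),
        "ssim": structural_similarity(reference, degraded, data_range=255),
    }

# Synthetic frames standing in for a decoded left view and its degraded copy.
rng = np.random.default_rng(0)
left_ref = rng.integers(0, 256, (288, 352), dtype=np.uint8)
noise = rng.integers(-5, 6, left_ref.shape)
left_deg = np.clip(left_ref.astype(int) + noise, 0, 255).astype(np.uint8)
print(view_quality(left_ref, left_deg))
```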

170 citations

Journal Article (DOI)
TL;DR: The correlation between subjective and objective evaluation of color plus depth video, including transmission over Internet protocol (IP), is investigated, and the subjective results are used to determine more accurate objective quality assessment metrics for 3D color plus depth video.
Abstract: In the near future, many conventional video applications are likely to be replaced by immersive video to provide a sense of "being there." This transition is facilitated by the recent advancement of 3D capture, coding, transmission, and display technologies. Stereoscopic video is the simplest form of 3D video available in the literature. "Color plus depth map" based stereoscopic video has attracted significant attention, as it can reduce storage and bandwidth requirements for the transmission of stereoscopic content over communication channels. However, quality assessment of coded video sequences can currently only be performed reliably using expensive and inconvenient subjective tests. To enable researchers to optimize 3D video systems in a timely fashion, it is essential that reliable objective measures are found. This paper investigates the correlation between subjective and objective evaluation of color plus depth video. The investigation is conducted for different compression ratios and different video sequences. Transmission over Internet protocol (IP) is also investigated. Subjective tests are performed to determine the image quality and depth perception of a range of differently coded video sequences, with packet loss rates ranging from 0% to 20%. The subjective results are used to determine more accurate objective quality assessment metrics for 3D color plus depth video.
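
The correlation step described above can be reproduced in outline as follows. The sketch assumes per-sequence mean opinion scores (MOS) and matching objective scores have already been collected; the numbers shown are placeholders, not the paper's data.

```python
# Correlating subjective MOS with an objective metric across test sequences.
from scipy.stats import pearsonr, spearmanr

mos = [4.2, 3.8, 3.1, 2.4, 1.9, 1.2]              # subjective scores (placeholder)
objective = [38.5, 36.1, 33.0, 29.8, 27.2, 24.5]  # e.g. PSNR in dB (placeholder)

r, _ = pearsonr(mos, objective)     # linear agreement
rho, _ = spearmanr(mos, objective)  # monotonic (rank) agreement
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```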

169 citations

Journal Article (DOI)
TL;DR: Investigated is the correlation between subjective and objective evaluations of colour plus depth map 3-D video; the subjective results are used to determine more accurate objective quality assessment metrics for colour plus depth map based stereoscopic video.
Abstract: The timely deployment of three-dimensional (3-D) video applications requires accurate objective quality measures, so that time consuming subjective tests can be avoided. Investigated is the correlation between subjective and objective evaluations of colour plus depth map 3-D video. Subjective tests are performed to determine the overall image quality and depth perception of a range of asymmetrically coded video sequences. The subjective results are used to determine more accurate objective quality assessment metrics for colour plus depth map based stereoscopic video.
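
One common way to derive a "more accurate" objective metric of the kind this abstract mentions is to fit MOS against per-component objective measures by least squares. The sketch below is illustrative only, with made-up numbers rather than the regression published in the paper.

```python
# Fitting MOS as a linear combination of colour-view and depth-map PSNR.
import numpy as np

# Placeholder per-sequence measurements.
psnr_colour = np.array([38.0, 35.5, 33.2, 30.1, 27.4])
psnr_depth = np.array([42.1, 39.0, 36.3, 33.8, 30.9])
mos = np.array([4.3, 3.9, 3.2, 2.5, 1.8])

# Solve MOS ~ a*PSNR_colour + b*PSNR_depth + c in the least-squares sense.
A = np.column_stack([psnr_colour, psnr_depth, np.ones_like(mos)])
(a, b, c), *_ = np.linalg.lstsq(A, mos, rcond=None)
print(f"MOS ~ {a:.3f}*PSNR_colour + {b:.3f}*PSNR_depth + {c:.3f}")
```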

110 citations

Proceedings Article (DOI)
07 Jun 2010
TL;DR: A Reduced-reference quality metric for 3D depth map transmission using the extracted edge information is proposed, motivated by the fact that the edges and contours of the depth map can represent different depth levels and hence can be used in quality evaluations.
Abstract: Due to the technological advancement of 3D video technologies and the availability of other supportive services such as high-bandwidth communication links, the introduction of immersive video services to the mass market is imminent. However, in order to provide better service to demanding customers, the transmission system parameters need to be changed “on the fly”. Measured 3D video quality at the receiver side can be used as feedback information to fine-tune the system. However, measuring 3D video quality using Full-reference quality metrics is not feasible, because the original 3D video sequence would be needed at the receiver side. Therefore, this paper proposes a Reduced-reference quality metric for 3D depth map transmission using extracted edge information. This work is motivated by the fact that the edges and contours of the depth map can represent different depth levels and hence can be used in quality evaluations. The performance of the method is evaluated across a range of Packet Loss Rates (PLRs) and shows acceptable results compared to its counterpart Full-reference quality metric.
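
In the spirit of the proposed metric, a reduced-reference comparison might extract an edge map from the original depth map at the sender, send it as low-rate side information, and score the received depth map by edge agreement. The Canny detector and F-score below are illustrative stand-ins, not the paper's exact formulation.

```python
# Edge-based reduced-reference scoring of a received depth map.
import numpy as np
from skimage.feature import canny

def edge_similarity(depth_ref: np.ndarray, depth_rx: np.ndarray) -> float:
    """F-score between edge maps of the reference and received depth maps."""
    e_ref = canny(depth_ref.astype(float))  # side info computed at the sender
    e_rx = canny(depth_rx.astype(float))    # edges of the received depth map
    tp = np.logical_and(e_ref, e_rx).sum()
    precision = tp / max(e_rx.sum(), 1)
    recall = tp / max(e_ref.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)

# Demo with a synthetic stepped depth map and a noisy received copy.
rng = np.random.default_rng(0)
ref = np.repeat(np.arange(4, dtype=float), 32)[None, :].repeat(128, axis=0)
rx = ref + rng.normal(0, 0.1, ref.shape)
print(f"edge similarity: {edge_similarity(ref, rx):.3f}")
```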

72 citations

Journal Article (DOI)
TL;DR: It is argued that no protocol is a silver bullet; a protocol should therefore be selected carefully, considering the sector requirements and environment. The protocols are listed against basic features and sector preference in a tabular format to facilitate selection.
Abstract: Advancement of consensus protocols in recent years has enabled distributed ledger technologies (DLTs) to find application and value in sectors beyond cryptocurrencies. Here we reviewed 66 known consensus protocols and classified them into philosophical and architectural categories, also providing a visual representation. As a case study, we focused on the public sector and highlighted potential protocols. We also listed these protocols against basic features and sector preference in a tabular format to facilitate selection. We argue that no protocol is a silver bullet; a protocol should therefore be selected carefully, considering the sector requirements and environment.

66 citations


Cited by
Journal Article (DOI)
TL;DR: The Limits of Organization, as discussed by the authors, is a seminal work in economic analysis and policy making, focusing on the role of organization in economic decision-making and its effect on economic outcomes.
Abstract: (1975). The Limits of Organization. Journal of Economic Issues: Vol. 9, No. 3, pp. 543-544.

1,138 citations

Journal Article (DOI)
TL;DR: The importance of various causes and aspects of visual discomfort is clarified; three-dimensional artifacts resulting from insufficient depth information in the incoming data signal, yielding spatial and temporal inconsistencies, are believed to be the most pertinent.
Abstract: Visual discomfort has been the subject of considerable research in relation to stereoscopic and autostereoscopic displays. In this paper, the importance of various causes and aspects of visual discomfort is clarified. When disparity values do not surpass a limit of 1°, which still provides sufficient range to allow satisfactory depth perception in stereoscopic television, classical determinants such as excessive binocular parallax and accommodation-vergence conflict appear to be of minor importance. Visual discomfort, however, may still occur within this limit and we believe the following factors to be the most pertinent in contributing to this: (1) temporally changing demand of accommodation-vergence linkage, e.g., by fast motion in depth; (2) three-dimensional artifacts resulting from insufficient depth information in the incoming data signal yielding spatial and temporal inconsistencies; and (3) unnatural blur. In order to adequately characterize and understand visual discomfort, multiple types of measurements, both objective and subjective, are required. © 2009 Society for Imaging Science and Technology. DOI: 10.2352/J.ImagingSci.Technol.2009.53.3.030201
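
As a worked example of the 1° limit mentioned above, on-screen disparity converts to visual angle as sketched below; the screen-and-viewer geometry is an assumption for illustration, not a value from the paper.

```python
# Converting on-screen disparity to angular disparity in degrees.
import math

def disparity_degrees(disparity_mm: float, viewing_distance_mm: float) -> float:
    """Visual angle subtended by an on-screen disparity at a given distance."""
    return math.degrees(2 * math.atan(disparity_mm / (2 * viewing_distance_mm)))

# e.g. a 17.5 mm screen disparity viewed from 1 m subtends about 1 degree.
print(f"{disparity_degrees(17.5, 1000):.2f} deg")
```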

990 citations

Journal Article (DOI)
TL;DR: A novel software-based fake detection method is presented that can be used in multiple biometric systems to detect different types of fraudulent access attempts; experimental results show that the proposed method is highly competitive compared with other state-of-the-art approaches.
Abstract: To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
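
A toy version of the idea reads as follows: describe each sample with a few general image-quality features and train a binary real/fake classifier. The two features and the classifier below are stand-ins for the paper's 25 features and evaluation protocol, and the data is synthetic.

```python
# Liveness detection from general image-quality features (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def iq_features(img: np.ndarray) -> np.ndarray:
    """Two crude no-reference quality cues: sharpness and global contrast."""
    gy, gx = np.gradient(img.astype(float))
    sharpness = np.mean(gx**2 + gy**2)  # gradient energy
    contrast = img.std()                # global contrast
    return np.array([sharpness, contrast])

# X: feature vectors for training images; y: 1 = real trait, 0 = fake.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, (40, 64, 64))
X = np.stack([iq_features(im) for im in imgs])
y = rng.integers(0, 2, 40)  # placeholder labels for the sketch
clf = LogisticRegression(max_iter=1000).fit(X, y)
```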

444 citations

Journal Article (DOI)
TL;DR: Experimental results confirm the hypothesis and show that the proposed framework significantly outperforms conventional 2D QA metrics when predicting the quality of stereoscopically viewed images that may have been asymmetrically distorted.
Abstract: We develop a framework for assessing the quality of stereoscopic images that have been afflicted by possibly asymmetric distortions. An intermediate image is generated which when viewed stereoscopically is designed to have a perceived quality close to that of the cyclopean image. We hypothesize that performing stereoscopic QA on the intermediate image yields higher correlations with human subjective judgments. The experimental results confirm the hypothesis and show that the proposed framework significantly outperforms conventional 2D QA metrics when predicting the quality of stereoscopically viewed images that may have been asymmetrically distorted.
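
A much-simplified version of this framework can be sketched as follows: synthesise an intermediate image from the two views and score it with a 2D metric. A plain average of the views is used here as a stand-in for the paper's perceptually weighted cyclopean combination, which models binocular rivalry.

```python
# Scoring a stereo pair via a simplified "cyclopean" intermediate image.
import numpy as np
from skimage.metrics import structural_similarity

def cyclopean(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Naive cyclopean estimate: equal-weight fusion of the two views."""
    return 0.5 * left.astype(float) + 0.5 * right.astype(float)

def stereo_quality(l_ref, r_ref, l_deg, r_deg) -> float:
    """SSIM between the reference and degraded cyclopean images."""
    ref = cyclopean(l_ref, r_ref)
    deg = cyclopean(l_deg, r_deg)
    return structural_similarity(ref, deg, data_range=255.0)
```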

348 citations

Proceedings Article (DOI)
Lang Tong
16 Aug 2004
TL;DR: It is shown that there exists a threshold on sensor outage probability above which a distributed random access protocol (such as ALOHA) outperforms the centralized deterministic schedulers.
Abstract: Summary form only given. The layered architecture is one of the key reasons behind the explosive and continuing growth of the Internet. There are, however, special networks in which cross-layer design is appropriate and may even be necessary. Two such cases are small wireless LANs and large-scale sensor networks. We first consider the design of medium access control (MAC) for a small wireless LAN based on a multiuser physical layer. We present a complete characterization of the throughput region and present conditions under which ALOHA is optimal. Next we consider the estimation of a signal field using data collected from a large-scale sensor network. The impact of medium access control on estimation is examined. We show that there exists a threshold on sensor outage probability above which a distributed random access protocol (such as ALOHA) outperforms centralized deterministic schedulers.
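
The threshold behaviour can be illustrated with a toy Monte Carlo model: a fixed round-robin schedule wastes every slot assigned to a failed sensor (throughput about 1 - q), while slotted ALOHA among the surviving sensors achieves roughly 1/e per slot, so random access wins once q exceeds about 0.63. This simplification is an assumption for illustration, not the paper's analysis.

```python
# Toy comparison of scheduled access vs slotted ALOHA under sensor outage.
import numpy as np

def aloha_throughput(n: int, q: float, slots: int = 100_000, seed: int = 0) -> float:
    """Per-slot success rate of slotted ALOHA among surviving sensors."""
    rng = np.random.default_rng(seed)
    alive = rng.random(n) > q            # each sensor fails with probability q
    k = int(alive.sum())
    if k == 0:
        return 0.0
    p = 1.0 / k                          # per-node transmit probability
    tx = rng.random((slots, k)) < p
    return float(np.mean(tx.sum(axis=1) == 1))  # exactly one transmitter

for q in (0.2, 0.5, 0.7, 0.9):
    print(f"q={q}: scheduled ~ {1 - q:.2f}, ALOHA ~ {aloha_throughput(50, q):.2f}")
```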

335 citations