Author

Dimitrios Tzovaras

Bio: Dimitrios Tzovaras is an academic researcher at the Information Technology Institute. He has contributed to research topics including computer science and motion estimation, has an h-index of 40, and has co-authored 691 publications receiving 6,540 citations. His previous affiliations include the Aristotle University of Thessaloniki and the University of Western Macedonia.


Papers
Journal ArticleDOI
TL;DR: The proposed face recognition technique is based on the implementation of the principal component analysis algorithm and the extraction of depth and colour eigenfaces; experimental results show significant gains attained with the addition of depth information.
Abstract: In the present paper a face recognition technique is developed based on depth and colour information. The main objective of the paper is to evaluate three different approaches (colour, depth, combination of colour and depth) for face recognition and quantify the contribution of depth. The proposed face recognition technique is based on the implementation of the principal component analysis algorithm and the extraction of depth and colour eigenfaces. Experimental results show significant gains attained with the addition of depth information.
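
As a rough illustration of the eigenface approach described in the abstract, the sketch below runs PCA on flattened face vectors and fuses colour and depth by simple feature concatenation. The fusion strategy, the toy data, and all function names are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of the colour + depth eigenface idea (illustrative only).
import numpy as np

def eigenfaces(train, n_components):
    """PCA on flattened face vectors; returns the mean and top eigenvectors."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data yields the principal components directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(faces, mean, components):
    return (faces - mean) @ components.T

# Toy data: 100 faces, each a 32x32 colour image plus a 32x32 depth map.
rng = np.random.default_rng(0)
colour = rng.random((100, 32 * 32 * 3))
depth = rng.random((100, 32 * 32))

# Combined representation: concatenate the two modalities before PCA.
combined = np.hstack([colour, depth])
mean, comps = eigenfaces(combined, n_components=20)
features = project(combined, mean, comps)

# Recognition then reduces to nearest-neighbour matching in feature space
# (the probe trivially matches itself in this toy example).
probe = features[0]
match = np.argmin(np.linalg.norm(features - probe, axis=1))
```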

196 citations

Journal ArticleDOI
TL;DR: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling, and the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
Abstract: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
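
The lifting machinery the abstract builds on can be illustrated with a single integer predict/update step in one dimension. The fixed linear 5/3-style predictor below is only a minimal sketch of the mechanics; the paper derives optimal n-dimensional predictors with nonlinear enhancement, which this does not reproduce.

```python
# Minimal sketch of one integer lifting step (predict + update) in 1-D.
import numpy as np

def lifting_forward(x):
    """One level of an integer 5/3-style lifting transform."""
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    # Predict: each odd sample from the average of its even neighbours
    # (periodic boundary via np.roll).
    detail = odd - ((even + np.roll(even, -1)) // 2)
    # Update: adjust evens so the coarse signal preserves the local mean.
    coarse = even + ((detail + np.roll(detail, 1)) // 4)
    return coarse, detail

def lifting_inverse(coarse, detail):
    even = coarse - ((detail + np.roll(detail, 1)) // 4)
    odd = detail + ((even + np.roll(even, -1)) // 2)
    x = np.empty(even.size + odd.size, dtype=int)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([10, 12, 13, 11, 9, 8, 10, 14])
c, d = lifting_forward(x)
assert np.array_equal(lifting_inverse(c, d), x)  # perfect reconstruction
```

Because the inverse replays each step exactly, the transform is lossless whatever predictor is chosen, which is why lifting is attractive for lossless coding: the predictor can be optimized or made nonlinear, as in the paper, without risking invertibility.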

145 citations

Journal ArticleDOI
TL;DR: Mobile apps are considered to be a valuable tool for citizens, health professionals, and decision makers in facing critical challenges imposed by the pandemic, such as reducing the burden on hospitals, providing access to credible information, tracking the symptoms and mental health of individuals, and discovering new predictors.
Abstract: Background: A vast number of mobile apps have been developed during the past few months in an attempt to “flatten the curve” of the increasing number of COVID-19 cases. Objective: This systematic review aims to shed light on studies found in the scientific literature that have used and evaluated mobile apps for the prevention, management, treatment, or follow-up of COVID-19. Methods: We searched the bibliographic databases Global Literature on Coronavirus Disease, PubMed, and Scopus to identify papers focusing on mobile apps for COVID-19 that show evidence of their real-life use and have been developed involving clinical professionals in their design or validation. Results: Mobile apps have been implemented for training, information sharing, risk assessment, self-management of symptoms, contact tracing, home monitoring, and decision making, rapidly offering effective and usable tools for managing the COVID-19 pandemic. Conclusions: Mobile apps are considered to be a valuable tool for citizens, health professionals, and decision makers in facing critical challenges imposed by the pandemic, such as reducing the burden on hospitals, providing access to credible information, tracking the symptoms and mental health of individuals, and discovering new predictors.

132 citations

Journal ArticleDOI
TL;DR: A novel frequency-domain technique for image blocking artifact detection and reduction is presented and experimental results illustrating the performance of the proposed method are presented and evaluated.
Abstract: A novel frequency-domain technique for image blocking artifact detection and reduction is presented. The algorithm first detects the regions of the image which present visible blocking artifacts. This detection is performed in the frequency domain and uses the estimated relative quantization error calculated when the discrete cosine transform (DCT) coefficients are modeled by a Laplacian probability function. Then, for each block affected by blocking artifacts, its DC and AC coefficients are recalculated for artifact reduction. To achieve this, a closed-form representation of the optimal correction of the DCT coefficients is produced by minimizing a novel enhanced form of the mean squared difference of slope for every frequency separately. This correction of each DCT coefficient depends on the eight neighboring coefficients in the subband-like representation of the DCT transform and is constrained by the quantization upper and lower bound. Experimental results illustrating the performance of the proposed method are presented and evaluated.
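
A minimal sketch of the detection side, assuming the common Laplacian model for the AC coefficients of the DCT: estimate the Laplacian scale from the data and relate the quantization error to the coefficient energy. The score below is a simplified stand-in for the paper's relative quantization error estimate, and the quantization step q is an assumed parameter.

```python
# Rough per-image blockiness score under a Laplacian model of AC coefficients.
import numpy as np
from scipy.fft import dctn

def laplacian_scale(coeffs):
    """ML estimate of the Laplacian scale b for zero-mean DCT coefficients."""
    return np.mean(np.abs(coeffs))

def blockiness_score(image, q=16):
    """Quantization error relative to coefficient energy (simplified)."""
    h, w = (s - s % 8 for s in image.shape)
    blocks = image[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    ac = []
    for block in blocks.reshape(-1, 8, 8):
        c = dctn(block, norm="ortho")
        ac.append(c.ravel()[1:])          # drop the DC term
    ac = np.concatenate(ac)
    quantised = np.round(ac / q) * q
    err = np.mean((ac - quantised) ** 2)
    # Variance of a Laplacian with scale b is 2 * b**2.
    return err / (2 * laplacian_scale(ac) ** 2 + 1e-12)

img = np.random.default_rng(1).random((64, 64)) * 255
print(blockiness_score(img))
```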

125 citations

Journal ArticleDOI
TL;DR: An object-based coding scheme is proposed for the coding of a stereoscopic image sequence using motion and disparity information and the use of the depth map information for the generation of intermediate views at the receiver is discussed.
Abstract: An object-based coding scheme is proposed for the coding of a stereoscopic image sequence using motion and disparity information. A hierarchical block-based motion estimation approach is used for initialization, while disparity estimation is performed using a pixel-based hierarchical dynamic programming algorithm. A split-and-merge segmentation procedure based on three-dimensional (3-D) motion modeling is then used to determine regions with similar motion parameters. The segmentation part of the algorithm is interleaved with the estimation part in order to optimize the coding performance of the procedure. Furthermore, a technique is examined for propagating the segmentation information with time. A 3-D motion-compensated prediction technique is used for both intensity and depth image sequence coding. Error images and depth maps are encoded using discrete cosine transform (DCT) and Huffman methods. Alternatively, an efficient wireframe depth modeling technique may be used to convey depth information to the receiver. Motion and wireframe model parameters are then quantized and transmitted to the decoder along with the segmentation information. As a straightforward application, the use of the depth map information for the generation of intermediate views at the receiver is also discussed. The performance of the proposed compression methods is evaluated experimentally and is compared to other stereoscopic image sequence coding schemes.
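
The block-based motion estimation used for initialization can be sketched as an exhaustive search minimising the sum of absolute differences (SAD) over a small window. The paper uses a hierarchical variant; the flat full search and all parameter choices below are illustrative only.

```python
# Minimal full-search block-matching motion estimation (illustrative).
import numpy as np

def block_motion(prev, curr, block=8, search=4):
    """Return per-block (dy, dx) motion vectors minimising SAD."""
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block]
                        sad = np.abs(target.astype(int) - cand).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors

rng = np.random.default_rng(2)
prev = rng.integers(0, 256, (32, 32), dtype=np.uint8)
curr = np.roll(prev, (1, 2), axis=(0, 1))   # simulate rigid global motion
print(block_motion(prev, curr)[1, 1])       # -> [-1 -2] for interior blocks
```

A hierarchical version would run the same search on a coarse-to-fine image pyramid, using each level's vectors to seed the next, which keeps the window small while capturing large displacements.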

124 citations


Cited by
More filters
Journal ArticleDOI
08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up to date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
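
The mail-filtering example in the fourth category can be made concrete with a tiny naive Bayes classifier that learns a user's keep/reject decisions. The toy messages, labels, and add-one smoothing below are illustrative assumptions, not anything from the article.

```python
# Minimal naive Bayes sketch of the mail-filtering example (illustrative).
import math
from collections import Counter

def train(messages, labels):
    """Count word frequencies per class; priors from label frequencies."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for text, y in zip(messages, labels):
        counts[y].update(text.lower().split())
    return counts, priors

def predict(text, counts, priors):
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for y in (0, 1):
        total = sum(counts[y].values())
        score = math.log(priors[y])
        # Add-one smoothing so unseen words do not zero out a class.
        for w in text.lower().split():
            score += math.log((counts[y][w] + 1) / (total + len(vocab)))
        scores[y] = score
    return max(scores, key=scores.get)

msgs = ["cheap pills buy now", "meeting notes attached",
        "buy cheap now", "project meeting today"]
labels = [1, 0, 1, 0]   # 1 = rejected by the user, 0 = kept
counts, priors = train(msgs, labels)
print(predict("buy pills now", counts, priors))   # -> 1 (filtered)
```

As the user keeps rejecting or accepting new messages, retraining on the growing history updates the rules automatically, which is exactly the burden the passage says hand-written filters cannot carry.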

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions and linear models for regression and classification, along with neural networks, kernel methods, graphical models, and a discussion of combining models in the context of machine learning.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations