Author
Michael G. Strintzis
Other affiliations: Information Technology Institute; University of Pittsburgh
Bio: Michael G. Strintzis is an academic researcher at the Aristotle University of Thessaloniki. His research focuses on motion estimation and image segmentation. He has an h-index of 44 and has co-authored 240 publications receiving 6,319 citations. Previous affiliations of Michael G. Strintzis include the Information Technology Institute and the University of Pittsburgh.
Papers
TL;DR: The transmission of JPEG2000 images over wireless channels is examined using a reorganization of the compressed images into error-resilient, product-coded streams, which are shown to outperform other recently proposed algorithms for the wireless transmission of images.
Abstract: The transmission of JPEG2000 images over wireless channels is examined using a reorganization of the compressed images into error-resilient, product-coded streams. The product code consists of Turbo codes and Reed-Solomon codes, which are optimized using an iterative process. The stream to be transmitted is generated directly from compressed JPEG2000 streams. The resulting scheme is tested on the transmission of compressed JPEG2000 images over wireless channels and is shown to outperform other algorithms recently proposed for the wireless transmission of images.
258 citations
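As an illustration of the product-code idea in this abstract, the sketch below arranges data in a 2-D array and protects rows and columns with simple XOR parity. This is a toy stand-in: the actual scheme uses Turbo and Reed-Solomon component codes, which are far stronger, and its iterative rate optimization is not shown. All names and the erasure-correction step are illustrative, not taken from the paper.

```python
import numpy as np

def product_encode(data: np.ndarray) -> np.ndarray:
    """Toy product code: append an XOR-parity column (row code) and an
    XOR-parity row (column code) to a 2-D byte array. A real scheme
    would use Turbo and Reed-Solomon codes as the component codes."""
    rows, cols = data.shape
    out = np.zeros((rows + 1, cols + 1), dtype=np.uint8)
    out[:rows, :cols] = data
    out[:rows, cols] = np.bitwise_xor.reduce(data, axis=1)       # row parity
    out[rows, :] = np.bitwise_xor.reduce(out[:rows, :], axis=0)  # column parity
    return out

def correct_single_erasure(block: np.ndarray, r: int, c: int) -> np.ndarray:
    """Recover one erased byte from the XOR of the rest of its row."""
    fixed = block.copy()
    fixed[r, c] = np.bitwise_xor.reduce(np.delete(block[r, :], c))
    return fixed

data = np.array([[1, 2], [3, 4]], dtype=np.uint8)
coded = product_encode(data)
damaged = coded.copy()
damaged[0, 0] = 0                      # simulate a known channel erasure
recovered = correct_single_erasure(damaged, 0, 0)
assert (recovered == coded).all()
```

The 2-D structure is what makes product codes attractive for unequal error protection: row and column codes can be dimensioned independently to match the channel.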
01 Oct 1998
TL;DR: A generalised approach to classification problems in n-dimensional spaces is presented using, among others, neural network (NN), radial basis function network (RBFN), and non-linear principal component analysis (NLPCA) techniques.
Abstract: The most widely used signal in clinical practice is the ECG. The ECG conveys information regarding the electrical function of the heart by altering the shape of its constituent waves, namely the P, QRS, and T waves. Thus, the essential tasks of ECG processing are the reliable recognition of these waves and the accurate measurement of clinically important parameters derived from the temporal distribution of the ECG constituent waves. In this paper, we review some current trends in ECG pattern recognition. In particular, we review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural network (NN) based techniques for ECG pattern recognition and classification. The problems addressed are QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to classification problems in n-dimensional spaces is presented using, among others, NN, radial basis function network (RBFN), and non-linear principal component analysis (NLPCA) techniques. The sensitivity and specificity of these algorithms are also reported, using training and testing data sets drawn from the MIT-BIH and European ST-T databases.
240 citations
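The PCA-plus-classifier pipeline surveyed in this abstract can be sketched on synthetic data. Everything below is an illustrative stand-in: the synthetic "beats", the two classes, and the nearest-centroid rule are assumptions for the sketch, not the authors' method, and real work would use MIT-BIH or European ST-T records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "beats": two classes of 50-sample waveforms standing in for
# normal and ectopic QRS complexes.
t = np.linspace(0, 1, 50)
normal = np.sin(2 * np.pi * t)
ectopic = np.sin(2 * np.pi * t) + 0.8 * np.sin(6 * np.pi * t)
X = np.vstack([normal + 0.05 * rng.standard_normal((30, 50)),
               ectopic + 0.05 * rng.standard_normal((30, 50))])
y = np.array([0] * 30 + [1] * 30)

# Linear PCA via SVD: project each beat onto the top-2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid classification in the reduced space.
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The same projection step generalizes to the non-linear PCA (NLPCA) and RBFN classifiers the paper reviews; only the mapping and the decision rule change.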
29 May 2005
TL;DR: This paper uses M-OntoMat-Annotizer in order to construct ontologies that include prototypical instances of high-level domain concepts together with a formal specification of corresponding visual descriptors, allowing for new kinds of multimedia content analysis and reasoning.
Abstract: Annotations of multimedia documents typically have been pursued in two different directions. Either previous approaches have focused on low level descriptors, such as dominant color, or they have focused on the content dimension and corresponding annotations, such as person or vehicle. In this paper, we present a software environment to bridge between the two directions. M-OntoMat-Annotizer allows for linking low level MPEG-7 visual descriptions to conventional Semantic Web ontologies and annotations. We use M-OntoMat-Annotizer in order to construct ontologies that include prototypical instances of high-level domain concepts together with a formal specification of corresponding visual descriptors. Thus, we formalize the interrelationship of high- and low-level multimedia concept descriptions allowing for new kinds of multimedia content analysis and reasoning.
225 citations
TL;DR: The proposed face recognition technique is based on the implementation of the principal component analysis algorithm and the extraction of depth and colour eigenfaces. Experimental results show significant gains attained with the addition of depth information.
196 citations
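The eigenface machinery behind this technique can be sketched as PCA on flattened face vectors. The data below are random stand-ins, and concatenating colour and depth features is a simplification for the sketch; the paper extracts separate depth and colour eigenfaces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in gallery: 20 "faces", each a flattened 8x8 intensity image plus
# a flattened 8x8 depth map, concatenated into one 128-d feature vector.
color = rng.random((20, 64))
depth = rng.random((20, 64))
faces = np.hstack([color, depth])          # shape (20, 128)

# Eigenfaces = principal components of the centred face matrix.
mean_face = faces.mean(axis=0)
A = faces - mean_face
_, _, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:5]                        # keep the top-5 components

# Recognition: project a noisy probe and find the nearest gallery projection.
gallery = A @ eigenfaces.T
probe = faces[7] + 0.001 * rng.standard_normal(128)
p = (probe - mean_face) @ eigenfaces.T
match = np.argmin(((gallery - p) ** 2).sum(axis=1))
```

Here `match` recovers the gallery index of the probe's source face; the gain reported in the paper comes from the depth channel contributing components that colour alone cannot provide.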
24 Nov 2003
TL;DR: The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image in the collection, and query-by-example approaches, which assume that the user queries for images similar to one already at his disposal.
Abstract: In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediate-level descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image of the collection, and query-by-example approaches, which assume that the user queries for images similar to one that already is at his disposal.
195 citations
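The mapping from low-level region features to intermediate-level descriptors described in this abstract can be illustrated with a toy vocabulary. The thresholds, descriptor names, and the "sky" concept below are invented for the sketch; they are not the paper's object ontology.

```python
# Toy object-ontology sketch: map raw region features to qualitative
# intermediate-level descriptors, then filter regions against a
# user-defined high-level concept. All values are illustrative.

def describe_region(mean_hue_deg: float, area_fraction: float) -> dict:
    """Map low-level features to qualitative descriptors."""
    if mean_hue_deg < 60 or mean_hue_deg >= 300:
        color = "reddish"
    elif mean_hue_deg < 180:
        color = "greenish"
    else:
        color = "bluish"
    size = "large" if area_fraction > 0.25 else "small"
    return {"color": color, "size": size}

def matches(region: dict, concept: dict) -> bool:
    """A concept (e.g. 'sky' = bluish + large) matches a region if every
    descriptor it specifies agrees; non-matching regions are rejected
    before the low-level relevance-feedback stage runs."""
    return all(region.get(k) == v for k, v in concept.items())

sky = {"color": "bluish", "size": "large"}
region = describe_region(mean_hue_deg=210.0, area_fraction=0.4)
assert matches(region, sky)
```

This coarse qualitative filter is cheap, which is why the paper applies it first and reserves the low-level features for the relevance-feedback refinement.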
Cited by
01 Jan 2006
TL;DR: A textbook covering probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, latent variables, sequential data, and the combination of models.
Abstract (contents): Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.
10,141 citations
01 Jul 1999
TL;DR: An overview of the field of information-hiding techniques is given: what is known, what works, what does not, and which topics are interesting for research.
Abstract: Information-hiding techniques have recently become important in a number of application areas. Digital audio, video, and pictures are increasingly furnished with distinguishing but imperceptible marks, which may contain a hidden copyright notice or serial number or even help to prevent unauthorized copying directly. Military communications systems make increasing use of traffic security techniques which, rather than merely concealing the content of a message using encryption, seek to conceal its sender, its receiver, or its very existence. Similar techniques are used in some mobile phone systems and schemes proposed for digital elections. Criminals try to use whatever traffic security properties are provided intentionally or otherwise in the available communications systems, and police forces try to restrict their use. However, many of the techniques proposed in this young and rapidly evolving field can trace their history back to antiquity, and many of them are surprisingly easy to circumvent. In this article, we try to give an overview of the field, of what we know, what works, what does not, and what are the interesting topics for research.
2,561 citations
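A minimal example of the imperceptible marking this survey discusses is least-significant-bit (LSB) embedding. This sketch is illustrative only, and its fragility, being trivially removable by re-quantization, echoes the survey's point that many such techniques are surprisingly easy to circumvent.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one bit per pixel in the least-significant bit of an 8-bit
    cover image: imperceptible (pixel values change by at most 1), but
    trivially detectable and removable."""
    stego = cover.copy()
    flat = stego.ravel()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return stego

def extract_lsb(stego: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n embedded bits."""
    return stego.ravel()[:n] & 1

rng = np.random.default_rng(2)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, message)
assert (extract_lsb(stego, 8) == message).all()
```

Robust watermarking, by contrast, spreads the mark across perceptually significant components precisely so that it survives such simple attacks.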
2,345 citations
01 Apr 1997
TL;DR: A comprehensive introduction to applied cryptography, written with the engineer or computer scientist in mind, emphasizing the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts of cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand and evaluate system descriptions and designs based on the cryptographic components described.
2,188 citations
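One of the topics listed, message authentication codes, can be shown with Python's standard library. The key and message below are placeholders; in practice the key would come from a key-derivation function or a secure random source.

```python
import hmac
import hashlib

# HMAC-SHA-256: a receiver sharing the key can verify both the integrity
# and the authenticity of a message.
key = b"shared-secret-key"            # placeholder; use a proper random key
msg = b"transfer 100 EUR to Alice"

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Verification recomputes the tag; compare_digest avoids timing leaks.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
tampered = hmac.compare_digest(
    tag, hmac.new(key, b"transfer 999 EUR to Mallory", hashlib.sha256).hexdigest())
assert ok and not tampered
```

Note that a MAC provides authenticity but not confidentiality; combining it correctly with an encryption mode is exactly the kind of system-design question the paper addresses.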
TL;DR: The authors emphasize an intuitive understanding of CS by describing CS reconstruction as a process of interference cancellation, and also emphasize understanding the driving factors in applications.
Abstract: This article reviews the requirements for successful compressed sensing (CS), describes their natural fit to MRI, and gives examples of four interesting applications of CS in MRI. The authors emphasize an intuitive understanding of CS by describing the CS reconstruction as a process of interference cancellation. There is also an emphasis on understanding the driving factors in applications, including limitations imposed by MRI hardware, by the characteristics of different types of images, and by clinical concerns.
2,134 citations
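The interference-cancellation view of CS reconstruction described in this abstract can be sketched with iterative soft thresholding (ISTA) on a synthetic sparse signal. The random Gaussian measurements below stand in for undersampled MRI k-space data, and all problem sizes and parameters are illustrative choices, not from the article.

```python
import numpy as np

rng = np.random.default_rng(3)

# A 100-sample signal with 4 nonzeros, observed through 40 random
# measurements: far fewer samples than the Nyquist rate would require.
n, m, k = 100, 40, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 2.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: gradient step on the data-fit term, then soft thresholding.
# Each thresholding pass suppresses the incoherent "interference" that
# undersampling spreads across the signal.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In MRI the dense matrix `A` is replaced by an undersampled Fourier operator and the sparsifying transform by wavelets or finite differences, but the reconstruct-threshold loop is the same idea.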