scispace - formally typeset
Author

Ioannis Katramados

Bio: Ioannis Katramados is an academic researcher from Cranfield University. The author has contributed to research in topics: Feature (computer vision) & Deep learning. The author has an h-index of 7 and has co-authored 19 publications receiving 249 citations. Previous affiliations of Ioannis Katramados include Stenden University of Applied Sciences and the University of Bedfordshire.

Papers
Book ChapterDOI
Abstract: The scarcity of labeled data often limits the application of supervised deep learning techniques for medical image segmentation. This has motivated the development of semi-supervised techniques that learn from a mixture of labeled and unlabeled images. In this paper, we propose a novel semi-supervised method that, in addition to supervised learning on labeled training images, learns to predict segmentations consistent under a given class of transformations on both labeled and unlabeled images. More specifically, in this work we explore learning equivariance to elastic deformations. We implement this through: 1) a Siamese architecture with two identical branches, each of which receives a differently transformed image, and 2) a composite loss function with a supervised segmentation loss term and an unsupervised term that encourages segmentation consistency between the predictions of the two branches. We evaluate the method on a public dataset of chest radiographs with segmentations of anatomical structures using 5-fold cross-validation. The proposed method reaches significantly higher segmentation accuracy compared to supervised learning. This is due to learning transformation consistency on both labeled and unlabeled images, with the latter contributing the most. We achieve performance comparable to state-of-the-art chest X-ray segmentation methods while using substantially fewer labeled images.

91 citations
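The composite objective described above can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact formulation: the binary cross-entropy choice for the supervised term, the mean-squared-difference consistency term, and the weight `lam` are ours.

```python
import numpy as np

def composite_loss(pred_a, pred_b, target=None, lam=1.0):
    """Sketch of a composite semi-supervised loss: a supervised term on
    labeled images plus an unsupervised consistency term between the two
    Siamese branch predictions. pred_a and pred_b are per-pixel
    foreground probabilities from the two branches."""
    eps = 1e-7
    # Unsupervised term: penalise disagreement between the two branches.
    consistency = np.mean((pred_a - pred_b) ** 2)
    if target is None:
        # Unlabeled image: only the consistency term contributes.
        return lam * consistency
    # Supervised term: binary cross-entropy against the reference mask.
    p = np.clip(pred_a, eps, 1 - eps)
    bce = -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
    return bce + lam * consistency
```

With this shape, unlabeled images still produce a gradient signal through the consistency term, which is what lets the method exploit the unlabeled portion of the training set.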

Book ChapterDOI
13 Oct 2019
TL;DR: A novel semi-supervised method that, in addition to supervised learning on labeled training images, learns to predict segmentations consistent under a given class of transformations on both labeled and unlabeled images.

75 citations

Journal ArticleDOI
TL;DR: The main novelty of this instrument lies in the sub-pixel accuracy of the tracking algorithm, which enables robust measurement of the deterioration of the mobility of Artemia salina even at very low concentrations of toxic metals.

59 citations

Book ChapterDOI
14 Oct 2009
TL;DR: A real-time approach for traversable surface detection using a low-cost monocular camera mounted on an autonomous vehicle and the effect of colourspace fusion on the system's precision is analysed.
Abstract: We present a real-time approach for traversable surface detection using a low-cost monocular camera mounted on an autonomous vehicle. The proposed methodology extracts colour and texture information from various channels of the HSL, YCbCr and LAB colourspaces by temporal analysis in order to create a "traversability map". On this map, lighting and water artifacts are eliminated, including shadows, reflections and water prints. Additionally, camera vibration is compensated by temporal filtering, leading to robust path edge detection in blurry images. The performance of this approach is extensively evaluated over varying terrain and environmental conditions, and the effect of colourspace fusion on the system's precision is analysed. The results show a mean accuracy of 97% over this comprehensive test set.

38 citations
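The colourspace-fusion idea above can be sketched in a minimal form: model a reference patch of road in each channel, classify pixels by distance to that model, and fuse the per-channel decisions by voting. The threshold `k`, the mean/std channel model, and the majority-vote fusion rule are our assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def traversability_map(channels, ref_region, k=2.0):
    """Sketch of traversable-surface detection by colourspace fusion.
    `channels` is a list of HxW float arrays, one per colour channel
    (e.g. drawn from HSL, YCbCr and LAB); `ref_region` is a boolean HxW
    mask over pixels assumed traversable, such as the area directly in
    front of the vehicle."""
    votes = np.zeros(channels[0].shape)
    for ch in channels:
        # Model the reference surface in this channel.
        mu = ch[ref_region].mean()
        sigma = ch[ref_region].std() + 1e-6
        # Per-channel decision: pixel looks like the reference surface.
        votes += (np.abs(ch - mu) <= k * sigma)
    # Fuse: a pixel is traversable if most channels agree.
    return votes >= (len(channels) / 2.0)
```

Voting across channels makes the map robust to a single channel being corrupted, for instance by shadows in the luma channel.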


Cited by
Journal ArticleDOI
01 Apr 2014
TL;DR: This paper presents a generic breakdown of the problem of road or lane perception into its functional building blocks and elaborates on the wide range of proposed methods within this scheme.
Abstract: The problem of road or lane perception is a crucial enabler for advanced driver assistance systems. As such, it has been an active field of research for the past two decades, with considerable progress made in the past few years. The problem has been confronted under various scenarios, with different task definitions, leading to the use of diverse sensing modalities and approaches. In this paper we survey the approaches and the algorithmic techniques devised for the various modalities over the last 5 years. We present a generic breakdown of the problem into its functional building blocks and elaborate on the wide range of proposed methods within this scheme. For each functional block, we describe the possible implementations suggested and analyze their underlying assumptions. While impressive advancements have been demonstrated in limited scenarios, inspection of the needs of next-generation systems reveals significant gaps. We identify these gaps and suggest research directions that may bridge them.

735 citations

01 Dec 2004
TL;DR: In this article, a novel technique for detecting salient regions in an image is described, which is a generalization to affine invariance of the method introduced by Kadir and Brady.
Abstract: In this paper we describe a novel technique for detecting salient regions in an image. The detector is a generalization to affine invariance of the method introduced by Kadir and Brady [10]. The detector deems a region salient if it exhibits unpredictability in both its attributes and its spatial scale.

501 citations
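The unpredictability criterion in the abstract above follows the Kadir-Brady idea of scoring a region by the Shannon entropy of its local grey-level distribution. A rough sketch of that entropy measure is below; the circular window, the bin count, and treating images as floats in [0, 1] are our choices for illustration, and the full detector (scale selection, affine adaptation) is not reproduced.

```python
import numpy as np

def local_entropy(img, cx, cy, radius, bins=16):
    """Sketch of an entropy-based saliency measure: a region is a
    saliency candidate when the grey-level histogram inside a circular
    window centred at (cx, cy) is unpredictable (high Shannon entropy).
    img is an HxW float array with values in [0, 1]."""
    ys, xs = np.ogrid[:img.shape[0], :img.shape[1]]
    # Boolean mask selecting pixels inside the circular window.
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    patch = img[inside]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    # Shannon entropy of the local grey-level distribution, in bits.
    return float(-(p * np.log2(p)).sum())
```

A flat patch scores zero, while a textured or noisy patch scores close to the maximum of log2(bins) bits; the detector then looks for windows (and scales) where this score peaks.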

Journal ArticleDOI
TL;DR: This article provides a detailed review of the solutions above, summarizing both the technical novelties and empirical results, and compares the benefits and requirements of the surveyed methodologies and provides recommended solutions.

487 citations

Journal ArticleDOI
TL;DR: The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons.
Abstract: This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.

264 citations

Journal ArticleDOI
TL;DR: A taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality is presented, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience.
Abstract: Augmented Reality is a technique that enables users to interact with their physical environment through the overlay of digital information. While it has been researched for decades, Augmented Reality has more recently moved out of the research labs and into the field. While most of the applications are used sporadically and for one particular task only, current and future scenarios will provide a continuous and multi-purpose user experience. Therefore, in this paper, we present the concept of Pervasive Augmented Reality, aiming to provide such an experience by sensing the user’s current context and adapting the AR system based on the changing requirements and constraints. We present a taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience. We further summarize existing approaches that contribute towards Pervasive Augmented Reality. Based on our taxonomy and survey, we identify challenges for future research directions in Pervasive Augmented Reality.

236 citations