Author

T. Goudas

Other affiliations: University of Piraeus
Bio: T. Goudas is an academic researcher from the University of Central Greece. The author has contributed to research in topics including image segmentation and video tracking. The author has an h-index of 4, having co-authored 7 publications that have received 79 citations. Previous affiliations of T. Goudas include the University of Piraeus.

Papers
Journal ArticleDOI
TL;DR: This paper presents Ratsnake, a publicly available generic image annotation tool providing annotation efficiency, semantic awareness, versatility, and extensibility, features that can be exploited to transform it into an effective CAD system.
Abstract: Image segmentation and annotation are key components of image-based medical computer-aided diagnosis (CAD) systems. In this paper we present Ratsnake, a publicly available generic image annotation tool providing annotation efficiency, semantic awareness, versatility, and extensibility, features that can be exploited to transform it into an effective CAD system. In order to demonstrate this unique capability, we present its novel application for the evaluation and quantification of salient objects and structures of interest in kidney biopsy images. Accurate annotation identifying and quantifying such structures in microscopy images can provide an estimation of pathogenesis in obstructive nephropathy, a rather common disease with severe implications in children and infants. However, a tool for detecting and quantifying the disease is not yet available. A machine learning-based approach, which utilizes prior domain knowledge and textural image features, is considered for the generation of an image force field, customizing the presented tool for automatic evaluation of kidney biopsy images. The experimental evaluation of the proposed application of Ratsnake demonstrates its efficiency and effectiveness and promises wide applicability across a variety of medical imaging domains.
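The force-field idea described above lends itself to a short illustration: textural features are computed per pixel, a classifier trained on annotated pixels predicts a structure-probability map, and the gradient of that map acts as an external force guiding the annotation contour. The sketch below is a minimal, hypothetical Python version of that pipeline; the feature choices, classifier, and parameters are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch: derive a pixel-wise "force field" from textural features
# and a supervised classifier, loosely following the idea in the abstract above.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def texture_features(gray):
    """Stack simple per-pixel textural descriptors (intensity, smoothed intensity, LBP)."""
    lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
    smooth = gaussian_filter(gray, sigma=2.0)
    return np.stack([gray, smooth, lbp], axis=-1)          # H x W x 3

def force_field(gray, labeled_mask):
    """Train on annotated pixels, then take the gradient of the predicted
    structure-probability map as an external force for contour refinement."""
    feats = texture_features(gray)
    X = feats.reshape(-1, feats.shape[-1])
    y = labeled_mask.reshape(-1)                            # 1 = structure of interest
    clf = RandomForestClassifier(n_estimators=50).fit(X, y)
    prob = clf.predict_proba(X)[:, 1].reshape(gray.shape)
    fy, fx = np.gradient(gaussian_filter(prob, sigma=1.0))
    return fx, fy                                           # force pulling the contour toward structures
```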

51 citations

Proceedings ArticleDOI
29 May 2013
TL;DR: The constructed camera model is utilized to apply simple geometric reasoning that corrects gaps and errors in the human figure segmentation, and enables the inference of possible real-world positions of a segmented cluster of pixels in the video frame.
Abstract: In this paper, we concentrate on refining the results of segmenting human presence from indoor videos acquired by a fisheye camera, using a 3D mathematical model of the camera. The model has been calibrated according to the specific indoor environment that is being monitored. Human segmentation is implemented using a standard established technique. The fisheye camera used for video acquisition is modeled using a spherical element, while the parameters of the camera model are determined only once, using the correspondence of a number of user-defined landmarks, both in real-world coordinates and on the acquired video frame. Subsequently, each pixel of the video frame is inversely mapped to its direction of view in the real world, and the relevant data are stored in look-up tables for very fast utilization in real-time video processing. The proposed fisheye camera model enables the inference of possible real-world positions of a segmented cluster of pixels in the video frame. In this work, we utilize the constructed camera model to apply simple geometric reasoning that corrects gaps and errors in the human figure segmentation. Initial results are also presented for a small number of video sequences, demonstrating the efficiency of the proposed method.
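The per-pixel look-up table mentioned above can be illustrated with a brief sketch: for each image coordinate, a unit viewing direction is computed once from the camera model and stored, so that per-frame processing reduces to an array lookup. The code below is an assumption-laden simplification in Python, using an equidistant fisheye projection with placeholder parameters rather than the paper's calibrated spherical-element model.

```python
# Illustrative sketch: precompute a look-up table mapping each pixel to a unit
# viewing direction, assuming a simple equidistant fisheye projection.
# cx, cy, f are placeholder intrinsics, not calibrated values.
import numpy as np

def direction_lut(width, height, cx, cy, f):
    """Return an (H, W, 3) array of unit viewing directions in camera coordinates."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)                   # radial distance from the principal point
    theta = r / f                          # equidistant model: angle from the optical axis
    phi = np.arctan2(dy, dx)
    d = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=-1)
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

# Computed once per camera setup and reused for every frame.
lut = direction_lut(1280, 960, cx=640.0, cy=480.0, f=300.0)
```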

13 citations

Journal ArticleDOI
TL;DR: An application exploiting web services and applying ontological modeling is presented to enable the intelligent creation of image-mining workflows, together with a case study on the creation of a sample workflow for the analysis of kidney biopsy microscopy images.
Abstract: The analysis and characterization of biomedical image data is a complex procedure involving several processing phases, such as data acquisition, preprocessing, segmentation, feature extraction, and classification. The proper combination and parameterization of the utilized methods rely heavily on the given image dataset and experiment type, and may thus require advanced image processing and classification knowledge and skills on the part of the biomedical expert. In this study, an application exploiting web services and applying ontological modeling is presented to enable the intelligent creation of image-mining workflows. The described tool can be directly integrated into RapidMiner, Taverna, or similar workflow management platforms. A case study dealing with the creation of a sample workflow for the analysis of kidney biopsy microscopy images is presented to demonstrate the functionality of the proposed framework.
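The core idea of ontology-guided workflow creation can be sketched with a small example: operator metadata describing input and output types is consulted so that only type-compatible steps can be chained. The snippet below is purely illustrative, with a hand-written dictionary standing in for the ontology; all operator names and types are hypothetical.

```python
# Minimal sketch of type-checked workflow composition. The OPERATORS dictionary
# is a stand-in for ontology-derived metadata; every name and type is hypothetical.
OPERATORS = {
    "denoise":            {"input": "GrayscaleImage", "output": "GrayscaleImage"},
    "segment_structures": {"input": "GrayscaleImage", "output": "LabelMask"},
    "extract_features":   {"input": "LabelMask",      "output": "FeatureVector"},
    "classify":           {"input": "FeatureVector",  "output": "DiagnosisLabel"},
}

def validate_workflow(steps):
    """Raise if any step's declared input type does not match the previous step's output."""
    for prev, nxt in zip(steps, steps[1:]):
        produced, expected = OPERATORS[prev]["output"], OPERATORS[nxt]["input"]
        if produced != expected:
            raise ValueError(f"{prev} produces {produced}, but {nxt} expects {expected}")
    return True

validate_workflow(["denoise", "segment_structures", "extract_features", "classify"])
```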

12 citations

Proceedings ArticleDOI
11 Nov 2010
TL;DR: An open image-mining framework that provides access to tools and methods for the characterization of medical images is presented; initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.
Abstract: This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. RapidMiner, an open-source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied for the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.
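Invoking such a Web Service-exposed operator from a client can be pictured with a short, hedged sketch: an image is posted to a remote feature-extraction endpoint and a feature vector comes back for downstream mining. The endpoint URL and response format below are assumptions for illustration, not the framework's actual interface.

```python
# Hypothetical client for an image-processing operator exposed as a Web Service.
# The endpoint URL and JSON response shape are placeholders.
import requests

def extract_features(image_path, endpoint="http://example.org/services/texture-features"):
    """POST an image to a remote feature-extraction operator and return its features."""
    with open(image_path, "rb") as fh:
        response = requests.post(endpoint, files={"image": fh}, timeout=60)
    response.raise_for_status()
    return response.json()   # e.g. {"contrast": 0.42, "entropy": 5.1, ...}

# The returned feature vectors can then be fed into a data-mining tool such as
# RapidMiner for classifier training and evaluation.
```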

9 citations

Proceedings ArticleDOI
01 Nov 2013
TL;DR: This work presents a system that uses computer vision techniques for human silhouette segmentation from video in indoor environments, together with a parametric 3D human model, in order to recognize the posture of the monitored person.
Abstract: In this work, we present a system that uses computer vision techniques for human silhouette segmentation from video in indoor environments and a parametric 3D human model, in order to recognize the posture of the monitored person. The video data are acquired indoors from a fixed fish-eye camera in the living environment. The implemented 3D human model is combined with a fish-eye camera model, allowing the calculation of the real human position in 3D space and, consequently, recognition of the posture of the monitored person. The paper briefly discusses the details of the human segmentation, the camera modeling, and the posture recognition methodology. Initial results are also presented for a small number of video sequences.
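The step of recovering a real-world position from a segmented silhouette can be illustrated by intersecting one viewing ray with the floor plane. The sketch below makes several simplifying assumptions that are not stated in the abstract: the lowest silhouette pixel corresponds to the feet, the look-up table (as in the fisheye model sketched earlier) already expresses directions in world coordinates with z pointing up, and the camera height is a placeholder value.

```python
# Assumption-laden sketch: estimate a person's floor position by intersecting
# the viewing ray of the lowest silhouette pixel with the floor plane z = 0.
import numpy as np

def floor_position(foot_pixel, lut, camera_height=2.5):
    """Intersect the viewing ray of the foot pixel with the floor plane,
    assuming the camera sits at (0, 0, camera_height)."""
    u, v = foot_pixel
    d = lut[v, u]                     # unit direction in world coordinates (assumed)
    if d[2] >= 0:                     # ray does not point toward the floor
        return None
    t = camera_height / -d[2]         # scale factor at which the ray reaches z = 0
    origin = np.array([0.0, 0.0, camera_height])
    return origin + t * d             # (x, y, 0): estimated position on the floor
```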

3 citations


Cited by
01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis, and at addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain a lot of training data while many classes contain only a small amount. Therefore, how to use frequent classes to help learn rare classes, for which it is harder to collect training data, is an open question. Learning with shared information is an emerging topic in machine learning, computer vision, and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters, and training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, attributes, transformations, regularization parameters, and training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.

1,758 citations

Journal ArticleDOI
TL;DR: An update to the Taverna tool suite is provided, highlighting new features and developments in the workbench and the Taverna Server.
Abstract: The Taverna workflow tool suite (http://www.taverna.org.uk) is designed to combine distributed Web Services and/or local tools into complex analysis pipelines. These pipelines can be executed on local desktop machines or through larger infrastructure (such as supercomputers, Grids or cloud environments), using the Taverna Server. In bioinformatics, Taverna workflows are typically used in the areas of high-throughput omics analyses (for example, proteomics or transcriptomics), or for evidence gathering methods involving text mining or data mining. Through Taverna, scientists have access to several thousand different tools and resources that are freely available from a large range of life science institutions. Once constructed, the workflows are reusable, executable bioinformatics protocols that can be shared, reused and repurposed. A repository of public workflows is available at http://www.myexperiment.org. This article provides an update to the Taverna tool suite, highlighting new features and developments in the workbench and the Taverna Server.

724 citations

Book ChapterDOI
01 Jan 2005
TL;DR: The goal is to help developers find the most suitable language for their representation needs in the Semantic Web, which requires languages capable of representing its semantic information.
Abstract: being used in many other applications to explicitly declare the knowledge embedded in them. However, not only are ontologies useful for applications in which knowledge plays a key role, but they can also trigger a major change in current Web contents. This change is leading to the third generation of the Web, known as the Semantic Web, which has been defined as "the conceptual structuring of the Web in an explicit machine-readable way."1 This definition does not differ much from the one used to define an ontology: "An ontology is an explicit, machine-readable specification of a shared conceptualization."2 In fact, new ontology-based applications and knowledge architectures are being developed for this new Web. A common claim of all these approaches is the need for languages to represent the semantic information that this Web requires, solving heterogeneous data exchange in this heterogeneous environment. Here, we do not decide which language is best for the Semantic Web. Rather, our goal is to help developers find the most suitable language for their representation needs.

212 citations

Journal ArticleDOI
TL;DR: An in-depth critical analysis is presented that aims to inspire and align the agendas of the two scientific groups in the field of small bowel diseases.
Abstract: Video capsule endoscopy (VCE) has revolutionized the diagnostic work-up in the field of small bowel diseases. Furthermore, VCE has the potential to become the leading screening technique for the entire gastrointestinal tract. Computational methods that can be implemented in software can enhance the diagnostic yield of VCE both in terms of efficiency and diagnostic accuracy. Since the appearance of the first capsule endoscope in clinical practice in 2001, information technology (IT) research groups have proposed a variety of such methods, including algorithms for detecting haemorrhage and lesions, reducing the reviewing time, localizing the capsule or lesion, assessing intestinal motility, enhancing the video quality and managing the data. Even though research is prolific (as measured by publication activity), the progress made during the past 5 years can only be considered marginal with respect to clinically significant outcomes. One thing is clear: parallel pathways of medical and IT scientists exist, each publishing in their own area, but where do these research pathways meet? Could the proposed IT plans have any clinical effect, and do clinicians really understand the limitations of VCE software? In this Review, we present an in-depth critical analysis that aims to inspire and align the agendas of the two scientific groups.

187 citations

Journal ArticleDOI
TL;DR: A simple yet effective approach allowing automatic detection of all types of abnormalities in capsule endoscopy is presented; it outperforms previous state-of-the-art approaches, is robust in the presence of luminal contents, and is capable of detecting even very small lesions.

102 citations