Author

Dimitris Maroulis

Other affiliations: Athens State University
Bio: Dimitris Maroulis is an academic researcher from the National and Kapodistrian University of Athens. The author has contributed to research in topics: image segmentation & active contour models. The author has an h-index of 25 and has co-authored 131 publications receiving 2,173 citations. Previous affiliations of Dimitris Maroulis include Athens State University.


Papers
Journal ArticleDOI
01 Sep 2003
TL;DR: An approach to the detection of tumors in colonoscopic video, based on a new color feature extraction scheme built on the wavelet decomposition to represent the different regions in the frame sequence, reaching 97% specificity and 90% sensitivity.
Abstract: We present an approach to the detection of tumors in colonoscopic video. It is based on a new color feature extraction scheme, built on the wavelet decomposition, to represent the different regions in the frame sequence. The features, named color wavelet covariance (CWC), are based on the covariances of second-order textural measures, and an optimum subset of them is proposed after the application of a selection algorithm. The proposed approach is supported by a linear discriminant analysis (LDA) procedure for the characterization of the image regions along the video frames. The whole methodology has been applied to real data sets of color colonoscopic videos. The performance in the detection of abnormal colonic regions corresponding to adenomatous polyps is high, reaching 97% specificity and 90% sensitivity.
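The quoted 97% specificity and 90% sensitivity are standard confusion-matrix rates. As a rough illustration (not the authors' code, and with made-up toy labels), they can be computed from ground-truth and predicted region labels like this:

```python
# Hypothetical illustration of the reported performance metrics.
# Labels: 1 = abnormal region (adenomatous polyp), 0 = normal tissue.

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary label sequences."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn)  # true-positive rate: abnormal regions found
    spec = tn / (tn + fp)  # true-negative rate: normal regions kept
    return sens, spec

# Toy data (not from the paper): 5 abnormal and 5 normal regions.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
print(sensitivity_specificity(y_true, y_pred))  # → (0.8, 0.8)
```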

480 citations

Book ChapterDOI
25 Jun 2008
TL;DR: The proposed Fuzzy Local Binary Pattern approach was experimentally evaluated for supervised classification of nodular and normal samples from thyroid ultrasound images and the results validate its effectiveness over LBP and other common feature extraction methods.
Abstract: B-scan ultrasound provides a non-invasive low-cost imaging solution to primary care diagnostics. The inherent speckle noise in the images produced by this technique introduces uncertainty in the representation of their textural characteristics. To cope with the uncertainty, we propose a novel fuzzy feature extraction method to encode local texture. The proposed method extends the Local Binary Pattern (LBP) approach by incorporating fuzzy logic in the representation of local patterns of texture in ultrasound images. Fuzzification allows a Fuzzy Local Binary Pattern (FLBP) to contribute to more than a single bin in the distribution of the LBP values used as a feature vector. The proposed FLBP approach was experimentally evaluated for supervised classification of nodular and normal samples from thyroid ultrasound images. The results validate its effectiveness over LBP and other common feature extraction methods.
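For orientation, the crisp Local Binary Pattern that FLBP extends can be sketched as follows: each pixel's 8 neighbours are thresholded against the centre, the bits form a code in [0, 255], and the image-wide histogram of codes is the texture feature vector. This is an illustrative minimal version, not the paper's code; the fuzzy variant spreads each pixel's contribution over several histogram bins instead of exactly one.

```python
# Minimal sketch of the crisp 8-neighbour LBP (the baseline FLBP extends).

def lbp_code(img, r, c):
    """LBP code for interior pixel (r, c) of a 2-D list `img`."""
    centre = img[r][c]
    # Clockwise neighbour offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    n = sum(hist)
    return [h / n for h in hist]

# Toy 3x3 patch: only the centre pixel is interior.
patch = [[5, 9, 1],
         [7, 6, 3],
         [2, 8, 4]]
print(lbp_code(patch, 1, 1))  # → 162 (bits set where neighbour >= 6)
```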

151 citations

Proceedings ArticleDOI
23 Jun 2005
TL;DR: The results advocate the feasibility of a computer-based system for polyp detection in video gastroscopy that exploits the textural characteristics of the gastric mucosa in conjunction with its color appearance.
Abstract: In this paper, we extend the application of four texture feature extraction methods proposed for the detection of colorectal lesions, into the discrimination of gastric polyps in endoscopic video. Support Vector Machines have been utilized for the texture classification task. The polyp discrimination performance of the surveyed schemes is compared by means of Receiver Operating Characteristics (ROC). The results advocate the feasibility of a computer-based system for polyp detection in video gastroscopy that exploits the textural characteristics of the gastric mucosa in conjunction with its color appearance.
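An ROC comparison of the kind described above reduces each scheme to a single number, the area under the curve (AUC). A hedged sketch of that step, assuming real-valued classifier scores and binary labels (the paper's SVM scores and data are not reproduced here):

```python
# Illustrative AUC computation: sweep the decision threshold over the
# sorted scores to trace the ROC curve, then integrate it with the
# trapezoidal rule.

def roc_auc(y_true, scores):
    """Area under the ROC curve for binary labels and real scores."""
    pairs = sorted(zip(scores, y_true), reverse=True)
    pos = sum(y_true)
    neg = len(y_true) - pos
    tp = fp = 0
    prev_fpr = prev_tpr = 0.0
    auc = 0.0
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        tpr, fpr = tp / pos, fp / neg
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2  # trapezoid strip
        prev_fpr, prev_tpr = fpr, tpr
    return auc

# Toy example: a perfect ranking of polyps above normal tissue.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # → 1.0
```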

83 citations

Journal ArticleDOI
01 Sep 2007
TL;DR: From the quantification of the results, two main advantages have been derived: higher average accuracy in the delineation of hypoechoic thyroid nodules, exceeding 91%, and faster convergence compared with the ACWE model.
Abstract: This paper presents a computer-aided approach for nodule delineation in thyroid ultrasound (US) images. The developed algorithm is based on a novel active contour model, named variable background active contour (VBAC), and incorporates the advantages of the level-set region-based active contour without edges (ACWE) model, offering noise robustness and the ability to delineate multiple nodules. Unlike classic active contour models, which are sensitive to intensity inhomogeneities, the proposed VBAC model considers information from variable background regions. VBAC has been evaluated on synthetic images, as well as on real thyroid US images. From the quantification of the results, two main advantages have been derived: 1) higher average accuracy in the delineation of hypoechoic thyroid nodules, exceeding 91%; and 2) faster convergence compared with the ACWE model.
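The core idea behind the ACWE family of models that VBAC builds on is a region-fitting energy: a contour is good when each region it separates is well described by its mean intensity. A minimal sketch of that fitting term, assuming a binary mask in place of a level-set function (VBAC's refinement, restricting the background term to variable local regions, is not reproduced here):

```python
# Piecewise-constant (Chan-Vese style) region-fitting energy sketch.
# Not the authors' code; a toy illustration of the term being minimised.

def region_fit_energy(img, mask):
    """Fitting energy of a 0/1 mask over a 2-D intensity image."""
    inside = [v for row_i, row_m in zip(img, mask)
              for v, m in zip(row_i, row_m) if m == 1]
    outside = [v for row_i, row_m in zip(img, mask)
               for v, m in zip(row_i, row_m) if m == 0]
    c1 = sum(inside) / len(inside)    # mean intensity inside the contour
    c2 = sum(outside) / len(outside)  # mean intensity outside
    return (sum((v - c1) ** 2 for v in inside)
            + sum((v - c2) ** 2 for v in outside))

# A mask aligned with a dark (hypoechoic-like) spot on a bright
# background has lower energy than a misaligned one.
img = [[9, 9, 9],
       [9, 2, 9],
       [9, 9, 9]]
good = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
bad  = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
print(region_fit_energy(img, good) < region_fit_energy(img, bad))  # → True
```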

75 citations

Journal ArticleDOI
TL;DR: The new solar radiospectrograph of the University of Athens operating at the Thermopylae Station since 1996 is presented, used either by itself to study the onset and evolution of solar radio bursts or in conjunction with other instruments including the Nançay Decametric Array and the WIND/WAVES RAD1 and RAD2 low-frequency receivers to study associated interplanetary phenomena.
Abstract: We present the new solar radiospectrograph of the University of Athens operating at the Thermopylae Station since 1996. Observations cover the frequency range from 110 to 688 MHz. The radiospectrograph has a 7-meter parabolic antenna and two receivers operating in parallel. One is a sweep frequency receiver and the other a multichannel acousto-optical receiver. The data acquisition system consists of a front-end VME-based subsystem and a Sun Sparc-5 workstation connected through Ethernet. The two subsystems are operated using the VxWorks real-time package. The daily operation is fully automated: pointing of the antenna to the Sun, starting and stopping the observations at pre-set times, data acquisition, data compression by 'silence suppression', and archiving on DAT tapes. The instrument can be used either by itself to study the onset and evolution of solar radio bursts or in conjunction with other instruments including the Nançay Decametric Array and the WIND/WAVES RAD1 and RAD2 low-frequency receivers to study associated interplanetary phenomena.
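The 'silence suppression' compression mentioned above can be pictured as keeping only the spectra that rise above the background, together with their timestamps so the record can be reconstructed. This is an assumption about how such a scheme works in general, not a description of the instrument's actual implementation:

```python
# Hypothetical sketch of silence suppression for spectral records:
# retain only frames whose peak power exceeds a threshold.

def suppress_silence(spectra, threshold):
    """Return (frame_index, spectrum) pairs whose peak exceeds threshold."""
    return [(i, s) for i, s in enumerate(spectra) if max(s) > threshold]

# Toy record: only the frames containing a burst survive compression.
frames = [[1, 2, 1], [1, 40, 3], [2, 1, 1], [5, 60, 8]]
kept = suppress_silence(frames, 10)
print([i for i, _ in kept])  # → [1, 3]
```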

59 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Journal ArticleDOI
TL;DR: This paper considered four distinct medical imaging applications in three specialties involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner.
Abstract: Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.

2,294 citations

Proceedings Article
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the incapability of existing descriptors to capture spatial relationships between the concepts represented, or by their incapability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
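A co-occurrence matrix of the kind proposed above records how often pairs of values appear at a fixed spatial offset. A minimal hypothetical version for grey levels and a single rightward offset (real grey-level co-occurrence matrices use several offsets and quantised levels):

```python
# Illustrative grey-level co-occurrence matrix (GLCM) sketch:
# count how often level i appears immediately to the left of level j.

def glcm_right(img, levels):
    """levels x levels count matrix for the (0, +1) horizontal offset."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):  # horizontally adjacent pairs
            m[a][b] += 1
    return m

# Toy 3x3 image quantised to 3 grey levels.
img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
print(glcm_right(img, 3))  # → [[1, 2, 0], [0, 0, 2], [0, 0, 1]]
```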

2,134 citations

Reference EntryDOI
15 Oct 2004

2,118 citations

Journal ArticleDOI
TL;DR: The WAVES investigation on the WIND spacecraft will provide comprehensive measurements of the radio and plasma wave phenomena that occur in Geospace; in coordination with the other onboard plasma, energetic particle, and field measurements, these will help us understand the kinetic processes that are important in the solar wind and in key boundary regions of Geospace.
Abstract: The WAVES investigation on the WIND spacecraft will provide comprehensive measurements of the radio and plasma wave phenomena which occur in Geospace. Analyses of these measurements, in coordination with the other onboard plasma, energetic particle, and field measurements, will help us understand the kinetic processes that are important in the solar wind and in key boundary regions of Geospace. These processes are then to be interpreted in conjunction with results from the other ISTP spacecraft in order to discern the measurements and parameters for mass, momentum, and energy flow throughout Geospace. This investigation will also contribute to observations of radio waves emitted in regions where the solar wind is accelerated. The WAVES investigation comprises several innovations in this kind of instrumentation, among them the first use, to our knowledge, of neural networks in real time on board a scientific spacecraft to analyze data and command observation modes, and the first use of a wavelet-transform-like analysis in real time to perform a spectral analysis of a broadband signal.

810 citations